jasonthorsness:
"Their new algorithm adapts to an adversary’s strategy, but on time scales that it picks randomly"

"Even though many real-world data settings are not adversarial, situations without an adversary can still sometimes involve sudden floods of data to targeted spots, she noted."

This is pretty neat. I bet this will find practical applications.

troelsSteegin:
Are "adversaries" broadly used in algorithm design? I've not seen that before. I'm used to edge cases and trying to break things, but an "adversary", especially white box, seems different.
dragontamer:
Really??

Quicksort, mergesort, and heapsort are commonly analyzed in terms of worst-case / adversarial inputs.

I know that binary trees (especially red-black trees, AVL trees, and other self-balancing trees) have been studied extensively under adversaries that pick the worst-case scenario.

And finally, error-correcting coding schemes / Hamming distances and other data-reliability mechanisms (ex: CRC32 checks) have proofs based on worst-case adversary bounds.
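To make the quicksort case concrete, here is a minimal Python sketch (my illustration, not from the thread): with a naive first-element pivot, already-sorted input is a static worst case, since every partition puts all remaining elements on one side and the comparison count grows to O(n^2).

    import random

    def quicksort(xs, count):
        # Naive quicksort with a first-element pivot; count[0] tracks comparisons.
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        left, right = [], []
        for x in rest:
            count[0] += 1
            (left if x < pivot else right).append(x)
        return quicksort(left, count) + [pivot] + quicksort(right, count)

    for name, data in [("random", random.sample(range(500), 500)),
                       ("sorted (worst case)", list(range(500)))]:
        count = [0]
        quicksort(data, count)
        print(f"{name}: {count[0]} comparisons")

The sorted run performs 500*499/2 = 124,750 comparisons, versus roughly n log n for the random one.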

-------

If anything, I'm struggling to think of a case where the adversary / worst-case performance is NOT analyzed. In many cases, worst-case bounds are easier to prove than average-case bounds... so I'd assume most people start with worst-case analysis before moving to average-case analysis.

rented_mule:
I think there's a distinction between worst-case and adversarial behavior.

For some types of problems, identifying worst-case behavior is straightforward. For example, in a hash table lookup the worst case is when all keys hash to the same bucket. To me, it seems like overkill to think in terms of an intelligent adversary in that case.
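That hash-table worst case is easy to demonstrate. A small Python sketch (mine, not from the thread) using a key type whose every instance hashes to the same bucket: each insert must then scan all prior colliding keys, so building the table degrades to O(n^2).

    import time

    class BadHash:
        # Pathological key: every instance lands in the same hash bucket.
        def __init__(self, x):
            self.x = x
        def __hash__(self):
            return 0
        def __eq__(self, other):
            return self.x == other.x

    for n in (500, 1000, 2000):
        t0 = time.perf_counter()
        d = {BadHash(i): i for i in range(n)}  # each insert probes all earlier keys
        print(f"n={n}: {time.perf_counter() - t0:.3f}s")  # roughly quadruples as n doubles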

But in the problem described here, the worst case is harder to construct, especially while exploring the solution space, since slight tweaks to a solution can significantly change the nature of its worst case. Thinking of it as adversarial means thinking in terms of algorithms that dynamically produce the worst case, rather than trying to identify a static worst case that is specific to one solution. I can imagine that approach significantly speeding up the search for better solutions.
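A classic example of exactly that kind of dynamic adversary is M. D. McIlroy's "A Killer Adversary for Quicksort" (1999): instead of fixing a bad input up front, the adversary decides element values lazily, during the comparisons themselves, so it defeats whatever pivot strategy the sort happens to use. A rough Python sketch of the idea (my paraphrase, with a toy quicksort; not code from the paper):

    class KillerAdversary:
        GAS = float("inf")  # marker: this element's value is not decided yet

        def __init__(self, n):
            self.val = [self.GAS] * n  # values, frozen only on demand
            self.candidate = 0         # element currently suspected to be the pivot
            self.nsolid = 0            # how many values have been frozen so far
            self.ncmp = 0              # comparison counter

        def cmp(self, i, j):
            self.ncmp += 1
            if self.val[i] == self.GAS and self.val[j] == self.GAS:
                # Both undecided: freeze the suspected pivot with the next-smallest
                # value so the partition around it comes out maximally lopsided.
                k = i if i == self.candidate else j
                self.val[k] = self.nsolid
                self.nsolid += 1
            # An element repeatedly compared while undecided is likely the pivot.
            if self.val[i] == self.GAS:
                self.candidate = i
            elif self.val[j] == self.GAS:
                self.candidate = j
            return (self.val[i] > self.val[j]) - (self.val[i] < self.val[j])

    def quicksort(items, cmp):
        # Toy quicksort over element indices, using the adversary as comparator.
        if len(items) <= 1:
            return items
        pivot, rest = items[0], items[1:]
        left, right = [], []
        for x in rest:
            (left if cmp(x, pivot) < 0 else right).append(x)
        return quicksort(left, cmp) + [pivot] + quicksort(right, cmp)

    n = 400
    adv = KillerAdversary(n)
    quicksort(list(range(n)), adv.cmp)
    print(f"n={n}: {adv.ncmp} comparisons")  # ~n*(n-1)/2, yet no input was fixed in advance

The point of the construction is that the adversary commits to values only as the sort observes them, which is why the same trick also defeats median-of-three and similar pivot heuristics.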

dragontamer:
> Thinking of it as adversarial means thinking in terms of algorithms that dynamically produce the worst case, rather than trying to identify a static worst case that is specific to one solution.

I think your statement makes sense for, say, Quicksort or simple binary trees. There, the worst-case scenario is a "simple" reversed list (ex: sorting [5 4 3 2 1] into [1 2 3 4 5]).

The worst-case insertion into an AVL-balanced tree, however, is a "Fibonacci tree". AVL trees have the strange property that sorted lists like [1 2 3 4 5 6 7] or [7 6 5 4 3 2 1] actually lead to optimal balancing. A worst-case insertion sequence for an AVL tree is something like [1 2 3 4 5 1.5 6] (the 1.5 prevents the far-left subtree from being perfectly balanced, and the 6 then further unbalances the far-right branches).
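For a sense of how sparse that worst case is, here is a small Python sketch (mine, not from the thread) of the standard counting argument: a Fibonacci tree of height h is the AVL tree with the fewest possible nodes, N(h) = N(h-1) + N(h-2) + 1, which is why the worst-case AVL height approaches about 1.44 * log2(n).

    import math

    def min_nodes(h):
        # Fewest nodes in any AVL tree of height h (a Fibonacci tree):
        # a root plus one worst-case subtree of each allowed height.
        a, b = 1, 2  # N(0) = 1, N(1) = 2
        for _ in range(h):
            a, b = b, a + b + 1
        return a

    for h in range(5, 30, 5):
        n = min_nodes(h)
        print(f"height {h}: needs >= {n} nodes  (h / log2(n) = {h / math.log2(n):.2f})")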

Some algorithms have very non-intuitive worst-case scenarios.