
688 points | samwho
ryeats No.45019141
O(1) in many cases involves a hashing function, which has a non-trivial but constant cost. For smaller values of N it can be outperformed in terms of wall-clock time by n^2 worst-case algorithms.
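A rough sketch of that trade-off in Python (not from the comment): it times membership tests on a tiny, made-up collection. Which side wins at small n depends on the runtime and key type, so read it as a way to measure rather than a claim about the outcome.

```python
import timeit

# Tiny, made-up collection: a linear scan over a list vs. a hash-based set.
# The set lookup is O(1), but paying to hash the key on every lookup is a
# constant, non-zero cost; the list scan is O(n) with almost no overhead.
small_list = [f"key{i}" for i in range(12)]
small_set = set(small_list)

scan = timeit.timeit(lambda: "key9" in small_list, number=1_000_000)
hashed = timeit.timeit(lambda: "key9" in small_set, number=1_000_000)

# Which number is smaller depends on the machine, runtime, and key type;
# the point is only that O(1) is "constant", not "free".
print(f"linear scan: {scan:.3f}s  hash set: {hashed:.3f}s")
```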
svara No.45019209
I mean, true obviously, but don't say that too loud lest people get the wrong idea. For most practical purposes, n^2 means the computer stops working here. Getting people to understand that is hard enough already ;)

Besides, often you're lucky and there's a trivial perfect hash like modulo.
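A minimal sketch of that lucky case, with made-up dense integer keys: when the keys form a contiguous range and the table size matches the range, `key % table_size` assigns every key its own slot.

```python
# Made-up dense integer keys, e.g. IDs 1000..1999. With a table of size 1000,
# key % table_size maps each key to a distinct slot, so modulo acts as a
# perfect hash here: no collisions, no probing, no chaining.
keys = range(1000, 2000)
table_size = 1000
table = [None] * table_size

for k in keys:
    slot = k % table_size
    assert table[slot] is None   # never fires for a dense, contiguous range
    table[slot] = k

print(table[1234 % table_size])  # O(1) lookup with no collision handling
```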

b52_ No.45019368
What do you mean? Modulo is not a perfect hash function... What if your hash table has size 11 and you hash the two keys 22 and 33? Both land in slot 0.
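For concreteness, the collision being described:

```python
# With a table of size 11, the keys 22 and 33 both reduce to slot 0,
# so plain modulo is not a perfect hash for arbitrary keys.
table_size = 11
print(22 % table_size, 33 % table_size)  # 0 0 -> collision
```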

I also don't understand your first point. We can run n^2 algorithms on massive inputs, given it's just a polynomial. Are you thinking of 2^n perhaps?

LPisGood No.45020015
n^2 algorithms on _massive_ inputs seem a little far-fetched, no?

Around one to one hundred billion, things start getting difficult.
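A back-of-the-envelope sketch of why, assuming very roughly 10^9 simple operations per second (a ballpark figure, not a measurement):

```python
# Estimated wall-clock time for an n^2 algorithm, assuming ~1e9 simple
# operations per second. Only an order-of-magnitude estimate.
OPS_PER_SECOND = 1e9

for n in (1_000, 100_000, 1_000_000, 100_000_000):
    seconds = n * n / OPS_PER_SECOND
    print(f"n = {n:>11,}: ~{seconds:,.3f} s")

# Example output:
#   n =       1,000: ~0.001 s
#   n =     100,000: ~10.000 s
#   n =   1,000,000: ~1,000.000 s      (about 17 minutes)
#   n = 100,000,000: ~10,000,000.000 s (about 4 months)
```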

vlovich123 No.45022269
The challenge with big-O is that you don't know how many elements result in what kind of processing time, because you don't have a baseline of performance on 1 element. So under n^2 scaling, if processing 1 element takes 10 seconds, then 10 elements would take 100x as long, about 16 minutes.

In practice, n^2 sees surprising slowdowns way before that: in the 10k-100k range you could already be spending minutes of processing time, and with a 10ms per-element baseline it only takes ~77 elements to hit 1 minute.
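Writing out the arithmetic from this comment (the 10 second and 10 ms per-element baselines are the hypothetical figures above, not measurements):

```python
import math

# T(n) = baseline * n^2, calibrated so that T(1) equals the per-element baseline.
def quadratic_time(baseline_seconds: float, n: int) -> float:
    return baseline_seconds * n * n

# 10 s for 1 element -> 10 elements take 100x as long, roughly 16-17 minutes.
print(quadratic_time(10, 10) / 60)            # ~16.7 (minutes)

# With a 10 ms baseline, the largest n that fits in a one-minute budget:
budget_seconds = 60
baseline_seconds = 0.010
n_max = math.floor(math.sqrt(budget_seconds / baseline_seconds))
print(n_max)                                  # 77
```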