    688 points by samwho | 16 comments
    1. ryeats ◴[] No.45019141[source]
    O(1) in many cases involves a hashing function, which is a non-trivial but constant cost. For smaller values of N it can be outperformed in terms of wall-clock time by n^2 worst-case algorithms.
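
    To make that constant factor concrete, here is a rough sketch of the trade-off (illustrative only; the types, sizes and the crossover point are assumptions, not measurements):

      // For small N, a linear scan over a contiguous vector is often faster in
      // wall-clock terms than an unordered_map lookup, because hashing plus
      // bucket indirection is a fixed cost paid on every lookup.
      #include <string>
      #include <unordered_map>
      #include <utility>
      #include <vector>

      int find_linear(const std::vector<std::pair<std::string, int>>& v,
                      const std::string& key) {
          for (const auto& kv : v)          // O(N), but contiguous and branch-friendly
              if (kv.first == key) return kv.second;
          return -1;
      }

      int find_hashed(const std::unordered_map<std::string, int>& m,
                      const std::string& key) {
          auto it = m.find(key);            // O(1) amortized, but hashes the whole key
          return it == m.end() ? -1 : it->second;
      }

    For N in the low dozens the linear scan frequently wins; where the crossover sits depends on the key type and the hardware, so benchmark on your own data.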
    replies(2): >>45019209 #>>45025666 #
    2. svara ◴[] No.45019209[source]
    I mean, true obviously, but don't say that too loud lest people get the wrong ideas. For most practical purposes, n^2 means the computer stops working. Getting people to understand that is hard enough already ;)

    Besides, often you're lucky and there's a trivial perfect hash like modulo.

    replies(3): >>45019368 #>>45022282 #>>45023070 #
    3. b52_ ◴[] No.45019368[source]
    What do you mean? Modulo is not a perfect hash function... What if your hash table has size 11 and you hash the two keys 22 and 33?

    I also don't understand your first point. We can run n^2 algorithms on massive inputs given it's just a polynomial. Are you thinking of 2^n perhaps?

    replies(3): >>45020015 #>>45021292 #>>45057972 #
    4. LPisGood ◴[] No.45020015{3}[source]
    n^2 algorithms on _massive_ inputs seem a little far-fetched, no?

    Around one to one hundred billion, things start getting difficult.

    replies(2): >>45022269 #>>45037966 #
    5. Panzer04 ◴[] No.45021292{3}[source]
    n^2 is probably the worst offender among the complexity classes - it's fast enough to get into production and slow enough to blow up once you actually start using it.

    Rockstar infamously wasted five minutes loading GTAV because they had an n^2 algorithm in the startup sequence.
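
    The usual way this happens is an innocent-looking helper that re-scans the input on every iteration. A hypothetical sketch of the shape of that bug (not the actual GTA code):

      // Re-measuring the whole remaining buffer for every token turns a single
      // pass over an n-byte input into O(n^2) total work.
      #include <cstddef>
      #include <cstring>
      #include <string>
      #include <vector>

      std::vector<std::string> parse_tokens(const char* buf) {
          std::vector<std::string> out;
          std::size_t pos = 0;
          while (buf[pos] != '\0') {
              std::size_t remaining = std::strlen(buf + pos);  // walks everything left, every time
              std::size_t end = 0;
              while (end < remaining && buf[pos + end] != ',') ++end;
              out.emplace_back(buf + pos, end);
              pos += end + (buf[pos + end] == ',' ? 1 : 0);
          }
          return out;
      }
      // Fix: measure the length once up front, or just stop at ',' / '\0' directly.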

    6. vlovich123 ◴[] No.45022269{4}[source]
    The challenge with big-O is that you don't know how many elements result in what kind of processing time, because you don't have a baseline for the performance on 1 element. So if processing 1 element takes 10 seconds, then with an n^2 algorithm 10 elements would take about 16 minutes.

    In practice, n^2 sees surprising slowdowns way before that: in the 10k-100k range you could be spending minutes of processing time (a 10ms-per-element baseline would only need ~77 elements to hit 1 minute).
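
    As a throwaway sketch of that arithmetic (just the two baselines from this comment; real constants vary wildly):

      // Back-of-the-envelope scaling for an O(n^2) algorithm given a measured
      // time for a single element. Purely illustrative arithmetic.
      #include <cstdio>

      double quadratic_seconds(double seconds_for_one, double n) {
          return seconds_for_one * n * n;   // t(n) ~= t(1) * n^2
      }

      int main() {
          std::printf("10 s per element, n = 10: %.0f s (~16.7 min)\n",
                      quadratic_seconds(10.0, 10));
          std::printf("10 ms per element, n = 77: %.1f s (~1 min)\n",
                      quadratic_seconds(0.010, 77));
      }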

    7. arp242 ◴[] No.45022282[source]
    > true obviously

    Well, it doesn't seem that obvious, at least not to everyone, because several times I've seen people rewrite things to make it O(1) when it was actually slower for the vast majority of cases (or even all cases).

    replies(1): >>45022525 #
    8. branko_d ◴[] No.45022525{3}[source]
    Even if it's slower for the vast majority of cases, if there are rare cases where the computer would otherwise freeze, that's still a win.
    replies(1): >>45022572 #
    9. arp242 ◴[] No.45022572{4}[source]
    Your computer will not freeze if your input is small enough, no matter what big-O says. In many cases it's guaranteed to be small or small-ish.
    replies(1): >>45028418 #
    10. whatever1 ◴[] No.45023070[source]
    N^2 is just two nested loops. It trivially shows up everywhere you have 2D arrays.

    Do I really need to make my code spaghetti if the array sizes are small enough not to impact my speed?

    replies(2): >>45024221 #>>45025348 #
    11. lurking_swe ◴[] No.45024221{3}[source]
    if you can guarantee N is small, then it’s probably not an issue.
    12. Anon1096 ◴[] No.45025348{3}[source]
    Iterating over a 2D array using nested loops is an O(N) operation. N is the size of your data; how it's laid out in an array is irrelevant.
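
    A quick illustration of that point (made-up sizes, nothing from the thread):

      // Two nested loops over an R x C grid visit each of the N = R * C cells
      // exactly once -- O(N) in the size of the data, not O(N^2).
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      int main() {
          const std::size_t rows = 1000, cols = 2000;   // N = 2,000,000 cells
          std::vector<std::vector<int>> grid(rows, std::vector<int>(cols, 1));
          long long sum = 0;
          for (std::size_t r = 0; r < rows; ++r)
              for (std::size_t c = 0; c < cols; ++c)
                  sum += grid[r][c];                    // one visit per cell
          std::printf("visited %zu cells, sum = %lld\n", rows * cols, sum);
      }
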
    13. namibj ◴[] No.45025666[source]
    Arguably this is just a fairly poor way of thinking for many practical applications. Notably, memory access latency (or inverse throughput) is roughly between O(log(n)) [ https://news.ycombinator.com/item?id=12385472 ] and O(n^(1/2)) [ https://news.ycombinator.com/item?id=12383275 ] (the latter aka O(sqrt(n))).

    For example, in applications where the sorted form can be readily maintained, a decent B+-tree tends to massively outperform a hashmap as soon as you get the streamed/non-indexed side (of what is effectively a lookup join) to opportunistically batch items:

    Once you sort your lookups, you can use exponential forward search in the index for each next key: compare at exponentially increasing distances from the cursor, and once you're past the target, run a binary search between that probe (now an upper bound) and the previous probe point (the lower bound). This reduces the per-lookup cost to be logarithmic in the distance from the last-looked-up key, which is asymptotically never worse than isolated lookups and in practice caps out at around 2x the cost in pathological cases if you respect the page locality of B+-tree structures. If you also stay aware of your LRU cache set while doing this, you get by with significantly fewer memory accesses to fresh DRAM pages (let alone NAND pages) than with a hashmap setup.
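
    A sketch of the exponential-forward-search part over a plain sorted array (simplified: a real B+-tree version walks leaf pages instead, and all names here are made up):

      // Galloping/exponential search from a cursor in a sorted array: probe at
      // exponentially growing offsets past the cursor until we overshoot the key,
      // then binary-search the bracketed range. Cost is O(log(distance from the
      // cursor)). Assumes the batch of lookup keys is processed in ascending order.
      #include <algorithm>
      #include <cstddef>
      #include <vector>

      // Index of the first element >= key, searching forward from `cursor`.
      std::size_t exponential_lower_bound(const std::vector<long>& index,
                                          std::size_t cursor, long key) {
          if (cursor >= index.size()) return index.size();
          std::size_t lo = cursor, hi = cursor, step = 1;
          while (hi < index.size() && index[hi] < key) {
              lo = hi;                 // last probe still below the key
              hi = cursor + step;      // next probe, exponentially further out
              step *= 2;
          }
          hi = std::min(hi, index.size());
          return std::lower_bound(index.begin() + lo, index.begin() + hi, key)
                 - index.begin();
      }

      // Usage: sort the lookup keys, then advance a single cursor through the index:
      //   std::size_t cur = 0;
      //   for (long k : sorted_keys) cur = exponential_lower_bound(index, cur, k);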

    I've significantly sped up a C++ program by replacing an `std::unordered_map` with a `std::vector` (iirc technically a pair of them, for SoA purposes) after realizing that I could collect the items unordered, sort the structure, and then use binary search instead of hashmap lookups. The function I modified ran something like 3x faster as a result, and that's without anything like vectorization-friendly structures.
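
    Roughly the shape of that change, reconstructed from the description (the names and types are made up, not the actual code):

      // Collect unordered, sort once, then binary-search instead of hashing.
      // Pair of parallel vectors (SoA-style), as hinted at in the comment.
      #include <algorithm>
      #include <cstddef>
      #include <cstdint>
      #include <optional>
      #include <vector>

      struct SortedLookup {
          std::vector<std::uint64_t> keys;    // sorted after finalize()
          std::vector<int>           values;  // values[i] belongs to keys[i]

          void add(std::uint64_t k, int v) { keys.push_back(k); values.push_back(v); }

          void finalize() {
              // Sort an index permutation, then apply it to both arrays in one pass.
              std::vector<std::size_t> order(keys.size());
              for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
              std::sort(order.begin(), order.end(),
                        [&](std::size_t a, std::size_t b) { return keys[a] < keys[b]; });
              std::vector<std::uint64_t> k2(keys.size());
              std::vector<int> v2(values.size());
              for (std::size_t i = 0; i < order.size(); ++i) {
                  k2[i] = keys[order[i]];
                  v2[i] = values[order[i]];
              }
              keys = std::move(k2);
              values = std::move(v2);
          }

          std::optional<int> find(std::uint64_t k) const {
              auto it = std::lower_bound(keys.begin(), keys.end(), k);
              if (it == keys.end() || *it != k) return std::nullopt;
              return values[it - keys.begin()];
          }
      };

    The sort is paid once up front; after that every lookup is a binary search over contiguous memory, which is typically where wins like the one described come from.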

    14. branko_d ◴[] No.45028418{5}[source]
    > In many cases it's guaranteed to be small or small-ish.

    And in many cases it's assumed to be small, but not guaranteed. That's where the trouble lies.

    A classic example is Windows using a quadratic algorithm for arranging desktop icons that caused severe performance issues when users had many icons on their desktop.

    https://github.com/microsoft/Windows-Dev-Performance/issues/...

    15. b52_ ◴[] No.45037966{4}[source]
    I'd argue that one billion is fairly large for what most people work on. But yes, point taken.
    16. svara ◴[] No.45057972{3}[source]
    > What do you mean? Modulo is not a perfect hash function

    It's a perfect hash function for the case where you work with ints and know the min-max range beforehand; then you can modulo by the size of the range, as long as it's not too large. In your example, the range 22..33 spans 33 - 22 + 1 = 12 values, so mod 12 works (22 -> slot 10, 33 -> slot 9).
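
    A sketch of that setup (hypothetical code, assuming int keys with known bounds as described):

      // Modulo as a perfect hash for integer keys in a known range [lo, hi]:
      // the range spans hi - lo + 1 consecutive values, so key % range_size
      // maps each possible key to a distinct slot -- no collisions, no probing.
      #include <cstdio>
      #include <vector>

      int main() {
          const int lo = 22, hi = 33;             // known bounds (the 22/33 example)
          const int range = hi - lo + 1;          // 12 possible keys
          std::vector<int> table(range, -1);      // direct-mapped table, -1 = empty

          auto slot = [&](int key) { return key % range; };
          table[slot(22)] = 1;                    // 22 % 12 == 10
          table[slot(33)] = 2;                    // 33 % 12 == 9
          std::printf("22 -> slot %d, 33 -> slot %d\n", slot(22), slot(33));
      }

    (When both bounds are known you could just as well index by key - lo; modulo by the range size works here because the keys form a block of consecutive integers.)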

    This comes up for me surprisingly often but obviously depends a lot on what you work on. It is often tempting to reach for a hashtable, but it's a lot less efficient in this case.