
688 points samwho | 4 comments
ryeats ◴[] No.45019141[source]
O(1) in many cases involves a hashing function, which is a non-trivial but constant cost. For smaller values of n, it can be outperformed in wall-clock time by algorithms with O(n^2) worst cases.
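To make that constant factor concrete, here's a rough sketch (my own, not the commenter's benchmark, and not rigorous): it times a linear scan over a small list against a hash-set lookup. Whether the scan wins at tiny n depends on the runtime and data, but the crossover is real.

    import timeit

    for n in (4, 16, 64, 256):
        keys = list(range(n))
        table = set(keys)   # hash-based: O(1) average lookup
        probe = n - 1       # worst case for the linear scan

        scan = timeit.timeit(lambda: probe in keys, number=100_000)
        hashed = timeit.timeit(lambda: probe in table, number=100_000)
        print(f"n={n:4d}  scan: {scan:.4f}s  hash: {hashed:.4f}s")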
replies(2): >>45019209 #>>45025666 #
svara ◴[] No.45019209[source]
I mean, true, obviously, but don't say that too loudly lest people get the wrong idea. For most practical purposes, n^2 means "computer stops working here". Getting people to understand that is hard enough already ;)

Besides, you're often lucky and there's a trivial perfect hash, like taking the key modulo the table size.
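A minimal sketch of that trivial case (the keys and table size here are made up): when the keys are known to be distinct modulo the table size, key % size is already a perfect hash and the "table" collapses to an array index.

    SIZE = 7  # hypothetical: keys below are known to be distinct mod 7

    table = [None] * SIZE

    def put(key, value):
        # No collision handling: by assumption no two keys share key % SIZE.
        table[key % SIZE] = value

    def get(key):
        return table[key % SIZE]

    put(3, "three")
    put(11, "eleven")  # 11 % 7 == 4, distinct from 3 % 7 == 3
    assert get(3) == "three" and get(11) == "eleven"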

replies(3): >>45019368 #>>45022282 #>>45023070 #
1. arp242 ◴[] No.45022282[source]
> true, obviously

Well, it doesn't seem that obvious, at least not to everyone: several times I've seen people rewrite something to make it O(1) when it was actually slower for the vast majority of cases (or even all of them).

replies(1): >>45022525 #
2. branko_d ◴[] No.45022525[source]
Even if it's slower for the vast majority of cases, avoiding the rare cases where the computer would otherwise freeze is still a win.
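One common way to get both is a size cutoff. This is a sketch of the general pattern with a made-up threshold, not any particular library's implementation: scan linearly while the map is tiny, migrate to a hash table once it grows, so the rare large input can't go quadratic.

    CUTOFF = 32  # hypothetical threshold; real values come from benchmarking

    class SmallMap:
        def __init__(self):
            self._pairs = []   # below CUTOFF: plain list, linear scan
            self._dict = None  # above CUTOFF: real hash table

        def put(self, key, value):
            if self._dict is not None:
                self._dict[key] = value
                return
            for i, (k, _) in enumerate(self._pairs):
                if k == key:
                    self._pairs[i] = (key, value)
                    return
            self._pairs.append((key, value))
            if len(self._pairs) > CUTOFF:
                self._dict = dict(self._pairs)  # one-time migration
                self._pairs = []

        def get(self, key, default=None):
            if self._dict is not None:
                return self._dict.get(key, default)
            for k, v in self._pairs:
                if k == key:
                    return v
            return default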
replies(1): >>45022572 #
3. arp242 ◴[] No.45022572[source]
Your computer will not freeze if your input is small enough, no matter what big-O says. And in many cases the input is guaranteed to be small, or small-ish.
replies(1): >>45028418 #
4. branko_d ◴[] No.45028418{3}[source]
> In many cases it's guaranteed to be small or small-ish.

And in many cases it's assumed to be small, but not guaranteed. That's where the trouble lies.

A classic example is Windows using a quadratic algorithm for arranging desktop icons, which caused severe performance issues for users with many icons on their desktop.

https://github.com/microsoft/Windows-Dev-Performance/issues/...
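The actual Windows code isn't shown in that issue, but the generic shape of an accidentally quadratic layout loop looks something like this sketch (all names made up): per-icon work that itself rescans every previously placed icon.

    def arrange_icons(icon_names):
        placed = []
        for name in icon_names:                     # n iterations...
            occupied = {p["slot"] for p in placed}  # ...each rescanning all
            slot = 0                                #    placed icons: O(n),
            while slot in occupied:                 #    so O(n^2) overall
                slot += 1
            placed.append({"name": name, "slot": slot})
        return placed

    # Fine for 20 icons, painful for 2000; tracking the next free slot
    # incrementally (a single counter) brings this back to O(n).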