
688 points | samwho
IceDane No.45016742
Every time I see an introduction to Big-O notation, I immediately go look for a section about the worst case to find the most common, fundamental misconception that is present in nearly every such text.

Lo and behold:

> Because it's common for an algorithm's performance to depend not just on the size of the input, but also its arrangement, big O notation always describes the worst-case scenario.

This is not true. I'm on my phone, so I can't be bothered to explain this in more detail, but there is plenty of information about this online.

replies(4): >>45016873, >>45016884, >>45016889, >>45017375
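
A small illustration of the point above (my own sketch, not taken from the article or the thread): insertion sort's running time depends on how the input is arranged, and big-O statements can be made about the best case and the worst case separately, so the notation itself is not tied to the worst case.

    def insertion_sort_comparisons(items):
        """Sort a copy of items and return how many comparisons were made."""
        a = list(items)
        comparisons = 0
        for i in range(1, len(a)):
            j = i
            while j > 0:
                comparisons += 1
                if a[j - 1] > a[j]:
                    a[j - 1], a[j] = a[j], a[j - 1]
                    j -= 1
                else:
                    break
        return comparisons

    for n in (1000, 2000, 4000):
        best = insertion_sort_comparisons(range(n))          # sorted input: about n comparisons, O(n)
        worst = insertion_sort_comparisons(range(n, 0, -1))  # reversed input: about n^2/2 comparisons, O(n^2)
        print(f"n={n}: best case {best} comparisons, worst case {worst} comparisons")

Doubling n roughly doubles the best-case count but roughly quadruples the worst-case count; both "O(n) in the best case" and "O(n^2) in the worst case" are legitimate uses of the notation.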
tromp No.45017375
O() doesn't necessarily describe worst-case behaviour. It just provides an asymptotic upper bound. So a quadratic sorting algorithm can still be said to be O(n^3), although that might be a little misleading.
replies(2): >>45017466, >>45022188
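
For reference, the standard textbook definition behind this comment (standard material, not quoted from the thread), which makes clear that O gives an upper bound with no reference to any particular case:

    f(n) = O\bigl(g(n)\bigr) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ 0 \le f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0

Since n^2 <= n^3 for all n >= 1, a quadratic running time is indeed O(n^3); the statement is true, just not tight.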
didibus No.45022188
In math that's true, but in practice software engineers use it to mean the tightest upper bound they can easily prove for the case they care about (worst, amortized, average, etc.), not necessarily the actual tightest possible bound.
replies(2): >>45022710, >>45038283
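
To illustrate the parenthetical list of cases (my own sketch, with a deliberately simplified cost model): a doubling dynamic array has worst-case O(n) cost for a single append but amortized O(1) cost per append, so which bound you quote depends on which case you are analysing.

    class DynamicArray:
        """Toy doubling array that counts element copies as its cost model."""
        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.copies = 0  # total elements copied during resizes

        def append(self, value):
            if self.size == self.capacity:
                self.capacity *= 2         # resize: every existing element is copied
                self.copies += self.size   # this single append costs O(n)
            self.size += 1

    arr = DynamicArray()
    n = 1_000_000
    for i in range(n):
        arr.append(i)

    # Total copy work stays below 2n, so the amortized cost per append is O(1)
    # even though an individual append can cost O(n).
    print(f"{n} appends, {arr.copies} copies, {arr.copies / n:.2f} copies per append")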
account42 No.45038283
No, a tight bound would be \Theta(g(x)), i.e. one that is both O(g(x)) and \Omega(g(x)).
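
For completeness, the standard definitions of the related notations (again textbook material, not quoted from the thread):

    f(n) = \Omega\bigl(g(n)\bigr) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0

    f(n) = \Theta\bigl(g(n)\bigr) \iff f(n) = O\bigl(g(n)\bigr) \ \text{and} \ f(n) = \Omega\bigl(g(n)\bigr)

So \Omega on its own is only a lower bound; a bound that is asymptotically tight is \Theta.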