Every time I see an introduction to Big-O notation, I immediately look for the section about the worst case, because that is where the most common and fundamental misconception shows up, and it is present in nearly every such text.
Lo and behold:
> Because it's common for an algorithm's performance to depend not just on the size of the input, but also its arrangement, big O notation always describes the worst-case scenario.
This is not true. Big-O notation denotes an asymptotic upper bound on a function; which function you bound (best-case, average-case, or worst-case running time) is a separate choice. Insertion sort, for example, runs in O(n) time in the best case and O(n^2) in the worst case, and both of those statements use big-O. I'm on my phone, so I can't go into more detail, but there is plenty of information about this online.
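To make this concrete, here's a minimal sketch in Python (the example and names are mine, not from the quoted text): big-O applies to whichever case function you choose to analyze.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, item in enumerate(items):
        if item == target:
            return i  # best case: target is first, so T_best(n) = O(1)
    return -1  # worst case: target is absent, so T_worst(n) = O(n)

# Big-O bounds a *function*; which function we bound is up to us:
#   T_best(n)  = O(1)   <- a big-O bound describing the best case
#   T_worst(n) = O(n)   <- a big-O bound describing the worst case
# Saying "big O always describes the worst case" conflates the notation
# (an asymptotic upper bound) with the case being analyzed.
```

Relatedly, when people want a tight bound rather than just an upper bound, big-Theta is the appropriate notation.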