
688 points samwho | 3 comments
fracus No.45018314
Having gone through electrical engineering, I always felt that Big O notation was never properly explained, unless I happened to miss it every time it was introduced. It always felt like it was just hand-waved as something we should already know. I'm curious what level of math or computer science is usually responsible for introducing it.
replies(7): >>45018615 >>45018641 >>45019080 >>45020045 >>45021112 >>45027527 >>45029029
bongodongobob No.45019080
Comp sci 101. Learned it my first year. There really isn't anything to it. It just describes how the number of operations grows as the number of inputs to your algorithm increases. It looks scary, but it's very simple and straightforward.
replies(2): >>45020265 >>45023496
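The "counting operations" framing can be made concrete with a small sketch. The function names and the step-counting approach here are illustrative, not from the thread:

```python
def linear_steps(items):
    # Linear scan visits each element once: O(n) operations.
    steps = 0
    for _ in items:
        steps += 1
    return steps

def quadratic_steps(items):
    # All-pairs comparison visits n * n combinations: O(n^2) operations.
    steps = 0
    for _ in items:
        for _ in items:
            steps += 1
    return steps

for n in (10, 100, 1000):
    data = list(range(n))
    # As n grows 10x, linear steps grow 10x while quadratic steps grow 100x.
    print(n, linear_steps(data), quadratic_steps(data))
```

Big O just names those growth shapes: double the input, and the O(n) count doubles while the O(n^2) count quadruples.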
1. Sankozi No.45023496
You are commenting under a blog post that tried to explain it and got it wrong. It is not simple.

Examples:

- an algorithm with O(2^n) complexity might be faster for your problem than an O(1) algorithm

- the same algorithm may be O(2^n) or O(n³) depending on how you define the size function

This is not straightforward.
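The first bullet is easy to demonstrate with made-up cost functions (the fixed cost of 1_000_000 below is an arbitrary assumption for illustration): an O(1) algorithm with a large constant loses to an O(2^n) algorithm for small n.

```python
def constant_cost(n):
    # "O(1)": cost never depends on n, but the constant is huge.
    return 1_000_000

def exponential_cost(n):
    # O(2^n): tiny for small n, then explodes.
    return 2 ** n

# The exponential algorithm is cheaper right up to n = 19,
# since 2**19 = 524288 < 1_000_000 <= 2**20.
for n in (5, 19, 20, 30):
    winner = "exp" if exponential_cost(n) < constant_cost(n) else "const"
    print(n, winner)
```

Big O only promises that the O(1) algorithm wins *eventually*; if your inputs never get past n ≈ 19, the "worse" algorithm is the right choice.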

replies(2): >>45023802 >>45023898
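The second bullet is the classic pseudo-polynomial situation. A sketch using subset sum (this specific example is mine, not from the thread): the DP below does O(len(nums) · target) work, which is polynomial if "size" means the numeric value of target, but exponential if "size" means the number of bits needed to write target down.

```python
def subset_sum(nums, target):
    # Classic DP over reachable sums: at most target + 1 distinct sums
    # are ever tracked, so the work is O(len(nums) * target).
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5 = 9
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False: no subset sums to 30
```

Doubling target's value doubles the work, and adding one bit to target also doubles the work: the same algorithm, counted against two different size functions, lands in two different complexity classes.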
2. bongodongobob No.45023802
It is though.
3. fxwin No.45023898
That's why he said it describes how runtime grows as input size changes, not how it behaves for specific values.