688 points samwho | 29 comments
1. fracus ◴[] No.45018314[source]
Having gone through electrical engineering, I always felt that Big O notation was never properly explained, unless I happened to miss it every time it was introduced. It always felt like it was just hand-waved as something we should already know. I'm curious what level of math or computer science is usually responsible for introducing it.
replies(7): >>45018615 #>>45018641 #>>45019080 #>>45020045 #>>45021112 #>>45027527 #>>45029029 #
2. daemonologist ◴[] No.45018615[source]
My university had an "algorithm analysis" class (required for CS majors) which covered it along with various kinds of proofs. That was a junior/senior year class though; I think it was really expected that you were already somewhat familiar with the concept by the end of your first year (via osmosis I guess).
3. leni536 ◴[] No.45018641[source]
A function f(x) is said to be O(g(x)) if f(x)/g(x) is bounded, that is, there is some C such that for every x, f(x)/g(x) < C.

In computer science f(x) is often some complexity function, like number of some specific operations when running an algorithm to completion.
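
A quick worked example (my own, taking x ≥ 1 as the domain): f(x) = 3x² + 5x is O(x²), since f(x)/x² = 3 + 5/x ≤ 8 for every x ≥ 1, so C = 8 works.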

replies(2): >>45020166 #>>45020570 #
4. bongodongobob ◴[] No.45019080[source]
Comp sci 101. Learned it my first year. There really isn't anything to it. It just describes how the number of operations grows as the number of inputs to your algorithm increases. It looks scary, but it's very simple and straightforward.
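
To make that concrete, a toy Python sketch (my own illustration, not from the article) that just counts operations as n grows:

    # one pass over the input: the operation count grows linearly, O(n)
    def linear_ops(n):
        return sum(1 for _ in range(n))

    # all pairs of inputs: the operation count grows quadratically, O(n^2)
    def quadratic_ops(n):
        return sum(1 for _ in range(n) for _ in range(n))

    for n in (10, 100, 1000):
        print(n, linear_ops(n), quadratic_ops(n))

Multiplying the input by 10 multiplies the work by 10 for the linear version and by 100 for the quadratic one.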
replies(2): >>45020265 #>>45023496 #
5. LPisGood ◴[] No.45020045[source]
That’s because you could define it in a lot of different ways. For example, if you define it as the number of steps it takes some Turing machine representing some algorithm to halt, then there is no such thing as a logarithmic-time algorithm: O(lg n) = O(1).
replies(1): >>45020811 #
6. mvdtnz ◴[] No.45020166[source]
What an asshole way of explaining it. For practical purposes (don't @ me if this is technically wrong, I don't give a shit), it's a description of how the cost of an algorithm grows with the size of the input. For example, if the cost doubles every time the input increments, then it's O(n^2). If the cost increases in step with the input size, then it's O(n). Simples.
replies(4): >>45020348 #>>45020521 #>>45021650 #>>45023105 #
7. nwatson ◴[] No.45020265[source]
It can get a lot more complicated though. At times there are more parameters to an algorithm's complexity than just `n`. E.g., the parameters for big-O might be `n`, `K`, and `ρ`, and some expressions in the big-O might involve combinations of those parameters.

In such cases as well, one might know for a specific application of the algorithm that, for example, `ρ` is bounded, and so the big-O becomes more influenced by the `n` and `K`.
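
For instance (a made-up shape, just to illustrate): a bound like O(n·K + ρ·n·log n) involves all three parameters, but if the application guarantees ρ ≤ 3, the ρ term folds into the constant factor and the bound effectively becomes O(n·K + n·log n).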

replies(1): >>45021271 #
8. lern_too_spel ◴[] No.45020348{3}[source]
If the cost doubles for each increment in the input size, it's O(2^n).
replies(1): >>45020563 #
9. tauroid ◴[] No.45020521{3}[source]
Okay, unless you're doing numerical analysis, where they use it for the growth rate of the error term, which has nothing to do with cost or algorithms.
10. mvdtnz ◴[] No.45020563{4}[source]
Right you are, my bad.
11. blt ◴[] No.45020570[source]
This needs to be refined: f(x) is O(g(x)) if there exists some X >= 0 such that f(x)/g(x) is bounded for all x > X.

Otherwise, we cannot say that 1 is O(x), for example.
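
Worked through (my own numbers): with f(x) = 1 and g(x) = x, the ratio 1/x is unbounded as x → 0, so no single C works over all x > 0; but taking X = 1 gives 1/x < 1 for all x > 1, so 1 is O(x) under the refined definition.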

12. umanwizard ◴[] No.45020811[source]
Why is that?
replies(1): >>45021480 #
13. jayd16 ◴[] No.45021112[source]
The Discrete Math class in the CS track is where I got the bulk of it.
14. bongodongobob ◴[] No.45021271{3}[source]
Yeah that's fair. Calc 1 should be enough to understand that.
15. LPisGood ◴[] No.45021480{3}[source]
Log(n) grows slower than n. That means there is some number N where the process terminates in fewer than N state transitions. But in a Turing machine, you can read at most one new cell on the tape with each state transition. Therefore the algorithm halts in constant time.
replies(1): >>45021946 #
16. badosu ◴[] No.45021650{3}[source]
I don't have a computer science education, but I had a math one. I found it a great explanation, but I understand why others would feel it's not.

It definitely is not an "asshole" way of explaining it though.

17. Droobfest ◴[] No.45021946{4}[source]
What is the relationship between n and N? log(n) is still unbounded so I can’t make sense of your comment.
replies(1): >>45021979 #
18. LPisGood ◴[] No.45021979{5}[source]
There is no relationship between n and N, of course. N is some fixed constant.

Log(n) is unbounded, yes, but if a Turing machine halts in sub-linear time, then there is some input length at which that TM halts before it reads every cell of the input tape. Therefore, it doesn’t matter how the size of the input grows; that TM must halt in constant time. Make sense?

replies(1): >>45022942 #
19. Droobfest ◴[] No.45022942{6}[source]
I’m sorry but no. Maybe it’s my ignorance of TMs, but O(log n) doesn’t read all input by definition. It doesn’t follow that it is therefore _independent_ of the input size.

What makes a/your TM special that this is the case?

I don’t mean to take too much of your time though, maybe I’m too dumb for this.

Edit: I can sort of see how O(log n) is impossible or at least O(n) in a TM, but to reduce it to O(1) makes no sense to me

replies(2): >>45026497 #>>45026821 #
20. xigoi ◴[] No.45023105{3}[source]
This is not only “technically” wrong, but completely wrong. O-notation has absolutely nothing to do with algorithms.
replies(1): >>45023329 #
21. writebetterc ◴[] No.45023329{4}[source]
Let's not over-correct; of course O-notation has something to do with algorithms for the working programmer.
22. Sankozi ◴[] No.45023496[source]
You are commenting under a blog post that tried to explain it and got it wrong. It is not simple.

Examples:

- an algorithm with O(2^n) complexity might be faster for your problem than an O(1) algorithm (see the sketch below)

- the same algorithm may be O(2^n) or O(n³) depending on how you define the size function

This is not straightforward.
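
A toy Python sketch of the first point (the cost numbers are invented; any O(1) algorithm with a large enough constant behaves this way):

    COST_A = 1_000_000             # O(1) algorithm with a huge fixed cost

    def cost_b(n):                 # O(2^n) algorithm with a tiny constant
        return 2 ** n

    for n in (5, 10, 19, 20, 25):
        faster = "B" if cost_b(n) < COST_A else "A"
        print(f"n={n:2d}  A={COST_A:,}  B={cost_b(n):,}  faster: {faster}")

The exponential algorithm wins for every n up to 19, since 2^19 = 524,288 is still below the constant algorithm's fixed cost.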

replies(2): >>45023802 #>>45023898 #
23. bongodongobob ◴[] No.45023802{3}[source]
It is though.
24. fxwin ◴[] No.45023898{3}[source]
That's why he said it describes how runtime behaves as input size changes, not how it behaves for specific values.
25. LPisGood ◴[] No.45026497{7}[source]
Remember that TMs read their input left to right. Therefore, if there is some fixed number N where the TM halts on all length-N inputs, then it would halt in the same number of steps if the input were length N+1, because the TM halts before it can read the additional input.
26. umanwizard ◴[] No.45026821{7}[source]
Let me try to explain the proof in more detail. Assume the TM is sublinear. Then there exists some fixed N such that for every input of size n <= N, the number of steps the TM halts in is less than N.

Now consider any input I, and let I_N be the truncation of I to N characters (i.e., I_N is the same as I except with all characters past the Nth replaced by blanks).

Then by our earlier statement defining N, the TM must halt on I_N after fewer than N steps. But the TM’s behavior on I for the first N steps must be the same as its behavior on I_N, because it can’t possibly have read any different characters (since in N steps it can only have read at most the first N characters, and those two inputs are the same on that interval). Thus the machine must halt in less than N steps on I. But I was chosen arbitrarily, so we’ve shown that the machine halts in less than N steps on any input, and recall that N was a fixed constant.
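
For anyone who wants to poke at this concretely, a minimal one-tape TM simulator in Python (my own sketch, not from the thread). The machine below scans right over 1s and halts at the first 0, so it only ever reads a fixed prefix; padding the input past that prefix never changes the step count, exactly as the argument above predicts:

    def run_tm(tape):
        # transitions: (state, symbol) -> (new state, write, head move)
        delta = {
            ("scan", "1"): ("scan", "1", +1),  # keep moving right over 1s
            ("scan", "0"): ("halt", "0", 0),   # first 0: halt
            ("scan", "_"): ("halt", "_", 0),   # blank (off the end): halt
        }
        tape, state, head, steps = list(tape), "scan", 0, 0
        while state != "halt":
            symbol = tape[head] if head < len(tape) else "_"
            state, write, move = delta[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            head += move
            steps += 1
        return steps

    for n in (4, 8, 100, 10_000):
        print(n, run_tm("1110" + "1" * (n - 4)))  # steps stay at 4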

replies(1): >>45035092 #
27. ordu ◴[] No.45027527[source]
> It always felt like it was just hand-waved as something we should already know.

I learned about Big O notation while learning calculus, along with little-o.

28. anthomtb ◴[] No.45029029[source]
Big O was introduced in my Data Structures and Algorithms (DSA) class which was nominally a second year undergrad class. In practice, most Computer Science (CS) students were far enough advanced to take DSA in their first year.

I took DSA as a third-year Electrical and Computer Engineering student. I recall maybe 15 minutes of exposition on the topic and exam questions of a near-throwaway nature (ergo: write this tree parsing algorithm, 95 points for your code's correctness, 5 points for explaining your algorithm's Big O complexity). It did seem like something that CS professors and teaching assistants considered either trivial or self-explanatory.

29. Droobfest ◴[] No.45035092{8}[source]
This looks logically sound to me, thanks!