1764 points by fatihky | 9 comments
1. sonoffett No.12701639
Quicksort is O(n^2), which is definitely not the "best big-O" for sorting.
replies(4): >>12701670 #>>12701718 #>>12701729 #>>12701867 #
2. sonoffett No.12701670
"how do you count the bits most efficiently?"

What does this even mean?

replies(2): >>12701753 #>>12701756 #
3. Jiig No.12701718
Its worst case is O(n^2), sure, so heapsort would be better if you know that your data will hit quicksort's worst case every time.
4. dekhn No.12701729
Quicksort is O(n log n) average case, and O(n^2) on already-sorted arrays.
replies(1): >>12703114 #
5. UK-AL No.12701753
What I think he means is counting the number of set (1) bits in the array.
replies(1): >>12701768 #
6. dekhn No.12701756
If you have a collection of data, and you want to know the number of 1 bits, and you want to do it with a minimum of resources... what is the process?

For example, standard bit-shifting and masking off the lowest bit to increment a counter is one way to do this. There may be other, faster ways, such as using a lookup table (a byte or more can be "counted" at a time). Of course, because so many people were doing this, Intel added a popcnt instruction, which is probably more efficient (faster) than either of the above, at the expense of more CPU real estate, heat, etc.

Turns out counting 1 bits in a dataset is a super-important problem that shows up in a lot of situations.
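
A rough sketch of those three approaches in C, for anyone who wants something concrete. The function names are mine, not from the thread, and __builtin_popcountll is a GCC/Clang builtin (with -mpopcnt it typically compiles to the popcnt instruction):

    #include <stdint.h>
    #include <stdio.h>

    /* 1) Shift-and-mask: test the lowest bit, shift right, repeat. */
    static unsigned popcount_shift(uint64_t x) {
        unsigned count = 0;
        while (x) {
            count += (unsigned)(x & 1u);
            x >>= 1;
        }
        return count;
    }

    /* 2) Lookup table: precompute the count for every byte value,
       then count the data one byte at a time. */
    static unsigned char byte_counts[256];

    static void init_byte_counts(void) {
        for (int i = 1; i < 256; i++)
            byte_counts[i] = (unsigned char)((i & 1) + byte_counts[i / 2]);
    }

    static unsigned popcount_table(uint64_t x) {
        unsigned count = 0;
        for (int i = 0; i < 8; i++) {
            count += byte_counts[x & 0xff];
            x >>= 8;
        }
        return count;
    }

    int main(void) {
        init_byte_counts();
        uint64_t v = 0xF0F0F0F0F0F0F0F0ull;  /* 32 one-bits */
        printf("%u %u %u\n",
               popcount_shift(v),
               popcount_table(v),
               /* 3) Compiler builtin (GCC/Clang); on x86 with -mpopcnt
                  this typically becomes a single popcnt instruction. */
               (unsigned)__builtin_popcountll(v));
        return 0;
    }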

7. sonoffett No.12701768
Thanks, the question makes more sense now :)
8. pkaye No.12701867
Quicksort is O(n log n) average and O(n^2) worst case. Heapsort is O(n log n) for both average and worst case.
9. bvinc No.12703114
Just a nitpick: there are versions of quicksort that perform well, O(n log n), on already-sorted data. But there is always some worst-case scenario where it can be O(n^2).
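
As a sketch of what such a version might look like: one common option (assumed here, not specified in the thread) is to randomize the pivot, so already-sorted input behaves like any other input in expectation. The Lomuto partition below is likewise just an illustrative choice:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Quicksort with a randomly chosen pivot. Randomizing the pivot gives
       expected O(n log n) on sorted input, but a fixed pivot rule always
       has some adversarial input that degrades to O(n^2). */
    static void quicksort(int *a, int lo, int hi) {
        if (lo >= hi)
            return;

        /* Swap a randomly chosen element into the pivot position. */
        int p = lo + rand() % (hi - lo + 1);
        int tmp = a[p]; a[p] = a[hi]; a[hi] = tmp;

        /* Lomuto partition around a[hi]. */
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        }
        tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;

        quicksort(a, lo, i - 1);
        quicksort(a, i + 1, hi);
    }

    int main(void) {
        srand((unsigned)time(NULL));
        int a[] = {1, 2, 3, 4, 5, 6, 7, 8};   /* already sorted */
        int n = (int)(sizeof a / sizeof a[0]);
        quicksort(a, 0, n - 1);
        for (int i = 0; i < n; i++)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }

With a fixed rule such as "always pick the last element", an already-sorted array is exactly the input that triggers the O(n^2) behavior, which is why pivot choice is the usual fix.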