
108 points by BerislavLopac | 8 comments
graycat No.43714234
Well, for a computer that is a finite state machine there are only finitely many states. So, in finite time the machine will either (a) halt or (b) return to an earlier state and, thus, be in an infinite loop. So, in this case we can tell whether the "program will stop" and, thus, solve "the halting problem".

Uh, we also assume that before we start we can look at the design of "the machine" and know how many states there can be and, from the speed of the machine, how many new states are visited each second. So we will know how long we have to wait before we see either (a) or (b).
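
A minimal sketch of that argument in Python (the transition function and halting test are hypothetical stand-ins for whatever machine is being analysed): run the machine, remember every state seen, and stop as soon as it halts or revisits a state. For a finite state space one of the two must happen.

    # Sketch: deciding halting for a finite-state, deterministic machine
    # by watching for a repeated state. `step` and `is_halted` are
    # hypothetical stand-ins for the machine under discussion.
    def halts(initial_state, step, is_halted):
        """Return True if the machine halts, False if it loops forever."""
        seen = set()
        state = initial_state
        while not is_halted(state):
            if state in seen:       # revisited a state: deterministic, so it loops forever
                return False
            seen.add(state)
            state = step(state)     # deterministic transition to the next state
        return True

    # Example: a 3-bit counter that halts when it wraps to 0, vs. one stuck cycling in {1, 2, 3}.
    print(halts(1, lambda s: (s + 1) % 8, lambda s: s == 0))   # True
    print(halts(1, lambda s: 1 + (s % 3), lambda s: s == 0))   # False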

Sankozi No.43714285
Yes, lots of these problems assume a fantasy infinite world.

Big O notation also suffers from this - it's almost useless for real-world problems.

MattPalmer1086 No.43714328
You find Big O notation useless for real world problems?

I find it useful to characterise algorithmic performance, e.g. O(1), O(n), O(n^2), etc.

What do you find useless about it - and do you know of something that works better?

1. MrBuddyCasino No.43714361
Simpler algorithms are usually faster for small N, and N is usually small. Big O assumes a fantasy world where N is always large and where constant factors that slow down an algorithm can be ignored.
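
A rough illustration of that crossover (a sketch, not a careful benchmark; the exact break-even n depends on hardware and implementation): time a simple O(n^2) insertion sort against an O(n log n) merge sort at a small and a larger n.

    # Sketch: constant factors vs. asymptotics. Numbers are illustrative only.
    import random, time

    def insertion_sort(a):
        a = list(a)
        for i in range(1, len(a)):
            x, j = a[i], i - 1
            while j >= 0 and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x
        return a

    def merge_sort(a):
        if len(a) <= 1:
            return list(a)
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    def average_time(sort, n, reps):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        for _ in range(reps):
            sort(data)
        return (time.perf_counter() - t0) / reps

    for n, reps in [(16, 2000), (2000, 3)]:
        print(n, average_time(insertion_sort, n, reps), average_time(merge_sort, n, reps))
    # Typically insertion sort wins at n = 16 and loses clearly at n = 2000.
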
2. MattPalmer1086 No.43714369
Yeah, constant factors are not represented and often make a big real world difference. Good point.
3. coldtea No.43714445
>Simpler algorithms are usually faster for small N, and N is usually small.

This mentality is how we ended up with cpu and memory hogging Electron apps...

4. JohnKemeny No.43714489
That's not an accurate description.

For example, it is common in sorting to use an n² algorithm like bubble sort when the list is small (e.g. < 50), and an n log n algorithm like merge sort when it's larger.

The issue is that merge sort does recursion, which comes with some extra cost, so that an n² algorithm beats an n log n algorithm, provided n is small.

It has nothing to do with your criticism.
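
A sketch of that cutoff pattern (the threshold value is arbitrary here, and the small-case sort is insertion sort, the usual choice): recurse with merge sort, but fall back to a simple quadratic sort once a subarray is small enough that the recursion overhead isn't worth it.

    # Sketch: merge sort with a small-array cutoff. CUTOFF is illustrative;
    # real libraries tune it empirically.
    CUTOFF = 32

    def insertion_sort_inplace(a, lo, hi):
        for i in range(lo + 1, hi):
            x, j = a[i], i - 1
            while j >= lo and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x

    def hybrid_merge_sort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a)
        if hi - lo <= CUTOFF:              # small subarray: skip the recursion entirely
            insertion_sort_inplace(a, lo, hi)
            return
        mid = (lo + hi) // 2
        hybrid_merge_sort(a, lo, mid)
        hybrid_merge_sort(a, mid, hi)
        merged, i, j = [], lo, mid         # merge the two sorted halves
        while i < mid and j < hi:
            if a[i] <= a[j]:
                merged.append(a[i]); i += 1
            else:
                merged.append(a[j]); j += 1
        merged += a[i:mid] + a[j:hi]
        a[lo:hi] = merged

    data = [3, 1, 4, 1, 5, 9, 2, 6]
    hybrid_merge_sort(data)
    print(data)  # [1, 1, 2, 3, 4, 5, 6, 9]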

5. nottorp No.43714572
https://news.ycombinator.com/item?id=26296339

Small N right?

6. MrBuddyCasino No.43714738
Electron

a) isn’t really slow for what it does

b) introduces many layers of abstraction, leading to larger memory consumption compared to „native“ apps

c) is certainly not algorithmically unoptimized, it runs on a software stack that has been tuned like few others using billions of dollars

You just loosely associated words and concepts that occupy a similar emotional niche.

7. Al-Khwarizmi No.43717119{3}
You can (and probably should) add a threshold to recursive algorithms like mergesort so that they don't end up doing recursive calls on very small arrays.

For arrays smaller than the threshold, insertion sort is faster than bubble sort. And if you really don't want recursion for large arrays to begin with, you have heapsort or Shell sort.

I don't think there is any practical case where using bubble sort makes sense, except "efficiency doesn't matter for my use case and this is the code I already have".

8. graycat No.43719390{4}
As in one of the volumes of Knuth's The Art of Computer Programming, heapsort is in-place and achieves the Gleason bound, so it can be regarded as not beaten by quicksort.
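
For context, the bound in question is the information-theoretic lower limit for comparison sorting, about lg(n!) ≈ n lg n comparisons in the worst case, which heapsort matches up to a constant factor while using O(1) extra space. A minimal textbook heapsort sketch:

    # Sketch: textbook in-place heapsort, O(n log n) comparisons in the worst case.
    def heapsort(a):
        def sift_down(root, end):
            while 2 * root + 1 < end:
                child = 2 * root + 1
                if child + 1 < end and a[child] < a[child + 1]:
                    child += 1                       # pick the larger child
                if a[root] < a[child]:
                    a[root], a[child] = a[child], a[root]
                    root = child
                else:
                    return

        n = len(a)
        for start in range(n // 2 - 1, -1, -1):      # build a max-heap, bottom up
            sift_down(start, n)
        for end in range(n - 1, 0, -1):              # move the max to the end, shrink the heap
            a[0], a[end] = a[end], a[0]
            sift_down(0, end)

    data = [5, 1, 4, 2, 3]
    heapsort(data)
    print(data)  # [1, 2, 3, 4, 5]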