
122 points | phsilva | 4 comments
Context:
thunkingdeep No.43110710
This does NOT mean Python will get Tail Call Optimization; Guido cannot be shown The Light, and has already made up his mind.
replies(4): >>43110815, >>43110832, >>43111490, >>43112657
beagle3 No.43111490
It is not an optimization; it changes program semantics. It converts a program that will eventually run out of stack regardless of the amount of available memory (and raise an exception in the process, which the program might rely on) into one that never does. Either way, the semantics are changed.

It should only be called Tail Call Elimination.
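A minimal sketch of the kind of program being described, one that relies on the RecursionError CPython raises today (the names and fallback below are illustrative, not from the thread):

    def probe(depth=0):
        # Tail call: CPython pushes a frame per call, so this eventually
        # raises RecursionError. With the tail call eliminated it would
        # spin forever and the except branch below would be unreachable.
        return probe(depth + 1)

    try:
        probe()
    except RecursionError:
        print("recursion limit hit; falling back to an iterative code path")

Under current CPython this prints the fallback message and the program halts; with tail-call elimination, probe() would never return.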

replies(2): >>43111684, >>43111776
dragonwriter No.43111776
By that standard, any optimization that changes scaling in any dimension changes semantics, which, well, I’m not saying it’s wrong, but I would say it is exactly what people looking for optimization want.
replies(3): >>43111911, >>43112365, >>43119601
beagle3 No.43112365
I disagree.

An optimization that speeds up a program by 2x has the same effect as running it on a faster CPU. An optimization that packs things tighter into memory has the same effect as having more memory.

Program semantics is usually taken to mean “all outputs given all inputs, for any input configuration”, ignoring memory use and CPU time, provided both are finite (but not bounded in advance).

TCE easily converts a program that will halt, regardless of available memory, to one that will never halt, regardless of available memory. That’s a big change in both theoretical and practical semantics.

I probably won’t argue that a change that reduces an O(n^5) space/time requirement to an O(1) requirement is a change in semantics, even though it practically is a huge change. But TCE changes a most basic property of a finite memory Turing machine (halts or not).

We don’t have infinite memory Turing machines.

edited: Turing machine -> finite memory Turing machine.
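To make the halting point concrete (illustrative code, not from the thread): the recursive version below always halts on a machine with a finite stack, either by returning or by raising RecursionError, while the loop it turns into after tail-call elimination does not.

    def countdown(n):
        if n == 0:
            return "done"
        return countdown(n - 1)   # call in tail position

    # Eliminating that tail call amounts to rewriting it as a loop:
    def countdown_tce(n):
        while True:
            if n == 0:
                return "done"
            n -= 1

    # countdown(-1) halts today: it hits the recursion limit and raises
    # RecursionError.
    # countdown_tce(-1) never halts, no matter how much memory is available.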

replies(1): >>43112674
1. coldtea No.43112674
>I probably won’t argue that a change that reduces an O(n^5) space/time requirement to an O(1) requirement is a change in semantics, even though it practically is a huge change

Space/time requirements aren't language semantics though, are they?

replies(1): >>43113125
2. tsegratis No.43113125
It changes debugging semantics.

This is the reason Guido avoids it: programs will still fail, except now without a stack trace.
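For illustration (not from the thread), a failure at the end of a chain of tail calls, and what the frames buy you today:

    def step_a(x):
        return step_b(x - 1)   # tail call

    def step_b(x):
        return 1 // x          # blows up when x reaches 0

    step_a(1)
    # CPython's traceback shows <module> -> step_a -> step_b, so you can see
    # how x became 0. If step_a's tail call were eliminated, its frame would
    # already be gone and the report would start at step_b.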

replies(1): >>43113756
3. rollcat No.43113756
GvR always prioritised ease of debugging over performance, and honestly I'm in the same camp. What good does a fast program do if it's incorrect?

But I think you can strike a fine balance by keeping a recent call trace (in a ring buffer?). Lua does this and honestly it's OK, once you get used to the idea that you're not looking at stack frames but at execution history.

IMHO Python should add that, and it should clearly distinguish which part of a crash log is a stack trace and which part is a trace of tail calls.

Either way this is going to be quite a drastic change.
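A toy sketch of that ring-buffer idea at the Python level; the trampoline, the names, and the buffer size are all invented for illustration, not an actual CPython proposal:

    from collections import deque

    HISTORY = deque(maxlen=16)   # ring buffer: keeps only the most recent tail calls

    def run(thunk):
        """Tiny trampoline: a function signals a tail call by returning a
        zero-argument callable, and each hop is logged to the ring buffer."""
        while callable(thunk):
            HISTORY.append(getattr(thunk, "__qualname__", repr(thunk)))
            thunk = thunk()
        return thunk

    def countdown(n):
        if n < 0:
            # The recent execution history survives even though no stack
            # frames were kept for the eliminated calls.
            raise ValueError(f"went negative; recent tail calls: {list(HISTORY)}")
        if n == 0:
            return "done"
        return lambda: countdown(n - 1)   # "tail call" via the trampoline

    print(run(lambda: countdown(3)))      # prints: done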

replies(1): >>43123195
4. tsegratis No.43123195
that's a nice solution!