It should only be called Tail Call Elimination.
That doesn't follow. This isn't like going from driving a car to flying an airplane. It's like going from driving a car to just teleporting instantly. (Except it's about space rather than time.)
It's a difference in degree (optimization), yes, but by a factor of infinity (O(n) overhead to 0 overhead). At that point it's not unreasonable to consider it a difference in kind (semantics).
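Concretely, TCE rewrites a self tail call into a jump. A sketch in C++ (the names are illustrative, and neither the C nor the C++ standard guarantees the transformation, though gcc and clang typically do it at -O2):

    // Tail-recursive form: without TCE, every call pushes a new stack
    // frame, so computing sum(n, 0) needs O(n) stack space.
    long sum(long n, long acc) {
        if (n == 0) return acc;
        return sum(n - 1, acc + n);  // the call is in tail position
    }

    // What TCE effectively produces: the parameters are reassigned and
    // control jumps back to the top, so the stack usage is O(1).
    long sum_loop(long n, long acc) {
        for (;;) {
            if (n == 0) return acc;
            acc += n;
            n -= 1;
        }
    }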
Consider a compiler transforming:

    for (int i = 0; i < n; i++) a += i;
To:
    a += n * (n - 1) / 2;
Is this an optimisation or a change in program semantics? I've never heard anyone call it anything other than an optimisation.
> Is this an optimisation or a change in program semantics?
Note that I specifically said something can be both an optimization and a change in semantics. It's not either-or.
However, it all depends on how the program semantics are defined, and they are defined by the language specification. That means your example is by definition not a semantic change, because it occurs under the as-if rule, which says that optimizations are allowed as long as they don't affect program semantics. In fact, I'm not sure it's even possible to write a program that is guaranteed to distinguish the two versions based purely on the language standard. Whereas with tail recursion it's trivial to write a program that will crash without tail recursion but run arbitrarily long with it.
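For instance (a sketch; the stack limit and the optimization level at which the tail call is actually eliminated vary by compiler and platform):

    #include <cstdio>

    long count(long n, long acc) {
        if (n == 0) return acc;
        return count(n - 1, acc + 1);  // tail call
    }

    int main() {
        // Without TCE this needs about a billion stack frames and dies
        // with a stack overflow; with TCE (e.g. gcc or clang at -O2)
        // it runs to completion in constant stack space.
        std::printf("%ld\n", count(1000000000L, 0));
    }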
We do have at least one optimization that is permitted despite being prohibited by the as-if rule: return-value optimization (RVO). People certainly consider that a change in semantics, as well as an optimization.
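A sketch of why RVO is observable (in C++17 the elision below is guaranteed for this prvalue case; before that it was optional but near-universal):

    #include <cstdio>

    struct Noisy {
        Noisy() { std::puts("construct"); }
        Noisy(const Noisy&) { std::puts("copy"); }  // observable side effect
    };

    Noisy make() {
        return Noisy{};  // the result is constructed directly in the caller
    }

    int main() {
        Noisy n = make();  // prints only "construct": the copy, and the
                           // output it would produce, are elided even
                           // though the as-if rule would forbid that
        (void)n;
    }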
An optimization that speeds a program up by 2x has the same effect as running it on a twice-as-fast CPU. An optimization that packs things more tightly into memory has the same effect as having more memory.
Program semantics is usually taken to mean “all outputs given all inputs, for any input configuration”, ignoring memory use and CPU time, provided both are finite (but unbounded).
TCE easily converts a program that will halt, regardless of available memory, to one that will never halt, regardless of available memory. That’s a big change in both theoretical and practical semantics.
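For example (a sketch; "halts" on a real machine means dying with a stack overflow):

    #include <cstdio>

    // No base case: spin calls itself forever in tail position.
    void spin(unsigned long n) {
        if (n % 100000000 == 0) std::printf("still running: %lu\n", n);
        spin(n + 1);  // tail call
    }

    int main() {
        // Without TCE the stack grows until it is exhausted, so on any
        // finite-memory machine this program halts (crashes). With TCE
        // it loops forever, no matter how much memory is available.
        spin(1);
    }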
I probably won’t argue that a change that reduces an O(n^5) space/time requirement to an O(1) requirement is a change in semantics, even though in practice it is a huge change. But TCE changes the most basic property of a finite-memory Turing machine: whether or not it halts.
We don’t have infinite-memory Turing machines.
edited: Turing machine -> finite memory Turing machine.
Space/time requirements aren't language semantics though, are they?
With this kind of "benign" change, all programs that worked before still work, and some that didn't work before now work. I would argue this is a good thing.
But I think you can get a fine balance by keeping a recent call trace (in a ring buffer?). Lua does this and honestly it's OK, once you get used to the idea that you're not looking at stack frames, but execution history.
IMHO Python should add that, and it should clearly distinguish which part of a crash log is a stack trace and which is a trace of tail calls.
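A minimal sketch of the ring-buffer idea (CallTrace is a hypothetical name, not Lua's actual mechanism): keep only the last N calls, so a crash log can show recent execution history in constant space even when tail calls never push real stack frames.

    #include <array>
    #include <cstdio>
    #include <string>

    template <std::size_t N>
    class CallTrace {
        std::array<std::string, N> ring_;
        std::size_t next_ = 0, count_ = 0;
    public:
        void record(std::string name) {
            ring_[next_] = std::move(name);
            next_ = (next_ + 1) % N;
            if (count_ < N) ++count_;
        }
        void dump() const {
            // Print oldest first; anything older has been overwritten.
            std::size_t start = (count_ == N) ? next_ : 0;
            for (std::size_t i = 0; i < count_; ++i)
                std::printf("  %s\n", ring_[(start + i) % N].c_str());
        }
    };

    int main() {
        CallTrace<3> trace;
        for (int i = 1; i <= 5; ++i)
            trace.record("call #" + std::to_string(i));
        trace.dump();  // prints calls #3, #4, #5: the last three only
    }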
Either way this is going to be quite a drastic change.
Python dicts were in insertion order as of 3.6, but this only became a guarantee, as opposed to an implementation detail that could change at any time, with Python 3.7.