It’s a brilliant illusion that works… until you hit aliasing, memory models, or concurrency, and suddenly the beautiful DAG collapses into a pile of phi nodes and load/store hell.
As some evidence for the second point: Haskell is a language that does enforce immutability, but its compiler, GHC, does not use SSA for its main IR -- it uses a "Spineless Tagless G-machine" graph representation that does, in fact, rely on that immutability. SSA only shows up later, once the program is lowered to a mutating form. If your variables aren't mutated, you don't even need to transform them into SSA!
Of course, you're welcome to try something else -- people certainly have. Take a look at how V8's move to Sea-of-Nodes has gone for them.
As a fan of functional languages, I'll point out that immutability doesn't mean state doesn't exist. You still keep state with assignment -- it's just that in SSA, every piece of state gets a new name.
If you want to keep state beyond the scope of a function, you have to return it, or pass it along to another function (and hope you have tail-call elimination). Or stash it in a mutable escape hatch.
Mutation is the result of sloppy thinking about the role of time in computation. Sloppy thinking is a hindrance to efficient and tractable code transformations.
When you "mutate" a value, you're implicitly indexing it on a time offset - the variable had one value at time t_0 and another value at time t_1. SSA simply uses naming to make this explicit. (As do CPS and ANF, which is where that "equivalence" comes from.)
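To make that concrete, here's a minimal sketch in plain C (illustrative code, not any particular compiler's IR): the mutating version hides the time index inside the single name x, while the renamed version is essentially what SSA construction produces for straight-line code.

    #include <stdio.h>

    int main(void) {
        /* Mutating version: "x" implicitly means "x at some point in time". */
        int x = 1;        /* x at t_0 */
        x = x + 2;        /* x at t_1 */
        x = x * 3;        /* x at t_2 */

        /* SSA-style renaming: each point in time gets its own name,
         * so there is no hidden time dimension left to recover. */
        int x0 = 1;
        int x1 = x0 + 2;
        int x2 = x1 * 3;

        printf("%d %d\n", x, x2);   /* both print 9 */
        return 0;
    }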
If you don't use SSA, CPS, or ANF for this purpose, you need to do something else to make the time dimension explicit, or you're going to be dealing with some very hairy problems.
"Evil" in this case is shorthand for saying that mutable variables are an unsuitable model for the purpose. That's not a subjective decision - try to achieve similar results without dealing with the time/mutation issue and you'll find out why.
Sure, you still need to keep those algorithms around to reason about memory loads and stores. But if you put effort into promoting memory operations to virtual-register operations (where you get SSA for free), you can also make the compiler faster, since you're not constantly rerunning those analyses -- they only run on demand, for the handful of passes that specifically care about eliminating or moving loads and stores.
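As a rough illustration (ordinary C standing in for IR, with the transformation sketched in comments, so treat this as a sketch rather than any specific compiler's output): a naive frontend gives the local a stack slot and emits a load and a store on every iteration; once the slot is promoted to a virtual register, the optimizer sees only SSA values and a phi, and no memory analysis is needed to reason about sum.

    /* Hypothetical example of promoting a stack slot to SSA values. */
    int sum_to(int n) {
        int sum = 0;                  /* slot version: store 0    | SSA: sum_0 = 0                 */
        for (int i = 0; i < n; i++) { /* loop header              | SSA: sum_p = phi(sum_0, sum_1) */
            sum = sum + i;            /* slot: load, add, store   | SSA: sum_1 = sum_p + i         */
        }
        return sum;                   /* slot version: final load | SSA: return sum_p              */
    }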
Of course, LuaJIT is cheating: compared to most compilers it has redefined the problem down to exactly two control-flow-graph shapes (a straight line, and a straight line followed by a loop), so most of the usual awkward parts of SSA simply do not apply. But isn’t creatively redefining the problem the software engineer’s main tool?
SSA isn't (primarily) concerned with memory; it's concerned with local variables. It completely virtualizes the storage of local variables -- in fact, of all intermediate computations. By connecting computations through dataflow edges rather than storage, it removes ordering (except that induced by dependence edges) from consideration.
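A small example of what that buys you (ordinary C again, just to show the dependence structure): a and b have no dataflow edge between them, so an SSA-based optimizer is free to schedule them in either order; only the final add is pinned after both.

    int g(int x, int y) {
        int a = x * 2;   /* depends only on x            */
        int b = y + 7;   /* depends only on y            */
        return a + b;    /* the only ordering constraint */
    }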
It is, after all, what CPUs do under the hood with register renaming! They are doing dynamic SSA, a trick they stole from us compiler people!
The functional programmers have decided mutability is evil. The imperative programmers have not.
We functional programmers do crazy stuff and pretend we're not on real hardware - a new variable instead of mutating (how wasteful!) and infinite registers (what real-world machine supports that!?).
Anyway, there's plenty of room for alternatives to SSA/CPS/ANF. It's always possible to come up with something with more mutation.
The author of the Sea-of-Nodes approach is quite critical of V8's decision, as one would expect.