Why SSA Compilers?

(mcyoung.xyz)
160 points by transpute
zachixer (No.45675783):
Every time I see a clean SSA explainer like this, I’m reminded that the “simplicity” of SSA only exists because we’ve decided mutation is evil. It’s not that SSA is simpler — it’s that we’ve engineered our entire optimization pipeline around pretending state doesn’t exist.

It’s a brilliant illusion that works… until you hit aliasing, memory models, or concurrency, and suddenly the beautiful DAG collapses into a pile of phi nodes and load/store hell.
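To make the "pile of phi nodes" concrete, here is a minimal sketch (plain Python standing in for an IR; the names and the phi comment are illustrative, not any particular compiler's syntax). A single mutated variable becomes several versions, and the control-flow join needs a phi to pick between them:

```python
# Mutable original:
#     x = 0
#     if cond:
#         x = x + 1
#     use(x)
#
# SSA pretends the mutation never happened: every assignment gets a
# fresh name, and the join point needs a phi to select a version.
def ssa_diamond(cond):
    x0 = 0                  # entry:  x0 = 0
    if cond:
        x1 = x0 + 1         # then:   x1 = x0 + 1
        x2 = x1             # join:   x2 = phi(then: x1, entry: x0)
    else:
        x2 = x0             # join:   (the phi's entry-edge value)
    return x2

print(ssa_diamond(True), ssa_diamond(False))  # 1 0
```

One mutated variable costs one phi per join; the complaint above is that real code with memory, aliasing, and loops multiplies this quickly.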

jcranmer (No.45676288), replying:
SSA isn't about saying mutation is evil. It's about trivializing chasing down def-use chains. In the Dragon Book, essentially the first two dataflow analyses introduced are "reaching definitions" and "live variables"; in an SSA-based IR, those algorithms are basically "traverse a few pointers". There are also some ancillary benefits: SSA makes a flow-insensitive algorithm partially flow-sensitive simply by renaming one variable into several.
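A sketch of why those analyses collapse to pointer chasing (the `Value` class and `define` helper here are invented for illustration, but LLVM-style IRs keep essentially this structure): each SSA value has exactly one definition and an explicit list of uses.

```python
from dataclasses import dataclass, field

@dataclass
class Value:
    """One SSA value: exactly one definition, plus a list of its uses."""
    name: str
    op: str
    operands: list["Value"] = field(default_factory=list)
    uses: list["Value"] = field(default_factory=list)

def define(op: str, name: str, *operands: "Value") -> "Value":
    v = Value(name, op, list(operands))
    for o in operands:
        o.uses.append(v)     # maintain use lists at construction time
    return v

# x0 = const; x1 = add x0, x0; x2 = mul x1, x0
x0 = define("const", "x0")
x1 = define("add", "x1", x0, x0)
x2 = define("mul", "x2", x1, x0)

# "Which definition reaches this operand?" is a pointer dereference:
assert x2.operands[0] is x1
# "Where is this value used / still live?" is the use list:
assert [u.name for u in x0.uses] == ["x1", "x1", "x2"]
```

No fixed-point iteration over the CFG, which is what the classic dataflow formulations of reaching definitions and liveness require for mutable variables.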

Sure, you still need those algorithms to reason about memory loads and stores. But if you put effort into kicking memory operations into virtual-register operations (where you get SSA for free), you can also make the compiler faster: instead of constantly rerunning these analyses, you run them on demand, only for the handful of passes that specifically care about eliminating or moving loads and stores.
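A naive straight-line sketch of that promotion step, assuming a single stack slot that provably doesn't alias anything (real promotion, e.g. LLVM's mem2reg pass, also handles branches via dominance information; the tuple IR here is invented for illustration):

```python
def promote(instrs):
    """Forward-propagate stores to loads of one non-aliased slot,
    turning memory traffic into direct value flow."""
    current = None           # value most recently stored to the slot
    out = []
    for op, *args in instrs:
        if op == "store":    # ("store", value): write value to the slot
            current = args[0]
        elif op == "load":   # ("load", dest): read the slot into dest
            # the load becomes a plain copy of the last stored value
            out.append(("copy", args[0], current))
        else:
            out.append((op, *args))
    return out

prog = [("store", "c1"), ("load", "t0"), ("add", "t1", "t0", "t0"),
        ("store", "t1"), ("load", "t2")]
print(promote(prog))
# [('copy', 't0', 'c1'), ('add', 't1', 't0', 't0'), ('copy', 't2', 't1')]
```

After promotion the loads and stores are gone, the copies can be cleaned up by trivial SSA rewriting, and the expensive memory analyses only need to run for whatever loads and stores survive.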