The shocking truth is that SSA is functional! That's right, the compiler for your favourite imperative language actually optimizes functional programs. See, for example,
https://www.jantar.org/papers/chakravarty03perspective.pdf. In fact, SSA, continuation-passing style, and ANF are basically the same thing.
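To make the correspondence concrete, here's a sketch (in Python, with made-up names) of how a loop in SSA form reads as a functional program: each basic block becomes a function, each phi node becomes a parameter, and the back edge becomes a tail call.

```python
# Imperative original:
#   i = 0; s = 0
#   while i < n:
#       s = s + i
#       i = i + 1
#   return s
#
# In SSA, the loop header gets phi nodes: i1 = phi(0, i2), s1 = phi(0, s2).
# Read functionally, those phis are just the parameters of the block:

def loop_header(i, s, n):                    # phi nodes i, s -> parameters
    if i < n:
        return loop_header(i + 1, s + i, n)  # back edge -> tail call
    return s                                 # exit block

print(loop_header(0, 0, 10))                 # sum of 0..9 -> 45
```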
My experience with SSA is extremely limited, so this might be a stupid question. But does that remain true once memory enters the picture?
The LLVM tutorials I played with (admittedly a long time ago) made it seem like "just stack-allocate everything and trust mem2reg" abstracts SSA away pretty completely from a user's point of view.
If you're hell-bent on functional style, you can represent memory writes as ((Address, Value, X) -> X), where X is a compile-time-only linear type representing the current memory state, which can be manipulated like any other SSA variable. It makes some things more elegant: two reads of the same address are naturally the same value (as long as they read from the same memory state). It doesn't help at all with aliasing analysis or write reordering, though, so I don't think any serious compilers do this.
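A toy sketch of that threading idea (helper names are mine, and a plain dict stands in for memory; the point of the real scheme is that X is compile-time-only, not a runtime value like here):

```python
def store(mem, addr, val):
    """(Address, Value, X) -> X : a write produces a *new* memory state."""
    new_mem = dict(mem)   # the old state m is untouched, like any SSA value
    new_mem[addr] = val
    return new_mem

def load(mem, addr):
    """Reads take a state but don't produce one."""
    return mem[addr]

m0 = {}
m1 = store(m0, 0x10, 7)
m2 = store(m1, 0x20, 9)

# Two loads of the same address from the same state are trivially the
# same value -- no alias analysis needed to CSE them.
assert load(m2, 0x10) == load(m2, 0x10) == 7
```

The linearity requirement (each state used exactly once by a write) is what this runtime sketch can't enforce; that part lives in the type system.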