Btw, too bad the author talks about microsecond-guarantee usage but does not provide a link -- that would be interesting reading.
In practice, it is not. The canonical Haskell compiler, GHC, is excellent at transforming the operations on immutable data that Haskell programs are written in terms of into efficient mutation at the runtime level. Also, since web development is quite popular in the Haskell community, lots of people have spent many hours optimizing this precise use case.
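A minimal sketch of what I mean (my own toy example, not from the author): the sum below is written as a pure fold over an immutable list, but with optimizations on, GHC's strictness analysis and unboxing typically compile it down to a tight loop that mutates a machine register rather than allocating a chain of thunks.

```haskell
import Data.List (foldl')

-- A pure, strict left fold over an immutable list. No mutation appears
-- in the source, yet GHC usually turns the accumulator into an unboxed
-- Int# threaded through a loop.
sumTo :: Int -> Int
sumTo n = foldl' (+) 0 [1 .. n]

main :: IO ()
main = print (sumTo 1000000)  -- prints 500000500000
```

Compiling with `-O2` and inspecting the Core (`-ddump-simpl`) is the easiest way to check that the intermediate list and thunks are gone.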
In my experience, the real downside is that compilation times are a bit long -- the compiler is doing a LOT of work after all.
Yes, at the level of native machine code and memory cells, there's not that much of a difference between immutability plus garbage collection and higher-level source code that mutates. Thanks to the GC, you are going to overwrite the same memory locations over and over again, too.
Of course, even a moving GC has limits; it won't turn a hash table into something with local accesses.