Btw. too bad author talks about microsecond guarantees usage but does not provide a link, that would be interesting reading.
Why would there be large memory allocations because of immutable data structures? Btw, you can also use immutable data structures fairly easily in e.g. Rust, and Haskell also supports mutation and mutable data structures.
Haskell can use a lot of memory, but that has more to do with pervasive 'boxing' by default, and perhaps laziness.
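For what it's worth, a minimal sketch of mutation in Haskell, using IORef from base (Data.Vector.Mutable and the ST monad serve the same role for arrays and local state):

import Data.IORef

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)  -- a mutable cell holding an Int
  modifyIORef' ref (+ 1)      -- strict in-place update
  readIORef ref >>= print     -- prints 1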
Tries (like Scala's Vector) or trie maps (the core map types of Scala, Clojure and probably Haskell?) aren't copied on updates.
In fact, whether a data structure is an immutable or persistent data structure or merely an unmodifiable data structure (like Kotlin uses) comes down to whether most updates require a full copy or can instead share structure with the old version. In FP languages, immutable data structures aren't "specialized" at all.
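To make the sharing point concrete, a small Haskell sketch (my illustration, not taken from any of those languages' internals): prepending to an immutable list allocates one cell and shares the entire old list, and both versions remain usable afterwards.

xs, ys :: [Int]
xs = [2, 3, 4]
ys = 1 : xs  -- O(1): ys's tail *is* xs, nothing is copied

main :: IO ()
main = print (xs, ys)  -- ([2,3,4],[1,2,3,4]); the old version is still valid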
This hurt my brain. It seems that in some places (e.g. Java land) "unmodifiable" refers to something you can't modify through that reference, but which might just be a wrapper around a structure that can still be modified elsewhere. In that case they reserve "immutable" for something that cannot be modified from anywhere.
I may be misrepresenting this idea, but I think the terminology is so poor that it deserves to be misunderstood.
// Using mutability.
// `increment` is void, and makes 2 bigger for everyone.
increment(2);
// Typical Java "safety".
// It's still void, but now it throws a RuntimeException
// because the developers are saving you from making everyone's 2 bigger.
increment(2);
// Immutable
// Returns 3
increment(2);
containers and unordered-containers handle most of your needs, and they only copy their trees' spines (O(log n)) on update.
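For instance (my sketch, not the parent's code), with Data.Map from containers an insert rebuilds only the path from the root down to the affected node, and the old map goes on sharing everything else:

import qualified Data.Map.Strict as Map

main :: IO ()
main = do
  let m1 = Map.fromList [(1, "a"), (2, "b"), (3, "c")]
      m2 = Map.insert 4 "d" m1  -- O(log n): only the spine is rebuilt
  print (Map.toList m1)  -- m1 is untouched: [(1,"a"),(2,"b"),(3,"c")]
  print (Map.toList m2)  -- m2 shares most of m1's nodes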