I think it may be one of those things you have to see in order to understand.
As such, localized context, everywhere, is perhaps the best way to explain it from the point of view of a mutable world. At no point do you ever need to know about the state of the entire program, you just need to know the data and the function. I don't need the entire program up and running in order to test or debug this function. I just need the data that was sent in, which CANNOT be changed by any other part of the program.
With a very basic concrete example:
x = 7
x = x + 3
x = x / 2
Vs
x = 7
x1 = x + 3
x2 = x1 / 2
Reordering the first will produce no error, but you'll get the wrong result. The second will produce an error if you try to reorder the statements.
Another way to look at it is that in the first example, the 3rd calculation doesn't have "x" as a dependency but rather "x in the state where addition has already been completed" (i.e. it's 3 different x's that all share the same name). Doing single assignment is just making this explicit.
In mutating models, typically abstract (mathematical / conceptual) objects are modeled as memory locations. Which means that object identity implies pointer identity. But that's a problem when different versions of the same object need to be maintained.
It's much easier when we represent object identity by something other than pointer identity, such as (string) names or 32-bit integer keys. Such representation allows us to materialize different versions (or even the same version) of an object in multiple places at the same time. This allows us to concurrently read or write different versions of the same abstract object. It's also an enabler for serialization/deserialization: not requiring an object to be materialized in one particular place allows saving objects to disk or sending them around.
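To make that concrete, here's a minimal Python sketch (the Account type and the integer keys are made up for illustration): identity is a key, and each version is an immutable snapshot that can live anywhere.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Account:  # an immutable snapshot of an abstract object
        owner: str
        balance: int

    # (key, version) -> snapshot; identity is the key, not a pointer
    store: dict[tuple[int, int], Account] = {}

    store[(42, 0)] = Account("alice", 100)
    store[(42, 1)] = Account("alice", 150)  # newer version; old one stays intact

    # Both versions can be read concurrently, and each snapshot can be
    # serialized on its own, since nothing pins it to one memory location.
    assert store[(42, 0)].balance == 100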
As in - it's not very "constant" if you keep re-making it in your loop, right?
Whereas "immutable" throws away that extra context and means "whatever variable you have, for however long you have it, it's unchangeable."
They are very, very different semantically, because const is always local. Declaring something const has no effect on what happens with the value bound to a const variable anywhere else in the program. Whereas, immutability is a global property: An immutable array, for example, can be passed around and it will always be immutable.
JS has always had 'freeze' as a kind of runtime immutability, and tooling like TS can provide readonly types that give immutability guarantees at compile time.
I think it's simply the difference between the curious mind, who explores stuff like Clojure off the job (or is very lucky to get a Clojure job) and the 9 to 5 worker, who doesn't know any better and has never experienced writing a FP codebase.
How do you write code that actually works?
Or then would the block of floats be "immutable but not from this bit"? So the code that processes a block of samples can write to it, the code that fills the sample buffer can write to it, but nothing else should?
However, don't you still need to understand the entire program, as ultimately that's what you are trying to build?
And if the state of the entire programme doesn't change - then nothing has happened. I.e. there still has to be mutable state somewhere - so where is it moved to?
I tried to learn Haskell before, but I just got bogged down in the type system and formalization - that never sat right with me (ironically, in retrospect, monads are a trivial concept that the community obfuscated to oblivion; "yet another monad tutorial" was a meme at the time).
I used F# as well but it is too multi paradigm and pragmatic, I literally wrote C# in F# syntax when I hit a wall and I didn't learn as much about FP when I played with it.
Clojure had the lisp weirdness to get over, but its homoiconicity combined with the powerful semantics of the core data structures made it the first time the concept of working with values vs objects 'clicked' for me. I would still never use it professionally, but I would recommend it to everyone who does not have a background in FP and/or lisp experience.
Like, if you have a constraint is_even(x) that's really easy to check in your head with some informal Floyd-Hoare logic.
And it scales to extracting code into helper functions and multiple variables. If you must track which set of variables form one context x1+y1, x2+y2, etc I find it much harder to check the invariants in my head.
These 'fixed state shape' situations are where I'd grab a state monad in Haskell and start thinking top-down in terms of actions+invariants.
That’s always felt very odd to me.
For example, it's endlessly amusing to me to see all the efforts the Haskell community makes to basically reinvent mutability in a way which is somehow palatable to their type system. Sometimes they fail to even realise that that's what they are doing.
In the end, the goal is always the same: better control and guarantees about the impact of side effects with minimum fuss. Carmack's approach here is sensible. You want practices which make things easy to debug and reason about while maintaining flexibility where it makes sense, like iterative calculations.
People jump ahead to using AI to improve their reading comprehension of source code, when there are still basic practices of style, writing, and composition that for some reason are yet to become widespread throughout the industry, despite a long-standing tradition in practice and pretty firm grounding in academics.
If you want to do an operation on fooA, you don't mutate fooA. You call fooB = MyFunc(fooA) and use fooB.
The nice thing here is you can pass around pointers to fooA and never worry that anything is going to change it underneath you.
You don't need to protect private variables because your internal workings cannot be mutated. Other code can copy it but not disrupt it.
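A minimal Python sketch of that pattern (Foo and my_func are hypothetical names):

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Foo:  # hypothetical immutable record
        count: int
        label: str

    def my_func(foo: Foo) -> Foo:
        # the "operation" builds a new value; the argument is untouched
        return replace(foo, count=foo.count + 1)

    foo_a = Foo(count=1, label="demo")
    foo_b = my_func(foo_a)

    assert foo_a.count == 1  # anyone holding foo_a sees it unchanged
    assert foo_b.count == 2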
Because that’s not what they’re doing. They’re isolating state in a systemic, predictable way.
you can't change a constant though
Array<Float> append(Float value);
Array<Float> replace(int index, Float value);
The methods don't mutate the array; they return a new array with the change. The trick is: how do you make this fast without copying a whole array?
Clojure includes a variety of collection classes that "magically" make these operations fast, for a variety of data types (lists, sets, maps, queues, etc). Also on the JVM there's Vavr; if you dig around you might find equivalents for other platforms.
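In Python land, the third-party pyrsistent library does the same trick; a rough sketch, assuming it's installed:

    # pip install pyrsistent (third-party persistent collections)
    from pyrsistent import pvector

    v1 = pvector([1.0, 2.0, 3.0])
    v2 = v1.append(4.0)   # a new vector; v1 is untouched
    v3 = v2.set(0, 9.0)   # "replace" also returns a new vector

    assert list(v1) == [1.0, 2.0, 3.0]
    assert list(v3) == [9.0, 2.0, 3.0, 4.0]
    # Internally the versions share structure, so this avoids
    # copying the whole array on every change.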
No it won't be quite as fast as mutating a raw buffer, but it's usually plenty fast enough and you can always special-case performance sensitive spots.
Even if you never write a line of production Clojure, it's worth experimenting with just to get into the mindset. I don't use it, but I apply the principles I learned from Clojure in all the other languages I do use.
This is one way of thinking about it: https://news.ycombinator.com/item?id=45701901 (Simplify your code: Functional core, imperative shell)
I often hear from programmers that "oh, functional programming must be hard." It's actually the opposite. Imperative programming is hard. I choose to be a functional programmer because I am dumb, and the language gives me superpowers.
I think that Rust made this decision because the x1, x2, x3 style of code is really a pain in the ass to write.
Of course not, that's impossible. Modern programs are way too large to keep in your head and reason about.
So you need to be able to isolate certain parts of the program and just reason about those pieces while you debug or modify the code.
Once you identify the part of the program that needs to change, you don't have to worry about all the other parts of the program while you're making that change as long as you keep the contracts of all the functions in place.
For example, in Haskell, any function that can perform IO has "IO" in the return type, so the "printLine" equivalent is: "putStrLn :: String -> IO ()". (I'm simplifying a bit here). The result is that you know that a function like "getUserComments :: User -> [CommentId]" is only going to do what it says on the tin - it won't go fetch data from a database, print anything to a log, spawn new threads, etc.
It gives similar organizational/clarity benefits as something like "hexagonal architecture," or a capabilities system. By limiting the scope of what it's possible for a given unit of code to do, it's faster to understand the system and you can iterate more confidently with code you can trust.
The classic example is a list or array. You don't add a value to an existing list. You create a new list which consists of the old list plus the new value. [1]
This is a subtle but important difference. It means any part of your program with a reference to the original list will not have it change unexpectedly. This eliminates a large class of subtle bugs you no longer have to worry about.
[1] Whether the new list has completely new copy of the existing data, or references it from the old list, is an important optimization detail, but either way the guarantee is the same. It's important to get these optimizations right to make the efficiency of the language practical, but while using the data structure you don't have to worry about those details.
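In Python terms, tuples give you the same guarantee (though without the structural-sharing optimization from the footnote, so each "add" pays for a full copy):

    old = (1, 2, 3)
    new = old + (4,)  # a new tuple; nothing is appended in place

    assert old == (1, 2, 3)  # everyone holding `old` still sees the same value
    assert new == (1, 2, 3, 4)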
I think the authors are quite aware of the relationship between these techniques and mutable state! I imagine it's similar for other canonical functional programming texts.
Besides the "pure" functional languages like Haskell, there are languages that are sort of immutability-first (and support sophisticated effects libraries), or at least have good immutable collections libraries in the stdlib, but are flexible about mutation as well, so you can pick your poison: Scala, Clojure, Rust, Nim (and probably lots of others).
All of these go further and are more comfortable than just throwing `const` or `.freeze` around in languages that weren't designed with this style in mind. If you haven't tried them, you should! They're really pleasant to work with.
----
1: https://www.manning.com/books/functional-programming-in-scal...
2: https://www.manning.com/books/functional-programming-in-kotl...
For example:
(let [result {:a 1}
      result (assoc result :b 2)]
  ...)
He mentions that C and C++ allow const variables, but Clojure doesn't support that. clj-kondo has a :shadowed-var rule, but it will only find cases where you shadow a top-level var (not the case in my example).
My faith in this presumption dwindles every year. I expect AI to only exacerbate the problem.
Since we are on the topic of Carmack, "everything that is syntactically legal that the compiler will accept will eventually wind up in your codebase." [0]
I think in practice this is the ideal middle ground of convenience (putting version numbers at the end of variables being annoying), but retaining mostly sane semantics and reuse of prior intermediate results.
let x = "29"
let x = x.parse::<i32>()
let x = x.unwrap()
These all use the same name, but you still have the same explicit ordering dependency because they are typed differently. The first is a &str, the second a Result<i32, ParseIntError>, the third an i32, and any reordering of the lines would produce a compiler error. And if you add another line `let y = process(x)` you would expect it to do something similar no matter where you introduce it in these statements, provided it accepts the current type of x, because the values represent the "same" data.

Once you actually "change" the value, for example by dividing by 3, I would consider it unidiomatic to shadow under the same name. Either mark it as mutable or, preferably, make a new variable with a name that represents what the new value now expresses.
> why are you calling it mutable?
Mostly just convention. Rust has immutable by default and you have to mark variables specifically with `mut` (so `let mut var_name = 10;`). Other languages distinguish between variables and values, so var and val, or something like that. Or they might do var and const (JS does this I think) to be more distinct.
Immutability doesn’t have this connotation.
Depends on what I'm trying to do. If what I'm trying to handle is local to the code, then possibly not. If the issue is what's going into the function, or what the return value is doing, then I likely do need that wider context.
What pure-functional functions do allow is certainty that the only things that can change the behaviour of that function are the inputs to that function.
In the end, the world is stateful and even the purest abstractions have to hit the road at some point. But the authors of Haskell were fully aware of that. The monadic type system was conceived as a way to easily track side effects after all, not banish them.
1. A property known at compile time.
2. A property that can't change after being initially computed.
Many of the benefits of immutability accrue to properties whose values are only known at runtime but which are still known not to change after that point.
The `assoc` on the second binding is returning a new object; you're just shadowing the previous binding name.
This is different than mutation, because if you were to introduce an intermediate binding here, or break this into two `let`s, you could be holding references to both objects {:a 1} and {:a 1 :b 2} at any time in a consistent way - including in a future/promise dereferenced later.
You’re allowed to rebind a var defined within a loop; it doesn’t mean that you can’t hang on to the old value if you need to.
With mutability, you actively can’t hang on to the old value, it’ll change under your feet.
Maybe it makes more sense if you think about it like tail recursion: you call a function and do some calculations, and then you call the same function again, but with new args.
This is allowed, and not the same as hammering a variable in place.
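A tiny Python sketch of that view (ignoring that CPython doesn't optimize tail calls):

    def countdown(i: int) -> None:
        # each call gets a fresh binding for i; nothing is reassigned
        if i == 0:
            return
        print(i)
        countdown(i - 1)  # "rebind" by calling again with a new argument

    countdown(5)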
let x = Foo::new().stuff()?;
let x = Bar::new(x).other_stuff()?;
So with the math example and what the poster above said about type changing, most rust code I write is something like:
let x: PlainInt = PlainInt(7);
let x: AddedInt = add(x, 3);
let x: DividedInt = divide(x, 2);
where the function signatures would be fn add(x: PlainInt, n: i32) -> AddedInt and fn divide(x: AddedInt, n: i32) -> DividedInt,
and this can't be reordered without triggering a compiler error.
However, somebody needs to know how the entire program works - so my question was where does that application state live in a purely functional world of immutables?
Does it disappear into the call stack?
It’s a clear-minded and deliberate approach to reconciling principle with pragmatic utility. We can debate whether it’s the best approach, but it isn’t like… logically inconsistent, surprising, or lacking in self awareness.
And how do you do that without understanding how the program works at a high level?
I understand the value of clean interfaces and encapsulation - that's not unique to functional approaches - I'm just wondering in the world of pure immutability where the application state goes.
What happens if the change you need to make is at a level higher than a single function?
Declaring something as a constant gives you license to only need to understand it once. You don't have to trace through the rest of the code finding out new ways it was reassigned. This frees up your mind to move on to the next thing.
Is Python that different from JavaScript? Because it's easy in JavaScript. Just stop typing var and let, and start typing const. When that causes a problem, figure out how to deal with it. If all else fails: "Dear AI, how can I do this thing while continuing to use const? I can't figure it out."
This is a thoughtful response, but I can't help but chuckle at a response that starts with "just read this book!".
The beautiful thing about this is you can stop naming things generically, and can start naming them specifically what they are. Comprehension goes through the roof.
That means that 90% of the time, there's a big class of behavior I just don't need to look for when reading/debugging code. And if it's a bug related to state, I can pretty quickly zoom in on a few possible places where it might have happened.
This is the bit I don't get.
Why would I do that? I will never want a fooA and a fooB. I can't see any circumstances where having a correct fooB and an incorrect fooA kicking around would be useful.
But then I need to update a bunch of stuff to point to the new array, and I've still got the old incorrect array hanging around taking up space.
This just sounds like a great way to introduce bugs.
On a positive note I have taken those lessons from clojure (using values, just use maps, Rich’s simplicity, functional programming without excessive type system abstraction, etc) and applied them to the rest of my programming when I can and I think it makes my code much better.
I don't get why that would be useful. The old array of floats is incorrect. Nothing should be using it.
That's the bit I don't really understand. If I have a list and I do something to it that gives me another updated list, why would I ever want anything to have the old incorrect list?
Immutability gives you solid contracts. A function takes X as input and returns Y as output. This is predictable, testable, and thread safe by default.
If you have a bunch of stuff pointing at an object and all that stuff needs to change when the inner object changes, then you "raise up" the immutability to a higher level.
Universe nextStateOfTheUniverse = oldUniverse.modifyItSomehow();
If you keep going with this philosophy you end up with something roughly like "software transactional memory", where the state of the world changes at each step, and you can go back and look at old states of the world if you want. Old states don't hang around if you don't keep references to them. They get garbage collected.
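A minimal Python sketch of that "raise it up" move, with a made-up Universe type:

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Universe:  # hypothetical top-level state
        tick: int
        score: int

    def modify_it_somehow(u: Universe) -> Universe:
        return replace(u, tick=u.tick + 1, score=u.score + 10)

    old_universe = Universe(tick=0, score=0)
    next_state = modify_it_somehow(old_universe)

    # keep old states around for "time travel", or drop them
    # and let the garbage collector reclaim them
    history = [old_universe, next_state]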
That's because Haskell is predominantly a research language, originally intended for experimenting with new programming language ideas.
It should not be surprising that people use it to come up with or iterate on existing features.
But also keep in mind that correct and incorrect is not binary. You might want to pass a fooA to another class that does not want the fooB mutation.
If you just have foo, you end up with situations where a copy should have happened but didn't and then you get unwanted changes.
for (0..5) |i| {
    i = i + 1;
    std.debug.print("foo {}\n", .{i});
}
In this loop in Zig, the reassignment to i fails, because i is a constant. However, i is a new constant bound to a different value each iteration.

To potentially make it clearer that this is not mutation of a constant between iterations: technically &i could change between iterations, and the program would still be correct. This is not true with a C-style for loop using explicit mutation.
You pass in an array of 10 values.
While the function is executing, some other thread adds two more values to the array.
How many values should the result of the function call have? 10 or 12? How do you guarantee that is the case?
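With an immutable snapshot the question has a definite answer: whichever snapshot the function was handed is what it computes over. A rough Python sketch, with tuples standing in for the immutable array:

    import threading

    def total(xs: tuple) -> int:
        # xs is an immutable snapshot; its length cannot change mid-call
        return sum(xs)

    values = tuple(range(10))

    def writer():
        global values
        values = values + (10, 11)  # "adding" builds a new tuple

    t = threading.Thread(target=writer)
    t.start()
    result = total(values)  # sums either the 10- or 12-element snapshot,
                            # but never a half-updated one
    t.join()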
The point is to determine the points in your program where mutation happens, and the rest is immutable data and pure functions.
In the case of interacting services, for example, mutation should happen in some kind of persistent store like a database. Think of POST and PUT vs GET calls. Then a higher level service can orchestrate the component services.
Other times you can go a long way with piping the output of one function or process into another.
In a GUI application, the contents of text fields and other controls can go through a function and the output used to update another text field.
The point is to think carefully about where to place mutability into your architecture and not arbitrarily scatter it everywhere.
> You should strive to never reassign or update a variable outside of true iterative calculations in loops.
If you want a completely immutable setup for this, you'd likely have to use a recursive function. This pattern is well supported and optimized in immutable languages like the ML family, but is not super practical in a standard imperative language. Something like
def sum(l):
    if not l: return 0
    return l[0] + sum(l[1:])
Of course this is also mostly insensitive to ordering guarantees (the compiler would be fine with the last line being `return l[-1] + sum(l[:-1])`), but immutability can remain useful in cases like this to ensure no concurrent mutation of a given object, for instance.

For example, you can modify sum such that it doesn't depend on itself: instead it depends on a function, which it receives as an argument (and which will be itself).
Something like:
def sum_(f, l):
    if not l: return 0
    return l[0] + f(f, l[1:])

def runreq(f, *args):
    return f(f, *args)
print(runreq(sum_, [1,2,3]))

[0]: https://docs.python.org/3/whatsnew/3.14.html#a-new-type-of-i...
Updating one or more variables in a loop naturally maps to reduce, with the updated variable(s) becoming the accumulator (or, when there is more than one, fields of the accumulator object).
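For instance, a loop that updates a running total and a count maps to this Python sketch (numbers are made up):

    from functools import reduce

    # loop version: total and count are reassigned each iteration
    #   total, count = 0, 0
    #   for x in xs: total += x; count += 1

    def step(acc, x):
        total, count = acc  # unpack the accumulator's "fields"
        return (total + x, count + 1)

    total, count = reduce(step, [1.0, 2.0, 3.0], (0.0, 0))
    assert (total, count) == (6.0, 3)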
That said, utopias are not always a great idea. Making all your code functional might be philosophically satisfying, but sometimes there are good reasons to break the rules.
[0] https://en.wikipedia.org/wiki/Static_single-assignment_form
And I think it is worth noting that there is effectively no difference between “stateful” and “not stateful” in a purely functional programming environment. You are mostly talking about what a thing is and how you would like to transform it. Eg, this variable stores a set of A and I would like to compute a set of B and then C is their set difference. And so on.
Unless you have hybrid applications with mutable state (which is admittedly not uncommon, especially when using high performance libraries) you really don’t have to think about state, even at a global application level. A functional program is simply a sequence of transformations of data, often a recursive sequence of transformations. But even when working with mutable state, you can find ways to abstract away some of the mutable statefulness. Eg, a good, high performance dynamic programming solution or graph algorithm often needs to be stateful; but at some point you can “package it up” as a function and then the caller does not need to think about that part at all.
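Here's a sketch of that "packaging up" in Python: a dynamic-programming Fibonacci that is stateful inside but pure from the caller's point of view.

    def fib(n: int) -> int:
        # mutable memo table inside; referentially transparent outside
        memo = {0: 0, 1: 1}
        for i in range(2, n + 1):  # stateful bottom-up DP
            memo[i] = memo[i - 1] + memo[i - 2]
        return memo[n]

    assert fib(10) == 55  # same input, same output, no shared state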
This is not something that can happen.
What sort of thing would it be useful for?
The kind of things I do tend to have maybe several hundred thousand floating point values that exist for maybe a couple of hundred thousandths of a second, get processed, get dealt with, and then are immediately overwritten with the next batch.
I can't think of any reason why I'd ever need to know what they were a few iterations back. That's gone, maybe as much as a ten-thousandth of a second ago, which may as well be last year.
FCIS can be summed up as: R->L->W where R are all your reads, L is where all the logic happens and is done in the FP paradigm, and W are all your writes. Do all the Reads at the start, handle the Logic in the middle, Write at the end when all the results have been computed. Teasing these things apart can be a real pain to do, but the payoff can be quite significant. You can test all your logic without needing database or other services up and running. The logic in the middle becomes less brittle and allows for easier refactoring as there is a clear separation between R, L and W.
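A toy Python version of that R->L->W shape (the file names and record fields are made up for the example):

    import json

    # R: do all reads up front
    def read_orders(path: str) -> list:
        with open(path) as f:
            return json.load(f)

    # L: pure logic in the middle; testable with no services running
    def totals_by_customer(orders: list) -> dict:
        out = {}
        for o in orders:
            out[o["customer"]] = out.get(o["customer"], 0.0) + o["amount"]
        return out

    # W: do all writes at the end
    def write_report(path: str, report: dict) -> None:
        with open(path, "w") as f:
            json.dump(report, f)

    def main() -> None:
        orders = read_orders("orders.json")
        report = totals_by_customer(orders)
        write_report("report.json", report)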
For your first question: yes, and I might misunderstand the question, so give me some rope to hang myself with, will ya ;). I would argue that what you really need to care about is the data that you are working with. That's the real program. Data comes in, you do some type of transformation of that data, and you write it somewhere in order to produce an effect (the interesting part).

The part where FP becomes really powerful is when you have data that always has a certain shape, and all your functions understand and can work with the shape of that data. When that happens, the functions start to behave more like lego blocks. The data shape is the contract between the functions, and as long as they keep to that contract, you can switch out functions as needed.

And so, to answer the question: yes, you do need to understand the entire program, but only as the programmer. The function doesn't, and that's the point. When the code that resides in the function doesn't need to worry about what the state of the rest of the program is, you as the programmer can reason about the logic inside, without having to worry about some other part of the program doing something at the same time that will mess up the code inside the function.
Debugging in FP typically involves knowing the data and the function that was called. You rarely need to know the entire state of the program.
Does it make sense?
let x = some_function();
... A bunch of code
let x = some_function();
The values of x are the same. It was just an oversight on my part but wondered if I could set my linter to highlight multiple uses of the same variable name in the same function. Does anyone have any suggestions?
But of course you can learn in whatever way you like. Books are just a convenient example to point to as an indicator of how implementers, enthusiasts, and educators working with these techniques make sense of them and compare them to mutating variables. They're easy to refer to because they're notable public artifacts.
Fwiw, there's also an audiobook of the Red Book. To really follow the important parts, you'll want to be reading and writing and running code, but you can definitely get a sense of the more basic philosophical orientation just listening along while doing chores or whatever. :)
Maybe in that sense there's an "artificial" challenge involved, but it's artificial in the sense of being deliberate rather than merely arbitrary or absurd.
The conversation I'm trying to have is "stop mutating all the dynamic self-modifying code, it's jamming things up". The concept of non-mutating code, only mutating variables, strikes me as extremely OCD and overly bureaucratic. Baby steps. Eventually I'll transition from my dynamic recompilation self-modifying code to just regular code with modifying variables. Only then can we talk about higher level transcendental OOP things such as singleton factory model-view-controller-singleton-const-factories and facade messenger const variable type design patterns. Surely those people are well reasoned and not fanatics like me
fn do_demo() {
    let qr = QrCode::encode_text("foobar", Ecc::LOW);
    print_qr(qr);
    let qr = QrCode::encode_text("1234", Ecc::LOW);
    print_qr(qr);
    let qr = QrCode::encode_text("the quick brown fox", Ecc::LOW);
    print_qr(qr);
}
In other languages that don't allow shadowing (e.g. C, Java), the first example would declare the variable and be syntactically correct to copy out, but the subsequent examples would cause a syntax error when copied out.

You're using recursion. `runreq()` calls `sum_()`, which calls itself in `return l[0] + f(f, l[1:])`, where `f` is `sum_`.
The vagaries don't end there. NodeJS' `assert` namespace has methods like `equal()`, `strictEqual()`, `deepEqual()`, `deepStrictEqual()`, and `partialDeepStrictEqual()`, which is both excessive and badly named (although there's good justification for what `partialDeepStrictEqual()` does); ideally, `equal()` should be both `strict` and `deep`. That this is also a terminology problem is borne out by explanations that oftentimes do not clearly differentiate between object value and object identity.
In a language with inherent immutability, object value and object identity may (conceptually at least) be conflated, like they are for JavaScript's primitive values. You can always assume that an `'abc'` over here has the same object identity (memory location) as that `'abc'` over there, because it couldn't possibly make a difference were it not the case. The same should be true of an immutable list: for all we know, and all we have to know, two immutable lists could be stored in the same memory when they share the same elements in the same order.
If you have something so fundamentally broken as to attempt that, you'd probably want to look at mutexes.
Why on earth would you have something attempt to expand a fixed-size buffer while something else is working on it?
What about something like `gamma`? Lorentz factor? Luminance multiplier? Factorial generalization?
Why not just use the full sentence rather than assign it to an arbitrary name/symbol `gamma` and leave it dependent on the context?
And it's not that hard to add an inline comment to dispel the confusion
const tau = 2*pi; // Alternate name for 2pi is "tau"

Anyway, I have great hopes for effect systems as a way to approach this in a principled way. I really like what OCaml is currently doing with concurrency. It's clear to me that there is great value to unlock here.
Because any piece of code that holds a reference to a mutable variable is able to, at a distance, modify the behavior of a piece of code that uses this mutable variable.
Conversely, a piece of code that only uses immutable variables, and takes as argument the values that may need to vary between executions, is isolated against having its behavior changed at a distance at any time.
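A small Python illustration of the difference (the config dict and scale functions are invented for the example):

    # action at a distance: anyone holding `config` can change
    # this function's behavior between calls
    config = {"factor": 2}

    def scale_shared(x):
        return x * config["factor"]

    # isolated: behavior depends only on the arguments
    def scale(x, factor):
        return x * factor

    config["factor"] = 100  # silently changes scale_shared everywhere
    assert scale(3.0, 2.0) == 6.0  # unaffected, by construction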
FWIW I believe that JS for one would greatly benefit from much better support for immutable data, including time- and space-efficient ways to produce modified copies of structured data (like you don't think twice when you do `string.replace(...)`, which does in fact produce a copy; `list.push(...)` could conceivably operate similarly).
Or the person doesn't understand, then declares the language to be too difficult to use. This probably happens more than the former, sadly.
ex. I've heard people argue for rewriting perfectly working Erlang services in C++ or Java, because they find Erlang "too difficult". Despite it being a simpler language than either of those.
A const variable that refers to an array is a const variable. The array is still mutable. That's not an exception, it's also how a plain-old JavaScript object works: you can add and remove properties at will. You can change its prototype to point to something else and completely change its inheritance chain. And it could be a const variable to an unfrozen POJO all along.

That is not an exception to how things work, it's how every reference works.
In Common Lisp you have the loop macro (or better: iterate), in Racket you have the for loops. I wrote a thing for Guile Scheme [0]. Other than that I don't know of many nice looping facilities. In many languages you can achieve all that with combinators and whatnot, but always at the cost of performance.
I think this is an opportunity for languages to become safer and easier to use without changing performance.
Is there a name that refers to the broader group that includes both constants and variables? In practice, and in e.g. C++, "variable" is used to refer to both constants and actual variables, due to there not being a different common name that can be used to refer to both.
Caches aren't quite as mix-and-match, but they can still internally manage different temporal versions of a cache line, as well as (hopefully) mask the fact that a write to DRAM from one core isn't an atomic operation instantly visible to all other cores.
Practice is always more complicated than theory.
const std::vector<int>& foo = bar.GetVector();

foo is a constant object reference that cannot have its properties changed (and also cannot be changed to refer to a new object).

std::vector<int>& foo = bar.GetVector();

is an object reference that can have its properties changed (but cannot be changed to refer to a new object). Meanwhile,

const arr = []
arr.push("grape nuts")

is just peachy in JS and requires the programmer to avoid using it.

More importantly, because working immutably in JS is not enforced, trying to use it consistently either limits which libraries you can use and/or requires you to wrap them to isolate their side effects. ImmerJS can help a lot here, since immutability is its whole jam. I’d rather work in a language where I get these basic benefits by default, though.
Rich Hickey asked once in a talk, “who here misses working with mutable strings?” If you would answer “I do,” or if you haven’t worked much in languages where strings are always immutable and treated as values, it makes describing the benefits of immutability more challenging.
Von Neumann famously thought Assembly and higher-level language compilers were a waste of time. How much that opinion was based on his facility with machine code I don’t know, but compilers certainly helped other programmers to write more closely to the problem they want to solve instead of tracking registers in their heads. Immutable state is a similar offloading-of-incidental-complexity to the machine.
You say that value2 is correct. It logically follows that value1 was incorrect. Why did you assign it then?
The names are free, you can just use a correct name every single time.
def sum_(f, l):
    if not l: return 0
    return l[0] + f(f, l[1:])

def runreq(f, *args):
    return f(f, *args)
print(995,runreq(sum_, range(1,995)))
print(1000,runreq(sum_, range(1,1000)))
when run with python3.11 gives me this output:

995 494515
Traceback (most recent call last):
File "/tmp/sum.py", line 9, in <module>
print(1000,runreq(sum_, range(1,1000)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/sum.py", line 6, in runreq
return f(f, *args)
^^^^^^^^^^^
File "/tmp/sum.py", line 3, in sum_
return l[0] + f(f, l[1:])
^^^^^^^^^^^
File "/tmp/sum.py", line 3, in sum_
return l[0] + f(f, l[1:])
^^^^^^^^^^^
File "/tmp/sum.py", line 3, in sum_
return l[0] + f(f, l[1:])
^^^^^^^^^^^
[Previous line repeated 995 more times]
RecursionError: maximum recursion depth exceeded in comparison
A RecursionError seems to indicate there must have been recursion, no?

You can't mutate the reference, but you _can_ copy the values from one array into the data under an immutable reference, so const doesn't prevent basically any of the things you'd want to prevent.
The distinction makes way more sense to me in languages that let you pass by value. Passing a const array says don't change the data, passing a const reference says change the data but keep the reference the same.
Carmack's post explains it - if you make a series of immutable "variables" instead of reassigning one, it is much easier to debug. This is a microcosm of time travel debugging; it lets you look at the state of those variables several steps back.
I don't know anything about your specific field, but I am confident that getting to the point where you deeply understand this perspective will improve your programming, even if you don't always use it.