498 points azhenley | 16 comments
EastLondonCoder ◴[] No.45770007[source]
After a 2-year Clojure stint I find it very hard to explain the clarity that comes with immutability to programmers who are used to triggering effects with a mutation.

I think it may be one of those things you have to see in order to understand.

replies(17): >>45770035 #>>45770426 #>>45770485 #>>45770884 #>>45770924 #>>45771438 #>>45771558 #>>45771722 #>>45772048 #>>45772446 #>>45773479 #>>45775905 #>>45777189 #>>45779458 #>>45780612 #>>45780778 #>>45781186 #
1. emil0r ◴[] No.45770884[source]
The way I like to think about it is that with immutable data as the default and pure functions, you get to treat the pure functions as black boxes. You don't need to know what's going on inside, and the function doesn't need to know what's going on in the outside world. The data shape becomes the contract.

As such, localized context, everywhere, is perhaps the best way to explain it from the point of view of a mutable world. At no point do you ever need to know the state of the entire program; you just need to know the data and the function. I don't need the entire program up and running in order to test or debug this function. I just need the data that was sent in, which CANNOT be changed by any other part of the program.
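A tiny sketch of that in Haskell (the Order type and orderTotal are made-up names, purely to illustrate "the data shape is the contract"):

    -- The data shape is the contract: anything that can build an Order can
    -- call the function, and the function can't touch anything else.
    data Order = Order { quantity :: Int, unitPrice :: Double }

    -- A pure black box: its result depends only on the Order passed in, so it
    -- can be tested or debugged with nothing but sample data.
    orderTotal :: Order -> Double
    orderTotal o = fromIntegral (quantity o) * unitPrice o

    main :: IO ()
    main = print (orderTotal (Order { quantity = 3, unitPrice = 2.5 }))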

replies(1): >>45772025 #
2. DrScientist ◴[] No.45772025[source]
Sure, modularity, encapsulation, etc. are great tools for making components understandable and maintainable.

However, don't you still need to understand the entire program, as ultimately that's what you are trying to build?

And if the state of the entire program doesn't change - then nothing has happened. I.e. there still has to be mutable state somewhere - so where is it moved to?

replies(9): >>45773150 #>>45773166 #>>45773254 #>>45773339 #>>45774040 #>>45774256 #>>45774298 #>>45775098 #>>45778109 #
3. maleldil ◴[] No.45773150[source]
> there still has to be mutable state somewhere - so where is it moved to?

This is one way of thinking about it: https://news.ycombinator.com/item?id=45701901 (Simplify your code: Functional core, imperative shell)

4. raddan ◴[] No.45773166[source]
In functional programs, you very explicitly _do not_ need to understand an entire program. You just need to know that a function does a thing. When you're implementing a function-- sure, you need to know what it does. But you're defining it in such a way that the user should not need to know _how_ it works, only _what_ it does. This is a major distinction between programs written with mutable state and those written without. The latter is _much_ easier to think about.

I often hear from programmers that "oh, functional programming must be hard." It's actually the opposite. Imperative programming is hard. I choose to be a functional programmer because I am dumb, and the language gives me superpowers.

replies(1): >>45774575 #
5. jimbokun ◴[] No.45773254[source]
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.

Of course not, that's impossible. Modern programs are way too large to keep in your head and reason about.

So you need to be able to isolate certain parts of the program and just reason about those pieces while you debug or modify the code.

Once you identify the part of the program that needs to change, you don't have to worry about all the other parts while you're making that change, as long as you keep the contracts of all the functions in place.

replies(1): >>45774720 #
6. fwip ◴[] No.45773339[source]
It's moved toward the edges of your program. In a lot of functional languages, places that can perform these effects are marked explicitly.

For example, in Haskell, any function that can perform IO has "IO" in the return type, so the "printLine" equivalent is "putStrLn :: String -> IO ()" (I'm simplifying a bit here). The result is that you know that a function like "getUserComments :: User -> [CommentId]" is only going to do what it says on the tin - it won't go fetch data from a database, print anything to a log, spawn new threads, etc.

It gives similar organizational/clarity benefits as something like "hexagonal architecture," or a capabilities system. By limiting the scope of what it's possible for a given unit of code to do, it's faster to understand the system and you can iterate more confidently with code you can trust.
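A runnable sketch of that idea (putStrLn is real Haskell; greet, getUserComments and its types are invented and stubbed out for illustration):

    -- Effectful: the IO in the return type says this can touch the outside
    -- world (here, printing to stdout).
    greet :: String -> IO ()
    greet name = putStrLn ("hello, " ++ name)

    newtype User = User { userName :: String }
    newtype CommentId = CommentId Int

    -- Pure: no IO in the type, so this can only compute a value from its
    -- argument - it can't log, hit a database, or spawn threads.
    getUserComments :: User -> [CommentId]
    getUserComments _ = map CommentId [1, 2, 3]   -- hypothetical lookup

    main :: IO ()
    main = do
      let user = User { userName = "ada" }
      greet (userName user)
      print (length (getUserComments user))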

7. scott_w ◴[] No.45774040[source]
> However, don't you still need to understand the entire program as ultimately that's what you are trying to build.

Depends on what I'm trying to do. If what I'm trying to handle is local to the code, then possibly not. If the issue is what's going into the function, or what the return value is doing, then I likely do need that wider context.

What pure functions do give you is certainty that the only things that can change the behaviour of that function are its inputs.
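One way to make that concrete in Haskell: a hidden dependency on outside state forces itself into the type, so a function with no IO in its type really can depend only on its arguments (IORef is the standard mutable-cell type; the function names are invented):

    import Data.IORef

    -- Pure: the result is fully determined by the argument, every time.
    double :: Int -> Int
    double x = 2 * x

    -- Impure: the result also depends on whatever the IORef holds at call
    -- time, and the type is forced to admit that with IO.
    scaledBy :: IORef Int -> Int -> IO Int
    scaledBy ref x = do
      factor <- readIORef ref
      pure (factor * x)

    main :: IO ()
    main = do
      ref <- newIORef 10
      print (double 3)       -- always 6
      r1 <- scaledBy ref 3   -- 30 here...
      writeIORef ref 100
      r2 <- scaledBy ref 3   -- ...300 here, after the cell changes
      print (r1, r2)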

8. ◴[] No.45774256[source]
9. bcrosby95 ◴[] No.45774298[source]
It lets you pin down when and where mutation happens more tightly than other methods of restricting state change, such as those in imperative OOP.
10. DrScientist ◴[] No.45774575{3}[source]
I think you missed the point. I understand that if you're writing a simple function with an expected interface/behaviour then that's all you need to understand. Note this isn't something unique to a functional approach.

However, somebody needs to know how the entire program works - so my question was: where does that application state live in a purely functional world of immutables?

Does it disappear into the call stack?

replies(1): >>45774869 #
11. DrScientist ◴[] No.45774720{3}[source]
> Once you identify the part of the program that needs to change,

And how do you do that without understanding how the program works at a high level?

I understand the value of clean interfaces and encapsulation - that's not unique to functional approaches - I'm just wondering in the world of pure immutability where the application state goes.

What happens if the change you need to make is at a level higher than a single function?

replies(1): >>45776162 #
12. MetaWhirledPeas ◴[] No.45774869{4}[source]
It didn't disappear; there's just less of it. Only the stateful things need to remain stateful. Everything else becomes single-use.

Declaring something as a constant means you only need to understand it once. You don't have to trace through the rest of the code finding out new ways it was reassigned. This frees up your mind to move on to the next thing.
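A small Haskell illustration (made-up names): basePrice is bound once and can never be reassigned, so every later use means exactly the value at the definition - there is no later assignment to hunt for.

    -- basePrice is bound once; nothing below (or anywhere else) can rebind it.
    basePrice :: Double
    basePrice = 100.0

    withTax :: Double -> Double
    withTax p = p * 1.2

    discounted :: Double -> Double
    discounted p = p * 0.9

    main :: IO ()
    main = print (discounted (withTax basePrice))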

replies(1): >>45777402 #
13. SatvikBeri ◴[] No.45775098[source]
A pretty basic example: I write a lot of data pipelines in Julia. Most of the functions don't mutate their arguments, they receive some data and return some data. There are a handful of exceptions, e.g. the functions that write data to a db or file somewhere, or a few performance-sensitive functions that mutate their inputs to avoid allocations. These functions are clearly marked.

That means that 90% of the time, there's a big class of behavior I just don't need to look for when reading/debugging code. And if it's a bug related to state, I can pretty quickly zoom in on a few possible places where it might have happened.

14. jimbokun ◴[] No.45776162{4}[source]
Yes, obviously a program with no mutability only heats up the CPU.

The point is to determine the points in your program where mutation happens, and the rest is immutable data and pure functions.

In the case of interacting services, for example, mutation should happen in some kind of persistent store like a database. Think of POST and PUT vs GET calls. Then a higher level service can orchestrate the component services.

Other times you can go a long way with piping the output of one function or process into another.

In a GUI application, the contents of text fields and other controls can go through a function, and the output can be used to update another text field.

The point is to think carefully about where to place mutability into your architecture and not arbitrarily scatter it everywhere.
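For the "piping the output of one function into another" part, a toy Haskell sketch (the names are invented): the pure steps simply compose, and the only effect is the final write to stdout.

    import Data.Char (toUpper)

    -- Pure steps: each one just pipes its output into the next.
    normalize :: String -> String
    normalize = map toUpper

    exclaim :: String -> String
    exclaim s = s ++ "!"

    -- The single place where anything "happens" is the write at the end.
    main :: IO ()
    main = putStrLn (exclaim (normalize "hello"))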

15. raddan ◴[] No.45777402{5}[source]
> Only the stateful things need to remain stateful.

And I think it is worth noting that there is effectively no difference between “stateful” and “not stateful” in a purely functional programming environment. You are mostly talking about what a thing is and how you would like to transform it. Eg, this variable stores a set of A and I would like to compute a set of B and then C is their set difference. And so on.

Unless you have hybrid applications with mutable state (which is admittedly not uncommon, especially when using high performance libraries) you really don’t have to think about state, even at a global application level. A functional program is simply a sequence of transformations of data, often a recursive sequence of transformations. But even when working with mutable state, you can find ways to abstract away some of the mutable statefulness. Eg, a good, high performance dynamic programming solution or graph algorithm often needs to be stateful; but at some point you can “package it up” as a function and then the caller does not need to think about that part at all.
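The set example above, made concrete with the real Data.Set API (using Int for both sets, since set difference needs matching element types):

    import qualified Data.Set as Set

    main :: IO ()
    main = do
      -- "This variable stores a set, I compute another set from it, and a
      -- third is their set difference" - just a chain of transformations.
      let a = Set.fromList [1, 2, 3, 4, 5] :: Set.Set Int
          b = Set.map (* 2) a        -- fromList [2,4,6,8,10]
          c = Set.difference a b     -- fromList [1,3,5]
      print c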

16. emil0r ◴[] No.45778109[source]
You are very right in that things need to change. If they don't, nothing interesting happens and we as programmers don't get paid :p. State changes are typically moved to the edges of a program. Functional Core, Imperative Shell is the name for that particular architecture style.

FCIS can be summed up as R->L->W, where R is all your reads, L is where all the logic happens and is done in the FP paradigm, and W is all your writes. Do all the Reads at the start, handle the Logic in the middle, Write at the end once all the results have been computed. Teasing these things apart can be a real pain to do, but the payoff can be quite significant. You can test all your logic without needing a database or other services up and running. The logic in the middle becomes less brittle and allows for easier refactoring, as there is a clear separation between R, L and W.
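A toy version of that R->L->W shape in Haskell (the file names and the report logic are invented): the read and the write sit at the edges in IO, and the logic in the middle is a pure function you can test with plain data, no services required.

    -- R: all the effectful input happens up front.
    readScores :: FilePath -> IO [Int]
    readScores path = map read . lines <$> readFile path

    -- L: pure logic, testable with nothing but a list of numbers.
    report :: [Int] -> String
    report xs = "count=" ++ show (length xs) ++ " total=" ++ show (sum xs)

    -- W: the effectful output happens at the end.
    writeReport :: FilePath -> String -> IO ()
    writeReport = writeFile

    main :: IO ()
    main = do
      scores <- readScores "scores.txt"   -- R
      let summary = report scores         -- L
      writeReport "report.txt" summary    -- W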

For your first question. Yes, and I might misunderstand the question, so give me some rope to hang myself with, will ya ;). I would argue that what you really need to care about is the data that you are working with. That's the real program. Data comes in, you do some type of transformation of that data, and you write it somewhere in order to produce an effect (the interesting part).

The part where FP becomes really powerful is when you have data that always has a certain shape, and all your functions understand and can work with the shape of that data. When that happens, the functions start to behave more like lego blocks. The data shape is the contract between the functions, and as long as they keep to that contract, you can switch out functions as needed.

And so, to answer the question: yes, you do need to understand the entire program, but only as the programmer. The function doesn't, and that's the point. When the code inside the function doesn't need to worry about the state of the rest of the program, you as the programmer can reason about the logic inside, without having to worry about some other part of the program doing its own thing and, at the same time, messing up the data the function is working on.

Debugging in FP typically involves knowing the data and the function that was called. You rarely need to know the entire state of the program.

Does it make sense?