_han No.21281004
The top comment on YouTube raises a valid point:

> I've programmed in both functional and non-functional (not necessarily OO) programming languages for ~2 decades now. This misses the point. Even if functional programming helps you reason about ADTs, data flow, monads, etc., it has the opposite effect when it comes to reasoning about what the machine is doing. You have no control over execution, memory layout, garbage collection, you name it. FP will always occupy a niche because of where it sits in the abstraction hierarchy. I'm a real-time graphics programmer, and if I can't mentally map (in rough terms, specifics if necessary) what assembly my code is going to generate, the language is a non-starter. This is true for any company at scale. FP can be used at the fringe or the edge, but the core part demands efficiency.

agentultra No.21282130
I too have been programming professionally for nearly two decades. Much longer if you consider the time I spent making door games, MUDs, and terrible games in the 90s.

I think functional programming gives you powerful tools for reasoning about the construction of programs. Even down at the machine level, it's amazing how amortized functional data structures change the way you think about algorithmic complexity; I think laziness was the game changer here. And if you go all in with functional programming, it's surprising how much baseline performance you get with so little effort, and how easy it is to scale to multiple cores and multiple hosts.
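
A concrete (if simplified) sketch of what I mean, in plain Haskell: Okasaki's batched queue. Enqueue is O(1), and dequeue is O(1) amortized because the occasional O(n) reversal is paid for by the n cheap enqueues that built the back list. (This simple version's bound assumes single-threaded use; Okasaki's lazy banker's queue uses suspensions to make the same bound hold even when old versions of the queue are shared, which is why I say laziness was the game changer.)

    -- Batched queue: a front list plus a reversed back list.
    data Queue a = Queue [a] [a]

    emptyQ :: Queue a
    emptyQ = Queue [] []

    -- O(1): cons onto the back list.
    enqueue :: a -> Queue a -> Queue a
    enqueue x (Queue f b) = Queue f (x : b)

    -- O(1) amortized: the occasional O(n) reverse is paid for by
    -- the n cheap enqueues that produced the back list.
    dequeue :: Queue a -> Maybe (a, Queue a)
    dequeue (Queue [] [])    = Nothing
    dequeue (Queue [] b)     = dequeue (Queue (reverse b) [])
    dequeue (Queue (x:f) b)  = Just (x, Queue f b)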

There are some things, like vectorization, that most functional languages I know of are hard-pressed to take advantage of, so we still reach out to C for those.

However, I think we're starting to learn enough about functional programming languages to build efficient compilers for them. Some interesting research that may land soon, and that has me excited, would let a completely pure program perform register and memory mutations under the hood, so to speak, to boost baseline performance. I don't think we're far off from a dependently typed, pure, lazy functional language with bounded performance guarantees... one that could possibly even compile programs that need no run-time support from a GC.

I grew up on an Amiga, and later on IBM PCs, and the instinct to think about programs in terms of a program counter, registers, and memory is baked into me. It was hard to learn a completely different paradigm 18 or so years into my professional career. And to me, I think, that's the great accident that prevented FP from being the norm: several generations were simply not exposed to it early on, on our personal computers. We had no idea it was out there until some of us went to university or the Internet came along. And even then... really understanding the breakthroughs FP has made requires quite a bit of learning, and learning is hard. People don't like learning. I didn't. It's painful. But it's useful and worth it, and I'm convinced that FP will come to be the norm if some project can manage to overcome the network effects and incumbents.

hootbootscoot No.21283484
OTOH, think of the vast hordes of new developers exposed to lots of FP without the background in Amiga, PC, and bare-metal programming that you have.

FP has largely been introduced into the mainstream of programming through JavaScript and web dev. Let that sink in.

End of the day, the computer is an imperative device, and your training helps you understand that.

FP is perfectly viable as a high-level specification or code-generation approach, but you are still aware of the leaky abstraction, the blackish box underneath, and of how your code actually runs on it.

I see FP and the "infrastructure as code" movement as part and parcel of the same end goal, but I feel that our current industry weaknesses come from hiding from, and running away from, how our code actually executes. Across the board.

socksy No.21283747
"End of the day, the computer is an imperative device, and your training helps you understand that."

I mean... it's not though, is it? Some things happen synchronously, but that is not the same as being an imperative device. Almost every CPU out there is multi-core these days, and GPUs absolutely don't work in an imperative manner, despite what a GLSL script looks like.
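
To illustrate the point from the other side: in a pure language, what a program means is independent of the order in which it is evaluated, so parallelism can be added without changing the result. A small sketch in Haskell (assuming GHC, the parallel package, and compiling with -threaded):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Stand-in for real work.
    expensive :: Int -> Int
    expensive n = sum [1 .. n]

    -- parMap may evaluate the list elements on any number of cores
    -- (run with +RTS -N), yet the result is guaranteed to equal the
    -- sequential `sum (map expensive ...)`: purity decouples what a
    -- program means from how it is executed.
    main :: IO ()
    main = print (sum (parMap rdeepseq expensive [100000, 200000 .. 2000000]))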

If we had changed the mainstream programming model years ago, perhaps chip manufacturers would have had more freedom to break free of the imperative mindset, and we could have radically different architectures by now?

agentultra No.21284854
Individual cores execute instructions speculatively these days!

Predicting how a program will be executed, even in a language such as C99 or C11, requires reasoning through several layers of abstraction.

What most programmers using these languages are concerned about is memory layout, as that is the primary bottleneck these days. The same is true for developers in FP languages. Most of the FP languages I've seen have facilities for unboxing types and working with flat arrays, just as you do. It's a bit harder to squeeze the Haskell RTS onto a constrained platform, which is where I'd either simply write C... or, better, compile a subset of Haskell without the RTS to a C program.
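
As a sketch of what those facilities look like (assuming GHC and the vector package): Data.Vector.Unboxed stores the Doubles below in one flat, contiguous buffer rather than as boxed heap objects, so the dot product walks memory much like the equivalent C loop, and stream fusion typically collapses the zipWith and sum into a single pass.

    import qualified Data.Vector.Unboxed as U

    -- Unboxed vectors: a flat, contiguous buffer of Doubles,
    -- no per-element heap objects or pointer chasing.
    dot :: U.Vector Double -> U.Vector Double -> Double
    dot xs ys = U.sum (U.zipWith (*) xs ys)

    main :: IO ()
    main = print (dot v v)
      where
        v = U.generate 1000000 fromIntegral  -- 0.0, 1.0, 2.0, ...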

What I find neat, though, is that persistent structures, memoization, laziness, and referential transparency give us a lot of expressive power while delivering a lot of performance out of the gate. Much as modern CPU cores execute instructions speculatively while preserving the promise of sequential execution from the outside, these structures combined with a pure, lazy runtime let us speculatively memoize and persist computations for later reuse. This lets me write algorithms that search infinite spaces over immutable structures and still get good average-case behavior, since the data structures and lazy evaluation amortize the cost for me.
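
A tiny example of the flavor, in plain Haskell: the infinite list below doubles as a lazily built memo table; each entry is computed at most once, on first demand, and a search over the "infinite" space only ever forces the prefix the answer needs.

    -- An infinite, lazily built memo table: each entry is computed
    -- at most once and then shared by later references.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- Search the "infinite" space: only the prefix up to the answer
    -- is ever forced or allocated.
    firstFibAbove :: Integer -> Integer
    firstFibAbove n = head (dropWhile (<= n) fibs)

    main :: IO ()
    main = print (firstFibAbove 1000000)  -- 1346269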

There's a good power-to-weight ratio there that, to me, we're only beginning to scratch the surface of.