Scheme 'clothes' was a viable option. Lisp remains the most popular scripting language among AutoCAD users despite Autodesk pushing other languages (.NET and JS). So popular that AutoCAD clones also use it as a scripting language[1].
EDIT: [1] https://www.zwsoft.com/zwcad/features#Dynamic-Block
IMHO it does a better job than Lodash, because:
1. All functions are automatically curried.
2. The order of parameters lends itself to composition (see the sketch after this list).
EDIT: 3. Transducers.
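These two properties are the ones Haskell-style APIs are built around, so a Haskell sketch shows the payoff most directly (the function names are mine, not from any library):

import Data.Char (toUpper)

-- curried, data-last functions partially apply into new functions
-- that compose point-free
shout :: [String] -> [String]
shout = map (map toUpper) . filter (not . null)

shout ["hi", "", "there"] gives ["HI","THERE"] without the list argument ever being named.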
But those philosophical perspectives aside, personally I find my brain works very much like a Turing Machine when dealing with complex problems. Apart from my code, even most of my todos are simple step-by-step instructions to achieve something. It's easy to understand why other non-math folks, like me, would prefer a Turing Machine over Lambda Calculus' way of writing instructions.
This could be why OOP/Imperative was often preferred over FP.
Anyway, I've done quite a few fairly large katas in JS using only an FP style of coding without any dependencies and I really enjoyed it.
For instance, it's very common to have data types with mutable state in OCaml, or to use immutable data structures, closures, and higher-order functions in, let's say, Python. I don't see such a clear dichotomy between functional/non-functional programming languages anymore.
Besides, there are other language "features" that I feel have more impact on the code I write. For instance, static/dynamic typing, asynchronous I/O vs actors vs threads, module systems.
I see functional programming more as a tool and a programming discipline, well-suited to solving some problems, rather than a paradigm that one should adhere to no matter what.
Maybe he was just being modest, or, like John McCarthy, just didn't see or believe in its potential.
Note that this was before computers or programming, and that there's no formal proof that a Turing machine can encode any computation - so its convincingness was important.
Exactly this. Here's what baking a cake looks like in FP:
* A cake is a hot cake that has been cooled on a damp tea towel, where a hot cake is a prepared cake that has been baked in a preheated oven for 30 minutes.
* A preheated oven is an oven that has been heated to 175 degrees C.
* A prepared cake is batter that has been poured into prepared pans, where batter is mixture that has chopped walnuts stirred in, and mixture is butter, white sugar and brown sugar that has been creamed in a large bowl until light and fluffy.
Taken from here: https://probablydance.com/2016/02/27/functional-programming-...
Even when the recursive form is a more natural representation, like arithmetic sequences: start at s, increase by d with each step:
Recursive form: a(0) = s, a(n) = a(n-1) + d
Analytical form: a(n) = s + n*d
The analytical form seems simpler, neater, more "right" and more efficient to me - even though, if you want the whole sequence, the recursive form is more efficient (given tail-call optimisation). I suspect I'm just not smart enough.
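Both forms side by side, as a quick Haskell sketch (names are mine):

-- recursive: natural, and efficient if you want the whole sequence
arith :: Num a => a -> a -> [a]
arith s d = s : arith (s + d) d      -- a(0) = s, a(n) = a(n-1) + d

-- analytical: efficient for a single term
arithAt :: Num a => a -> a -> a -> a
arithAt s d n = s + n * d

take 5 (arith 3 2) yields [3,5,7,9,11], while arithAt 3 2 4 computes the fifth term directly.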
FP can be much shorter, and the execution model isn't actually hidden, just unfamiliar (and unintuitive and unnatural - for me). Consider: all suffixes of a list. In jq:
while( length>0; .[1:] )
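For example (a sketch, with the output stream collected into one array and shown as a comment):

# [1,2,3] | [while(length > 0; .[1:])]   =>   [[1,2,3],[2,3],[3]]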
Yet to see a single line of a functional language in production.
As other commenters have mentioned, most decent modern languages are multi-paradigm.
cake = map (cool . bake 30 175) . splitIntoPans $ mix [ butter, sugar, walnuts ]
If SchemeScript hadn't caught on, it might have been that VBScript took over the web.
Some of my friends are in love with FP. I am not. I've done more FP than most, I can work with it, but my brain has never become in tune with it. I can bang out my intent as imperative code in real time, but with FP I have to stop and think to translate.
FP also means that I can't always easily tell the runtime complexity of what I'm writing and there's a complex black box between my code and the metal.
Maybe some of my friends' brains are superior and can think in FP, all the more power to them. But the empirical evidence is that most people are not capable of that, so FP will probably forever remain in the shadow of imperative programming.
> I've programmed both functional and non-functional (not necessarily OO) programming languages for ~2 decades now.

This misses the point. Even if functional programming helps you reason about ADTs, data flow, monads, etc., it has the opposite effect for helping you reason about what the machine is doing. You have no control over execution, memory layout, garbage collection, you name it. FP will always occupy a niche because of where it sits in the abstraction hierarchy. I'm a real-time graphics programmer, and if I can't mentally map (in rough terms, specific if necessary) what assembly my code is going to generate, the language is a non-starter. This is true for any company at scale. FP can be used at the fringe or the edge, but the core part demands efficiency.
https://speakerdeck.com/hadley/the-joy-of-functional-program...
But, that was OK too, because if my guess is right, your company's product also had FAR FAR FAR better COM bindings than Autocad did for 99% of what you'd want to automate.
As a recent example GitHub uses this Haskell application for code analysis: https://github.com/github/semantic
Also, Erlang powers a huge amount of the US’ cellular infrastructure, as well as RabbitMQ, which is used in a ton of production workloads.
There’s actually a pretty decent list on Wikipedia: https://en.m.wikipedia.org/wiki/Functional_programming
For example, you can put functions in a list and push a data structure through them, like a pipeline.
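A quick Haskell sketch of that idea (names are mine):

-- a pipeline as a list of functions, folded over the data
pipeline :: [a -> a] -> a -> a
pipeline fs x = foldl (flip ($)) x fs

cleanup :: Int -> Int
cleanup = pipeline [(* 2), (+ 1), (`mod` 10)]   -- 7 -> 14 -> 15 -> 5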
edit: https://probablydance.com/2016/02/27/functional-programming-...
I mean, usually the problem in FP is that you simply can't type mutation (you'd have to use dependent types and so on). Okay, so use immutability, great, but then every "step" is just some franken-type-partial-whatever. And TypeScript has great support for these (first of all it infers a lot, but you can also use nice type combinators to safeguard that you get what you wanted).
I don't like pure FP exactly because of this, because many times you have to use some very complicated constellation of concepts to be able to represent a specific data flow / computation / transformation / data structure. Whereas in TS / Scala you just have a nice escape hatch.
When copied and pasted into the next tab it leads to the article.
> [...]
> This could be why OOP/Imperative was often preferred over FP.
Though this doesn't really explain why OOP is preferred over imperative (since the former doesn't really correspond to a set of step-by-step instructions).
In Haskell, for instance, the do notation lets you write imperative-looking code:
f article = do
  x <- getPrices article
  y <- book article
  finishDeal article x y
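The compiler then desugars this into nested binds, roughly as follows (a sketch; the real desugaring also handles pattern-match failures):

f article =
  getPrices article >>= \x ->
  book article >>= \y ->
  finishDeal article x y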
It seems to me that OOP, Functional, and Relational programming models all try to abstract away the accidental steps, but like all abstractions there are limitations.
I suspect that once familiar with one of these models, imperative seems awfully tedious; however, the code is now more obscure to those not well versed in the paradigm, so we have a trade-off between ease of use for many and optimality for some.
I've worked on codebases where people ignore all the built-in JS functions (like Array.map/filter) and write Ramda spaghetti instead, with multiple nested pipes and lenses and what not, to show off their FP purism.
Most of the time you don't need any of this; it just makes the codebase unreadable, and hard for new people to join the project and be productive in a timely fashion.
EDIT: Is this controversial? What are downvoters taking issue with? That Python is a very popular language? That it is much slower than C/C++?
But what if you want to run a bakery and split the work across multiple cooks? In that case it helps to have clearly defined ingredients.
I'm only trying to say that it all depends on the context. Obviously personal preference is a big factor too.
[butter, sugar, walnuts]
mix()
splitIntoPans(pans = 3)
bake(time = 30, temp = 175)
cool(time = 5)
Hmm, wait a second... Although I don't work directly with the FPGA stuff, it's still a very, very small piece of the overall pie (and new).
The motivation behind using OCaml is mainly its correctness(!), not because it's fast (it's not). See Knight Capital for a good example as to why. There are great videos on YT by Yaron Minsky that explain this better than I can.
Now, various subsets of the items above have been labeled with different names (functional, procedural, OOP, generic, whatever), but of course most of the time no two people can agree on which subset deserves which label.
I must not be the only one, because a lot (but not all) of very successful languages are not very opinionated and let people mix and match bits as needed.
Similarly, google search, at the outer most layers, is javascript, then probably some layer of Go or similar, but then the core is highly tuned C++ and assembler.
That being said, it seems like performance is not an issue for most of the code written these days, aside from not writing quadratic solutions for problems solvable in linear time.
If you use Java, .NET, Go or the like, odds are you would be fine with a functional language; and if you need performance with those languages, odds are that you will need arcane knowledge equivalent to what you would need to make performant FP code.
One thing a lot of programmers do is abstract SQL into OO style, even though SQL describes a relation that can be computed to a result, in some ways similar to a function. It seems that most prefer to treat it as if it has state, even though it doesn't.
Sure, the tables where data is stored have state, but the sum of the tables is a relationship in time, and depending on how you look at it you get different results. It is very hard to map relationships to OO correctly.
It is probably easier for most people to think about the world as a set of things rather than a relation in time. Many of our natural languages are organized around things.
This is exactly how I felt when I inherited a big project that uses lodash/fp. Having spent ~6 months with the code now I prefer having a functional layer on top of JS. It does make sense.
I do not think this is true outside your domain. Amazon uses Java, C++ and Perl. At the time I was there, the majority of the website code was in Perl. Amazon is one of the biggest companies on the planet.
But most importantly, prototypal inheritance, in other words invoking the object's own methods as if they were pure functions, is what really puts me off.
Simple as that.
I am a functional and OOP programmer myself. I find functional way more elegant for modeling most mathematical problems, but OOP way better at modeling real-life things with state.
OOP and states introduce lots of problems and complexity, but the solution is not removing states, or a series of complex mathematical entelechies.
In fact "removing states" is not really removing them. It is creating new objects with static states on it. It makes it super hard to model real life.
(Dynamic) states exist in real life. Temperature, pressure, height, volume, brightness, weight...
There are programmers who treat programming as a religion: they only program in one system and believe it is the best thing in the world and everybody should be forced to use it. I feel sorry for them and the people that depend on them.
The solution will be new paradigms that are neither OOP nor FP.
OO is the norm because it has immediate business value and is easier to teach to young people. Most programmers in the workplace are produced by educational institutions, and educational institutions have competitive, quantifiable metrics to hit.
FP requires thinking in terms of calculus. This isn't hard; personally I find it much faster and easier. But thinking in calculus does require some maturity, and possibly some analytical experience, which young students may not find comfortable.
---
This question can also be answered in terms of scale.
FP reinforces simplicity. Simplicity requires extra effort, often through refactoring, in order to scale or allow extension for future requirements. This is a mature approach that allows a clearer path forward during maintenance and enhancements, but it isn't free.
OO scales immediately with minimal effort. OO, particularly inheritance, strongly reinforces complexity, but scale is easily and immediately available. This is great until it isn't.
Yes, OCaml has garbage collection. It's a very efficient GC, and it is only ever called when you try to allocate something and the system determines that it's time for cleanup (https://ocaml.org/learn/tutorials/garbage_collection.html, though this might change if/when Multicore OCaml ever happens?). So if you write an innermost function that does arithmetic on stuff but never allocates data structures, you will not have GC problems because you will not have GC during that time, period.
Also, there are cases where destructive mutation of things is more efficient than making pure copies. OCaml allows you to do that, you don't need to fiddle with monads to simulate state.
There really isn't that much black magic there. Just don't believe everything that is said about "FP".
I've been trying to think of a totally clean functional abstraction, i.e. that's functional under the hood, but there's no way to tell. Perhaps in a compiler?
Computers are imperative devices. I don't think that a for-loop or a map function fundamentally impedes understanding of this concept. I DO think that languages that run on top of virtual machines need to acknowledge their dependency hierarchy and stop attempting to "level/equalize" the languages in question. One would use C in order to write a VM like V8 that could then run your scripting language. The core of AutoCAD is surely C++ with some lower-level C (and possibly assembler), regardless of whichever scripting language has then been implemented inside of this codebase, again, on a virtual machine.
The Operating System is a virtual machine. The memory management subsystem is a virtual machine.
JavaScript runs on browsers (or V8, but that was originally the JS engine of a browser) and has inherent flaws (lack of a type system, for one) that limit its use in specifying/driving code generation that could provide lower-level functionality. THAT is the essential issue. VHDL and Verilog can specify digital logic up to a certain level of abstraction. C++ and C code-generation frameworks can be used to generate HDL code to some degree, to the degree that libraries make them aware of the lower-level constructs such HDLs work in. I have no doubt that Python's MyHDL presents a very low learning curve in terms of having the Python interface, but then the developer needs to be aware of what sort of HDL MyHDL will output and how it will actually perform in synthesis and on a real FPGA.
We don't need MORE layers of opaque abstraction. People need to learn more about how computers work, as abstraction doesn't obviate the need to know how the lower levels work in order to optimize one's higher-level code.
I can provide specific examples regarding libraries that purport to provide a somewhat blackbox interface, but upon deeper examination DO, in fact, require intimate knowledge of what is inside.
Abstractions are imperfect human attempts to separate concerns and they are temporary and social membranes.
Now, having said all of this: If a person ran a Symbolics Lisp system, such a system was holistic and the higher-level Lisp programmer could drill down into anything in the system and modify it or see how it was made.
I digress... read the source code for any magical black boxes you are thinking of employing in your work.
Functional Programming might have great advantages in correctness but sooner or later the code is going to be run on a real CPU with real instructions and all the mathematical abstractions don’t mean much there.
That said, I can see they have their place for specialized areas.
[butter, sugar, walnuts]
^^^
Somewhere wanted type CakeIngredients but missing record field "Flour"
If imperative-style programming came with type inference on the level of the OCaml compiler, sign me up. For now, though, I can spare a few cycles in exchange for correct programs.

Actually, a lot of programming language improvements have come from trying to make Lisp performant.
Also there are different aspects to performance, and when (for example) it comes to latency, a platform like Erlang/BEAM makes it particularly easy to get low latency in certain contexts without thinking about implementation much. In Haskell you can accomplish similar things with green threads. It will probably need more clock cycles for a given action than a tuned C implementation but that's not always what matters, and the code will probably be cleaner.
VS Code, VIM works for me. Conda or PIP also. Not sure what is missing for you.
>> but not everything in the world is glue code
I never claimed that.
>> and as soon as you need to do something O(n) on your dataset, you’re either paying an enormous performance penalty or you’re not writing that bit in Python
Depends what you need to do.
My entire comment was about the fact that details matter and you can't just blindly pick a language because of out-of-the-box performance.
You cannot have referential transparency and encapsulation at the same time.
In order to prevent mutations (which is a requirement of FP), a module cannot hold any state internally; this necessarily means that the state must be passed to each module action from the outside. If state has to be passed to each module action from the outside, then the outside logic needs to be aware of which state is associated with which action of which child module. If higher-level modules need to be aware of all the relationships between the logic and state of all lower-level (child) modules, that is called a 'leaky abstraction' and is a clear violation of encapsulation.
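Concretely, this is the kind of threading the parent ends up doing (a Haskell sketch with a hypothetical Counter module; the State monad can hide the plumbing, but the state type still appears in every signature):

newtype Counter = Counter Int

-- the module's state type is exposed; callers pass it in and out
tick :: Counter -> (Int, Counter)
tick (Counter n) = (n + 1, Counter (n + 1))

useTwice :: Counter -> (Int, Counter)
useTwice c0 =
  let (a, c1) = tick c0    -- the caller, not the module, sequences the state
      (b, c2) = tick c1
  in  (a + b, c2)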
Encapsulation (AKA 'blackboxing') is a very important concept in software development. Large complex programs need to have replaceable parts and this requires encapsulation. The goal is to minimize the complexity of the contact areas between different components; the simpler the contact areas, the more interchangeable the components will be. It's like Lego blocks; all the different shapes connect to each other using the same simple interface; this gives you maximum composability.
Real world software applications need to manage and process complex state and the best way to achieve this is by dividing the state into simple fragments and allowing each fragment to be collocated with the logic that is responsible for mutating it.
If you design your programs such that your modules have clear separation of concerns, then figuring out which module is responsible for which state should be a trivial matter.
This may illustrate that humans aren't good compilers of functional code, or in particular that humans aren't good at parsing poorly formatted functional code (again, computer parsers don't care about formatting). But I don't think it indicates that functional code isn't good for reading and writing, even for the same humans.
I also don't think this recipe resembles FP. Where are the functions and their arguments? There is no visible hierarchy. It is unnecessarily obtuse in the first place.
Did you watch the video? The most popular language is JavaScript, which is not functional only because of a quirk of history.
The video makes an argument for marketing being the reason.
I think functional programming gives you powerful tools to reason about the construction of programs. Even down to the machine level it's amazing how amortized functional data structures change the way you think about algorithmic complexity. I think laziness was the game changer here. And if you go all in with functional programming it's surprising how much baseline performance you can get with such little effort and how easy it is to scale to multiple cores and multiple hosts.
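The tiny classic example of the laziness point (a sketch):

-- each element is computed once and then shared, so the naive
-- exponential recursion becomes a linear, memoized sequence
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)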
There are some things like vectorization that most functional languages I know of are hard pressed to take advantage of so we still reach out to C for those things.
However, I think we're starting to learn enough about functional programming languages and how to make efficient compilers for them these days. Some interesting research that may be landing soon, and that has me excited, would enable a completely pure program to do register and memory mutations under the hood, so to speak, in order to boost baseline performance. I don't think we're far off from seeing a dependently typed, pure, lazy functional language that can have bounded performance guarantees... and possibly be able to compile programs that don't even need run-time support from a GC.
I grew up on an Amiga, and later IBM PCs, and that instinct to think about programs in terms of a program counter, registers, and memory is baked into me. It was hard to learn a completely different paradigm 18 or so years into my professional career. And to me, I think, that's the great accident that prevented FP from being the norm: several generations were simply not exposed to it early on, on our personal computers. We had no idea it was out there until some of us went to university or the Internet came along. And even then... to really understand the breakthroughs FP has made requires quite a bit of learning, and learning is hard. People don't like learning. I didn't. It's painful. But it's useful and worth it, and I'm convinced that FP will come to be the norm if some project can manage to overcome the network effects and incumbents.
Compared with, say, Go where I just hover the cursor.
As for pip, you also need virtual environments to protect you from side effects, and even then, if you’re doing C interop you probably still have dynamic links to so files outside of your virtualenv. Our team spends so much time dealing with environment issues that we’re exploring Docker solutions. And then packaging and distribution of Python artifacts is pretty awful. We’re using pantsbuild.org to build PEX files which works pretty well when it works, but pants itself has been buggy and not well documented.
> I never claimed that
I couldn’t tell since the context of the thread made it sound like you were either implying that Python is suitably performant because the majority of programming is glue code or you were going somewhat off topic to talk about glue code. I now understand it was the latter.
> Depends what you need to do. My entire comment was about that details matter and you can't just blindly pick a language because of out of the box performance.
I agree, but in practice you rarely know the full extent of what you will need, so you should avoid painting yourself into a corner. It really doesn't make sense to choose Python any more if you are anything less than certain about the performance requirements for your project for all time - we now have languages that are as easy to use as Python (I would argue even easier, despite my deep familiarity with Python) and which don't paint you into performance corners. Go is probably the best in class here, but there are probably others too.
A video game programmer would probably not be helped because a big part of their coding, as I understand it, is wringing out every clock cycle and byte of memory possible. However, the programmer writing the AR/AP system that allows tracking for in-game purchases would find OCaml, for instance, very beneficial.
I'm not pretending to be the first to state this observation but I feel like it needs reinforcement here.
For example: Stop & Shop has 415 stores, and

365 days * 415 stores * 100 purchases per day * 50 datoms per purchase

comes to roughly 760M datoms per year - enough to fill up your system in 14 years (assuming a capacity on the order of 10B datoms) without even spending datoms on inventory and the like. And that "100 purchases per day" could be low by a factor of 5 or 10 (I don't know).

It actually takes a lot of unlearning to let go of control of the machine and let it solve the problem, when you are used to telling it how to solve the problem. I came to that conclusion when I dabbled in Prolog just to learn something different, and I had a really hard time getting my head around CL when I first got into it, due to wanting to tell the machine exactly how to solve the problem. I think it was just ingrained in those of us that grew up closer to the metal, and I think the Byte magazine reference in the talk has a lot to do with it; we just did not have that much exposure to other ideas, given that mags and Barnes & Noble were our only sources of new ideas. That, and most of us were kids just hacking on these things alone in our bedrooms with no connectivity to anyone else.
I remember, before the web, getting on newsgroups and WAIS and thinking how much more info was available than on the siloed BBSes we used to dial into. Then the web hit and suddenly all of these other ideas gained a broader audience.
Given a choice between changing my browsing behaviour to see his content or just blocking it so it (the testicle redirect or the other content) will never bother my vision again, I go for the latter option.
Personally, I think consistency is more important here because it leads to better predictability. If you don't know whether the compiler assigns something to be evaluated lazily or eagerly that could lead to a lot of nasty debugging issues.
I can't write Lisp to save my life, but I know roughly how you're supposed to do it.
At some point in history, people stopped worrying about not understanding compilers, how they allocate registers and handle loops and do low-level optimizations. The compilers (and languages like C or C++) became good enough (or even better than humans in many cases) in optimizing code.
The same happened with managed memory and databases, and it will happen here, too. Compilers with FP will become good enough in translating to the machine code so that almost nobody will really care that much.
The overall historical trend of programming is more/better abstractions for humans and better automated tools to translate these abstractions into performant code.
I created an Electron (TypeScript/React) desktop application called Onivim [1] and then re-built it for a v2 in OCaml / ReasonML [2] - compiled to native machine code. (And we built a UI/Application framework called Revery [3] to support it)
There were very significant, tangible improvements in performance:
- Order of magnitude improvement in startup time (time to interactive, Windows 10, warm start: from 5s -> 0.5s)
- Less memory usage (from ~180MB to <50MB). And 50MB still seems too high!
The tooling for building cross-platform apps on this tech is still raw & a work-in-progress - but I believe there is much untapped potential in taking the 'React' idea and applying it to a functional, compile-to-native language like ReasonML/OCaml for building UI applications. Performance is one obvious dimension; but we also get benefits in terms of correctness - for example, compile-time validation of the 'rules of hooks'.
- [1] Onivim v1 (Electron) https://github.com/onivim/oni
- [2] Onivim v2 (ReasonML/OCaml) https://v2.onivim.io
- [3] Revery: https://www.outrunlabs.com/revery/
- [4] Flambda: https://caml.inria.fr/pub/docs/manual-ocaml/flambda.html
> “During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.
Rumor on the outside suggests that Jane St uses OCaml for things like deploying software.
If the compiler only forces values that would be forced anyway, there shouldn't be a problem. Which is why GHC actually does it: https://wiki.haskell.org/Performance/Strictness
Strictness analysis is good and useful... and difficult and not magic.
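The classic case it handles, sketched in Haskell (the bang patterns make explicit what the analysis usually infers at -O):

{-# LANGUAGE BangPatterns #-}

-- with a lazy accumulator this would build a chain of thunks;
-- strictness analysis (or the bangs) forces it at each step
mean :: [Double] -> Double
mean = go 0 0
  where
    go !s !n []       = s / fromIntegral (n :: Int)
    go !s !n (x : xs) = go (s + x) (n + 1) xs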
There are two kinds of people, I guess. To me, this description simply encapsulates the process of being a programmer. Boo hoo, you had to think a little bit and come back later to a hard problem in order to figure it out.
I'm sorry, but that's literally how every profession which requires engineering skills plays out. And like other professions, after you solve a problem once you don't have to solve the problem again. It's solved. The next template Gabriel writes in that flavor will not take nearly as long.
Seriously, all of these points he raises against FP are entirely contrived, and come across as the meaningless complaining of an uninspired programmer.
Is there any more info/links available about this?
Nah, I have a PhD in math and I agree with you completely. Imperative is way better. And most mathematicians agree with me. You can see this by cracking open any actual math or logic journal and looking how they write pseudocode (yes, pseudocode: where things like performance don't matter one tiny little bit). You'll see they're almost entirely imperative. Sometimes they even use GOTO!
Then OOP happened, and many of the early guarantees about how something ran went away; abstracting everything meant we couldn't reasonably know what was happening behind the scenes. However, that wasn't the biggest issue. Performance of CPUs and memory had improved significantly, to the point where virtual method calls weren't such a big deal. What was becoming important was the ability to manage the complexity of larger projects.
The big deal has come recently with the need to write super-large, stable applications. Sure, if you're writing relatively small applications like games or apps with limited functionality scope, then OOP still works (although it still has some problems). But, when applications get large the problems of OOP far outstrip the performance concerns. Namely: complexity and the programmer's inability to cognitively deal with it.
I started a healthcare software company in 2005 - we have a web application that is now on the order of 15 million lines of code. It started off in the OOP paradigm with C#. Around 2012 we kept seeing the same bugs over and over again, and it was becoming difficult to manage. I realised there was a problem. I (as the CTO) started looking into coping strategies for managing large systems; the crux of it was to:
* Use actor model based services - this helped significantly with cognition. A single thread, mutating a single internal state object, nice. Everyone can understand that.
* Use pure functional programming and immutable types
The reason pure functional programming is better (IMHO) is that it allows for proper composition. The reason OOP is worse (IMHO) is because it doesn't. I can't reasonably get two interfaces and compose them in a class and expect that class to have any guarantees for the consumer. An interface might be backed by something that has mutable state and it may access IO in an unexpected way. There are no guarantees that the two interfaces will play nicely with each other, or that some other implementation in the future will too.
So, the reality of the packaging of state and behaviour is that there's no reliable composition. So what happens is, as a programmer, I'd have to go and look at the implementations to see whether the backing types will compose. Even if they will, it's still brittle and potentially problematic in the future. This lack of any kind of guarantee and the ongoing potential brittleness is where the cognitive load comes from.
If I have two pure functions and compose them into a new function, then the result is pure. This is ultimately (for me) the big deal with functional programming. It allows me not to be concerned about the details within, and gives me stable and reliable building blocks which can be composed into larger stable and reliable building blocks. Turtles all the way down, pure all the way down.
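A minimal Haskell sketch of that guarantee (the types and rules are hypothetical):

data Error = Invalid | OutOfStock deriving Show
newtype Order  = Order Int
newtype Priced = Priced Int

validate :: Order -> Either Error Order
validate o@(Order n) = if n > 0 then Right o else Left Invalid

price :: Order -> Either Error Priced
price (Order n) = Right (Priced (n * 10))

-- composing two pure steps yields a pure step; nothing in the
-- signatures lets hidden IO or mutable state sneak in
process :: Order -> Either Error Priced
process o = validate o >>= price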
When it comes to performance I think it's often waaaay overstated as an issue. I can still (and have done) write something that's super optimised but make the function that wraps it pure, or at least box it in some way that it's manageable. Because our application is still C# I had to develop a library to help us write functional C# [1]. I had to build the immutable collections that were performant - the cost is negligible for the vast majority of use-cases.
I believe our biggest problem as developers is complexity, not performance. We are still very much working with languages that haven't really moved on in 20+ years. Yes, there's some improvements here and there, but we're probably writing approximately as many lines of code to solve a problem as we were 20 years ago, except now everyone expects more from our technology. And until we get the next paradigm shift in programming languages we, as programmers, need coping strategies to help us manage these never-ending projects and the ever increasing complexity.
Does that mean OOP is dead as an idea? No, not entirely. It has some useful features around extensible polymorphic types. But, shoehorning everything into that system is another billion dollar mistake. Now when I write code it always feels right, and correct, I always feel like I can trust the building blocks and can trust the function signatures to be honest. Whereas the OOP paradigm always left me feeling like I wasn't sure. I wish I'd not lost 10+ years of my career writing OOP tbh, but that's life.
Is functional programming a panacea? Of course not, programming is hard. But it eases the stress on the weak and feeble grey matter between our ears to focus on the real issue of creating ever more impressive applications.
I understand that my reasons don't apply to all programmers in all domains. But when blanket statements about performance are wheeled out I think it's important to add context.
leaky abstractions require the occasional lid-lifting... and all abstractions have a tendency to leak somewhere or other, especially if they attempt to be all encompassing.
I think FP is certainly a viable high-level specification, but ultimately there is lower-level code 'getting stuff done' (lol, "side effects"). One has to be at least roughly aware of HOW one's specification is getting implemented in order to solve problems that arise and in order to optimize.
This is all the more compelling reason to cease this relentless push to "cram more stuff down the tubes" or "add more layers to the stack"
I honestly think that we need to return to KIS/KISS (keeping it simple)
SIMPLIFY and remove extraneous stuff that prevents one from having a total mental model of what is happening.
Python's ecosystem is built on this premise. Let some other language (C) do the fast stuff and leverage that for your applications. It's not a niche language, even though you don't have direct control over things like memory management and GC.
Perhaps the commenter's role of real time graphics programming is actually the niche.
My experience doing functional programming is that it hurt my brain; it just doesn't map as cleanly to how I think of things happening compared to imperative programming. It was just really painful to code in, and most of my classmates had the same opinion.
End of the day, there are instructions emitted in a linear fashion, and other instructions running (the OS?) can provide an execution context ("Hi process, here's your memory area with pretend addresses so you can think it's all yours; you go over there to Core-2 and run at Z priority.")
OO is not particularly easy to learn compared with FP, but it does have the contextual advantage of having been delivered on the back of FAST compiled languages like C++.
Java runs as fast as the big money thrown into its VM can make it run. If the JVM were dog-slow, you would see lower adoption of it.
OCaml as used by corps like Jane Street etc. is not directly for running application code (or rather, the application in question does code generation).
High-level languages could be expected to adopt either code-generational or Cython-style approaches (Chicken Scheme, for example).
C++ merely has the historical accident of bridging 2 generational worlds of computing, hence you can find C++ full-stack.
Anyone for doing DSP purely in Ruby with no C-libs?
The idea of blackboxing/encapsulation is that the parent component should know as little as possible about the implementation of its child components.
I think things like real-time graphics are the exception not the rule. Most of the software run by users these days is in the context of a browser, which is implemented many layers of abstraction away from the machine. Much of the code running servers is also still interpreted scripting languages.
Don't get me wrong, I wish a lot more software was implemented with performance in mind, because the average-case user experience for software could be so much better, but a ton of the software we use today could be replaced by FP and perform just as well or better.
FP has been largely introduced into the mainstream of programming through Javascript and Web Dev. Let that sink in.
End of the day, the computer is an imperative device, and your training helps you understand that.
FP is a perfectly viable high-level specification or code-generational approach, but you are aware of the leaky abstraction/blackish box underneath and how your code runs on it.
I see FP and the "infrastructure as code" movement as part and parcel to the same cool end reality goal, but I feel that our current industry weaknesses are related to hiding and running away from how our code actually executes. Across the board.
Note that you almost never need to care about what the entirety of a "complex program" will generate - but often you need to care about what specific pieces you are working on will generate.
The C language itself might be defined in terms of an abstract machine, but it is still implemented by real compilers - compilers that, btw, you also have control over and often provide a lot of options on how they will generate code.
And honestly, if you have "absolutely no idea what kind of machine code" your C compiler will generate, then perhaps it will be a good idea to get some understanding.
(Though I'd agree that it isn't easy, since a lot of people treat compiler options as wishing wells where they put "-O90001" and compiler developers are perfectly fine with that - there is even a literal "-Ofast" nowadays - instead of documenting what exactly they do.)
Most code is LOB apps and social media apps churned out by software factories and internal IS/IT areas. In these kinds of projects coding is a rite of passage before becoming a team leader or a project manager, so most devs won't invest much in their coding skills. As a result the average code tends to be badly decomposed procedural code over a procedural-like class hierarchy, and devs just follow the fads because this is what gets them jobs.
Adding FP to this formula could prove really wrong for those in charge of projects. Better to be conservative and use Java, C#, Python or even Node.js/JavaScript, as they allow churning out the same procedural code as ever, just in different clothes.
> * A cake is a hot cake that [...]
The difference between a functional programmer and an imperative programmer is an imperative programmer looks at that and says “yeah, great takedown of FP”, while a functional programmer says, “what’s with the unbounded recursion?”
But, more seriously, it's long been established that real programming benefits from use of both imperative and declarative (the latter including, but not limited to, functional) idioms, which is why mainstream imperative OO languages have for more than a decade been importing functional features at a mad clip, and why functional languages have historically either been impure (e.g., Lisp and ML and many of their descendants) or included embedded syntactic sugar that supports expressing imperative sequences using more conventionally imperative idioms (e.g., Haskell do-notation).
The difference is that embedding functional idioms in imperative languages often requires warnings about what you can and cannot do safely to data without causing chaos, while imperative embeddings in functional code have no such problems.
At least in GCC though, there are a few optimizations included in the various -O flags that have no corresponding fine grained flag (usually because they affect optimization pass ordering or tuning parameters).
This quote chooses one of many FP syntaxes; it's cherry-picking. It uses "a = b where c = d", which is equivalent to "let c = d in a = b". Let will allow you to write things like:
let
  cake_ingredients = [butter, white sugar, brown sugar]
  batter = cream(ingredients=cake_ingredients,
                 dish=large_bowl,
                 condition=LIGHT_AND_FLUFFY)
  prepped_pans = pans_full_of(batter)
  oven = preheat(your_oven, 175 C)
  cake = bake(prepped_pans, 30 minutes)
in
  dessert_tonight = cooled(cake)
This isn't where FP and imperative are different. What's really different is that the let statement doesn't define execution order. That's not so relevant to this part of the mental modelling, though.
I think it's great that I can choose between "let ... in ..." or "... where ...". In real life, for a complex bit of language, I happen to often like putting the main point at the top (like a thesis statement), then progressively giving more details. Mix and match however's clear.
Only relatively recently have programmers embraced its functional aspects; prior to that it was mostly used as a procedural language.
Then people started to use the functional aspects of it to "shoehorn" it into allowing a quasi-OOP style of programming, and this form has been baked (in no small part) into the latest versions of ECMAScript.
But people following this path, coupled with (I believe) using jQuery, NodeJS, and other tools (and now React), have led most of them (raising hand here) to more fully embrace it as a functional language.
But here's the thing:
You can still use it as a procedural language - and an OOP language - and a functional language! All at the same time if you want - it doesn't care (much)! It's like this weird mishmash of a language, a Frankenstein's Monster coupled to Hardware's killer robot.
Yes - with today's Javascript you can still write a unicorn farting stars that follows your mouse on a webpage while playing a MIDI file - and do it all procedurally. In fact, there's tons of code examples out there still showing this method.
You can mix in simple class-like constructs using anonymous functions and other weirdness - or just use the latest supported ECMAScript OOP keywords and such - go for it!
Want to mix it up? Combine them both together - it don't care!
Oh - and why not pass a function in and return one back - or an entire class for that matter! It's crazy, it's weird, it's fun!
It's maddening!
And yes - it's a crazy quirk of history - a language that was created by a single programmer over the course of a weekend (or so the legend goes) at Netscape is seemingly taking over the world of software development.
Not to mention Web Assembly and all that weirdness.
I need to find an interview with that developer (Brendan Eich); I wonder what he thinks about his creation (which is greatly expanded over what he started with, granted) and its role in software today - for good or ill...
Michael Abrash (graphics programmer extraordinaire) said it best, and I'll paraphrase: the best optimizing compiler is between your ears. The right algorithm beats the pants off the most optimized wrong algorithm. Or, as I like to say, "there is nothing faster than nothing" - finding a way to avoid a computation is the ultimate optimization.
And managed memory is wonderful, almost all the time. That is, just until the GC decides to do a big disposal and compaction right in the middle of a time-sensitive loop, causing that thing that "always works" to break, unpredictably, due to a trigger based on memory pressure. Been there, done that. If it's a business report or an ETL, big deal. If it's a motor-control loop running equipment, your data or machinery is now trash.
For most of the programming world, and I count myself in this group, the highly abstracted stuff is great. Right up until the moment when something unexpected doesn't work; then it turns into a cargo-cult performance, because it's nearly all black box below. Turtles, all the way down.
There is value in understanding the whole stack, even today.
The real question is: why are people now taking a second look at functional programming? And the answer is Moore's law. Moore's law is coming to an end, and CPUs are not getting faster. Instead they are adding more and more cores. To take advantage of lots of cores you need concurrency. OOP is not very concurrency-friendly because objects have state, and to avoid corrupting state in a multi-threaded environment you need locks, and locks reduce concurrency. Functional programming doesn't have state, so you don't need locks, so you can get better concurrency.
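A sketch of what that buys you in Haskell, using the parallel package (scoreAll and score are made-up names):

import Control.Parallel.Strategies (parMap, rdeepseq)

-- no locks needed: each element is scored by a pure function over
-- immutable data, so the work spreads across cores safely
scoreAll :: [Int] -> [Int]
scoreAll = parMap rdeepseq score
  where score x = sum [(x * i) `mod` 7919 | i <- [1 .. 100000 :: Int]]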
My usual issue with the 'you can avoid the GC by not allocating' claim in any language is how much of the language is still usable. Which features of the language allocate under the hood? Can I use lambdas? pattern matching? list compression or whatever nice collection is available in the language?
Note that I do agree that even in very high performance/low latency applications, there will be components (even a majority of them) that will be able to afford GC without issues; but then it is important to be able to isolate the critical pieces from GC pauses (for example, can I dedicate one core to one thread and guarantee that GC will never touch that thread?)
I mean... it's not, though, is it? Some things happen synchronously, but this is not the same thing as being an imperative device. Almost every CPU out there is multi-core these days, and GPUs absolutely don't work in an imperative manner, despite what a GLSL script looks like.
If we had changed the mainstream programming model years ago, perhaps chip manufacturers would have had more freedom to break free of the imperative mindset, and we could have radically different architectures by now?
Can I cite you on this? Because I have only ever seen this explained in Programming 101, where Java is the language they teach.
I wonder where this sentiment comes from. I imagine it came from marketing.
I think this is just a feature of imperative languages over functional ones. Functional languages are excellent for many things, but not for this stuff.
> In computer science, imperative programming is a programming paradigm that uses statements that change a program's state.
All CPUs I know of are definitely imperative. My (limited) understanding of GPU instruction sets is that they are fairly similar, except that they use all SIMD instructions.
It's true that in many domains, people care much less about performance than they used to.
At the same time, other people care a lot more about performance. Programming is just big and diverse.
The end of single-core scaling is one big reason it's more important than ever.
Another reason is simply that a lot more people use computers now, and supporting them takes a lot of server resources. In the 90's there were maybe 10M or 100M people using a web site. Now there are 100M or 1B people using it.
I think there's (rightly) a resurgence in "performance culture" just because of these two things and others. CppCon is a good conference to watch on YouTube if you want to see what people who care about performance are thinking about.
----
If you're writing a web app, you might not think that much about performance, or to be fair it's not in your company's economics to encourage you to think about it.
But look at the hoops that browser engineers jump through to make that possible! They're using techniques that weren't deployed 10 years ago, let alone 20 or 30 years ago.
Somebody has to do all of that work. That's what I mean by computing being more diverse -- the "spread" is wider.
And why should s/he do so? Between the language and the programmer, which one is the tool? Should not the tool fit the human, and not the other way around?
FP fits the way some people think. It doesn't fit the way others think. And that's fine. It's not a defect that some people think that way, and it's not a defect that some people don't.
This particular solution used functional reactive programming, essentially a composition of signal/event processing functions/automatons.
Sorry, no offence, but I do not want to write Go at all. If I want to use such a language I will use Rust, with nicer features and better out-of-the-box performance (see the TechEmpower results), no GC and more safety (no memory corruption bugs or data races).
I am not sure if I am the one who paints himself into a corner.
This is precisely the reason why pure FP prioritizes referential transparency. Even with perfectly encapsulated objects, once there is enough complexity, other objects will depend on that information, and because that information mutates and changes over time, this is bound to cause some errors.
Compilers can't check program correctness because of the halting problem, so FP aims to give the programmer some patterns + laws to help better reason across this "higher" dimension of moving parts.
Predicting how the program will be executed, even in a language such as C99 or C11, requires seeing through several layers of abstraction.
What most programmers using these languages are concerned about is memory layout, as that is the primary bottleneck these days. The same is true for developers of FP languages. Most of these languages I've seen have facilities for unboxing types and working with arrays as you do. It's a bit harder to squeeze the Haskell RTS onto a constrained platform, which is where I'd either simply write in C... or better, compile a subset of Haskell without the RTS to a C program.
What I find neat though is that persistent structures, memoization, laziness, and referential transparency gave us a lot of expressive power while giving us a lot of performance out of the gate. In an analogous way to how modern CPU cores execute instructions speculatively while maintaining the promise of sequential access from the outside; these structures combined with pure, lazy run time allow us to speculatively memoize and persist computations for more efficient computations. This lets me write algorithms that can search infinite spaces using immutable structures and get the optimal algorithm for the average case since the data structures and lazy evaluation amortize the cost for me.
There's a good power-to-weight ratio there that, to me, we're only beginning to scratch the surface of.
Note the upload date:
If Rust ever approaches Go's ease of use/learning curve/etc without losing its performance or incurring other costs, I'll happily make the switch for my productivity-sensitive applications as well.
Edit: I forgot to also mention: weak typing was an awful idea.
For controlling what the CPU and RAM are doing? Yes. The graphics shader, on the other hand, is a pipeline architecture with extremely tight constraints on side effects. The fact that shader languages are procedural seems to me more an accident of history or association than optimal utility, and the most common error I see new shader developers make is thinking that C-style syntax implies C-style behaviors (like static variables or a way to have a global accumulator) that just aren't there.
The way the C-style semantics interface to the behavior of the shader (such as shader output generated by mutating specifically-named variables) seems very hacky, and smells like abstraction mismatch.
So, yeah if you're working in a niche domain where raw performance is the dominant concern, then you should absolutely use a language that optimizes for that. However, in a general case using FP language will work just fine.
> Compiler writers let C programmers pretend that they are writing code that is “close to the metal” but must then generate machine code that has very different behavior if they want C programmers to keep believing that they are using a fast language
The reasons for the imperative style being dominant are largely historical. Back in the day we had single core machines with limited memory and very slow drives. Imperative style and mutability makes a lot of sense in this scenario. Today, the problem of squeezing out every last bit of performance from a single core is not the most interesting one. And we're naturally seeing more and more FP used in the wild because it's a better fit for modern problems.
And yet Python has one of the richest and most widely used scientific computing stacks. If writing performant code in a friendly language is all that important, then Julia stands as a more reasonable alternative than does some functional language.
Here’s an example of people finding functional programming unnatural, maybe you can leverage your experience to explain why he is wrong:
Functional Programming Is Not Popular Because It Is Weird https://probablydance.com/2016/02/27/functional-programming-...
Just this morning, I had to resort to Stack Overflow for using an Either... a concept I thought I well understood. Turns out, the way I've done it in Scala might not be the norm.
Many programmers coming to this library are coming from JavaScript, so expecting them to understand some (or many) of these things might not be the right approach. The author has gone to some great lengths to blog about the foundations of FP... so this might help a bit. I just wish the docs were fleshed out with more examples. (The repo is open source... I could put up or shut up here.)
Well... it's complicated. A CPU is imperative. An ALU is functional. A GPU is vectorized functional.
> Can I use lambdas?
Not ones that capture anything from the environment. I'm not sure about ones that don't, but I imagine you can use them.
> pattern matching?
Yes.
> list compression
List comprehensions, you mean? They don't exist in OCaml, but if they did, they would have to allocate a result list. So no.
> for example, can I dedicate one core to one thread and guarantee that GC will never touch that thread?
I don't think so, maybe someone else can chime in. But more importantly, this is a one-in-a-million use case that is a far cry from "functional programming is always bloated and slow and unpredictable". GC is well-understood, advanced, and comfortably fast enough for most applications. And for other cases, if you really need that hot innermost part, write it in C and call that from OCaml if you're paranoid about it.
Does it matter for data analysis and most web apps, infra as code, etc.? Which data scientists do you know who fetishize how Python lays out memory?
OOP is a hot mess. Yes, I know, you’re all very well versed in how to use it “right”, but the concept enables a mess. It’s the C of coding paradigms when it would be great to have a paradigm that pushes towards Rust, and reduces the chance for hot messes from the start.
Most of this work is organizing run of the mill business information. Why it works from a math perspective is more universally applicable and interesting anyway.
The "function" in "functional programming" is a reference to mathematical functions. Mathematical functions do not have side effects, and consequently are referentially transparent (the result doesn't depend on the evaluation order or on how many times the function is evaluated). Code with side effects is not a function in the mathematical sense, it's a procedure. The defining characteristic of functional programming is the absence of side effects. That isn't something you can just tack on to an imperative (or "multi-paradigm") language. No matter how many cosmetic features you borrow from functional languages, like closures and pattern-matching and list comprehensions, you still have the side-effects inherent in the support for imperative code, which means your program is not referentially transparent.
Haskell manages to apply the functional paradigm to real-world programs by essentially dividing itself into two languages. One has no formal syntax and is defined entirely by data structures (IO actions). This IO language is an imperative language with mutable variables (IORefs) and various other side-effects. The formal Haskell syntax concerns itself only with pure functional code and has no side effects. IO actions are composed by applying a pure function to the result of one IO action to compute the next action. Consequently, most of a Haskell program consists of pure code, and side-effects are clearly delineated and encapsulated inside IO data types at the interface between the Haskell code and the outside world.
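A tiny Haskell sketch of that split:

import Data.IORef

-- pure: a mathematical function, referentially transparent
double :: Int -> Int
double = (* 2)

-- impure steps are IO values; pure code decides how they chain
main :: IO ()
main = do
  ref <- newIORef (21 :: Int)   -- the mutable variable lives in the IO layer
  modifyIORef ref double        -- the mutation applies a pure function
  readIORef ref >>= print       -- prints 42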
OO and FP are just higher-level ways of organizing source code that gets reduced to a linear sequence of instructions for any given hardware execution unit.
This is not true.
Many algorithms are intrinsically imperative (e.g., quicksort). You can represent it using some monads in Haskell to hide this, but in the end your code is still imperative; and if you want to parallelize it, you still have to think about synchronization.
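For a flavor of what hiding the imperative core behind a monad looks like, here is a sketch (my own, using a Lomuto partition) of an in-place quicksort in Haskell's ST monad. The mutation is real, but runST seals it off, so callers see a pure function:

    import Control.Monad (when)
    import Control.Monad.ST (ST, runST)
    import Data.Array.ST (STUArray, newListArray, readArray, writeArray, getElems)

    -- In-place quicksort on a mutable array; qsort itself is pure.
    qsort :: [Int] -> [Int]
    qsort xs = runST $ do
      let n = length xs
      arr <- newListArray (0, n - 1) xs :: ST s (STUArray s Int Int)
      let swap i j = do
            a <- readArray arr i
            b <- readArray arr j
            writeArray arr i b
            writeArray arr j a
          -- Lomuto partition: last element is the pivot.
          partition lo hi = do
            p <- readArray arr hi
            let go i j
                  | j >= hi = swap i hi >> pure i
                  | otherwise = do
                      x <- readArray arr j
                      if x < p
                        then swap i j >> go (i + 1) (j + 1)
                        else go i (j + 1)
            go lo lo
          sort lo hi = when (lo < hi) $ do
            m <- partition lo hi
            sort lo (m - 1)
            sort (m + 1) hi
      sort 0 (n - 1)
      getElems arr

And the parent's point stands: the code inside the do-block is imperative in everything but name, and parallelizing it would still require thinking about synchronization.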
Just to dive into Ada/SPARK: https://docs.adacore.com/spark2014-docs/html/ug/en/source/co...
Even the most exotic architecture you can think of is imperative (systolic arrays, or transport-triggered architecture, or... whatever): there are instructions, and they are imperative.
I can vaguely remember some recent iterative AI of some kind that had to produce a functioning circuit to do XYZ, and the final netlist it produced for the FPGA was so full of latches, exploiting weird timing skew in the FPGA fabric and the like, that no engineer could make sense of it, but the circuit worked... I suppose when there's that level of non-imperative design, you can truly call it both declarative, and magic.
[1] Certain primitive operations can be reordered but that depends on the compiler having access to the entire program. A call to a shared library function is an effective optimization barrier for any access to non-local data due to potential side effects.
[2] For the purpose of this example I'm assuming the unused `oven` variable was meant to be passed in to the `bake` function.
maybe the physics is imperative too lol
Did they? Because I keep seeing people around me who want to get into FPGA programming because they aren't getting enough juice from their computers. Sure, if you're making $businessapp you don't care, but there is a large segment of people who really really really really want things to go faster, with no limits - content creation, games, finance... hell, I'd sell a rib for a compiler that goes a few times faster :-)
But I'll contend that it's much more productive to basically wrap low-level functionality as modules that higher-level languages could compose. One could then optimize individual modules.
The mechanism of composition should lay it out in memory as desired for best efficiency, hence the probable need for a layout step, presuming precompiled modules (it could use 'ld', for example). I'm not sure how you would optimize memory layout for black boxes, but perhaps through some standard interface.
Most people here are doing this already without knowing it, if you look into the dependencies of your higher level programming tools and kit.
At the end of the day, OOP is a code-organization technique. FP is too. They are both useful, and we still have complexity. Whether one needs actor models, as a poster above does, depends on the scale, I suppose: is one building a distributed healthcare application, or trying to get audio/video not to glitch?
As someone who writes pure FP for a living at a rather large and well known org, these threads physically hurt me. They're consistently full of bad takes from people who don't like FP or haven't written a lick of it. Consequently, you get judgements chock full of misconceptions about what FP actually is, and the pros and cons outsiders believe about FP are completely different from those its practitioners see. It's always some whinge about FP not mapping "to the metal", which is comical given, say, Rust's derivation from quite functional stock.
My personal belief? We just don't teach it. Unis these days start with Python, so a lot of students' first exposure to programming is a multi-paradigm language that can't really support the higher forms of FP techniques. Sure, there may be a course that covers Haskell or a Lisp, but the majority of the teaching is conducted in C, C++, Java or Python. Grads come out with a 4-year head start in a non-FP paradigm; why would orgs use languages and techniques that they're going to have to train new grads in from scratch?
And training people in FP is bloody time consuming. I've recorded up to 5 hours of lecture content for devs internally teaching functional Scala, which took quadruple the time to write and revise, plus the many hours in 1-on-1 contact teaching Scala and Haskell. Not a lot of people have dealt with these concepts before, and you really have to start from scratch.
And how many of the top 10 languages are running in a virtual machine? Which could be literally doing anything under the hood with your allocations, caching, etc?!
There is nothing wrong with saying, "I don't see this working out in my domain due to these concerns." It's just silly to say, "I never see it taking off because it can't work in my domain."
I think this video nails it pretty dead on. My team works almost exclusively in C# these days for reasons mostly beyond our control. The team generally likes the language quite a bit (it's one of my personal favorites). But when I find myself asking for new features, they come in two buckets: I'd like features that help certain members of my team write less side-effect-heavy code, and I'd like immutability by default with opt-in mutability. Basically, I'd like more functional-like features. But hey, that's what I see from my niche.
VMs provide an environment, just like any other. Javascript articles are chock full of information on how to not abuse the GC by using closures in the wrong place. C#'s memory allocation is very well defined, Java has a million tuning parameters for their GC, Go is famous for providing goroutines with very well defined characteristics.
Heck, people who know C# can look at C# code and tell you almost exactly what the VM is going to do with it. And nowadays C# allows direct control over memory.
People writing high performance code on Node know how the runtime behaves, they know what types of loads it is best for, none of that is a mystery.
Sure, some details like "when does this code get JITed vs interpreted" are left up to the implementation, but it isn't like these things are secret. I think every major VM out there nowadays is open source, and changes to caching behavior are blogged about with the performance implications described in detail.
The fact is, all programming paradigms are merely ways to limit our code to a subset of what the machine can do, thereby making reasoning about the code easier.
They are purely mental tools, but almost all of them have a performance cost. They are Turing-complete tools, of course: any problem is theoretically solvable with any of the major paradigms, but not every paradigm is appropriate for every problem.
So, you know, pick the paradigm that makes it easiest to reason about the problem space, given acceptable performance trade offs.
I don't agree the lack of proactive education is the reason FP isn't the norm. Your conclusion doesn't take into account the counterfactuals:
- The C language took off in popularity despite BASIC/Pascal being the languages more often taught in schools
- languages like PHP/Javascript/Python/Java all became popular even though prestigious schools like MIT were teaching Scheme/Lisp (before switching to Python in 2009).
You don't need school curricula to evangelize programming paradigms because history shows they weren't necessarily the trendsetters anyway.
On a related note, consider that programmers are using Git DVCS even though universities don't have formal classes on Git or distributed version control. How could Git have spread to near-universal adoption if universities weren't teaching it? Indeed, new college grads often lament that schools didn't teach them real-world coding practices such as git commands.
Why does Functional Programming in particular need to be taught in schools for it to become a norm but all the other various programming topics do not?
A lot of what became OO language features arose because people were already using the style in non-OO languages. C is a great example: you can use a ton of C++-like features, but you end up writing a lot of boilerplate to hook up function pointers and the like.
Going back further, we see features of C being implemented by assembly programmers as macro assembly. So the pattern the author puts forward has basically held true across multiple shifts in programming paradigms.
Which leaves me with one point of contention with the presenter: OO dominance is not happenstance. And neither was the fact that lots of people were writing OO-style C. There is something about OOP that helped people think about their code more easily. That's why they adopted the features. Maybe not everything about it was great, and we're learning that. But it genuinely helped people, just as English-like languages helped people over ASM.
The point is that to be mainstream, it's enough to be used by one major app store.
How many apps care about FPGA or what compilers do, given that they don't even know what the underlying OS does or when and why memory is allocated?
I work in finance; even there, performance matters only for niche projects. The bulk of the job is replacing Excel macros with something slightly less 90s-style.
As a side note: I hate hardware, but I love graph algorithms, which is why I love register coloring so much :)
Because I think it is harder for people who have programmed with other paradigms - following an inverse law, most things should get easier to learn with experience, not harder. It's foreign, it's weird, it's back to front. It doesn't have an immediately obvious benefit to what people are used to, and the benefits it has come at scale and up against the wall of complexity (in my opinion). It's hard to adopt upfront. At the small scale it's often painful to use. The syntax is weird. It's aggressively polymorphic, reasoning in abstractions rather than concretions. I could go on (and yet I still adore it).
The only reason FP has been successful as it is, is because its evangelists are incredibly vocal, to the point of being fucking annoying sometimes. It's had to be forced down people's throats at times, and frankly, there's no better place to force a paradigm down someone's throats than at a university, where non-compliance comes at academic penalty, and when the mind is most impressionable.
I was quibbling with the point that FP languages can't become successful because they often don't give low-level control, even though nearly every language on that top-ten list suffers from the same perf-oriented deficiency.
To write performant code in any of those top ten languages you have to understand the characteristics and nuances of the underlying tech stack.
And honestly people who don't write performant Java because they didn't bother to learn about the GC wouldn't have magically done otherwise writing C++. Trust me, that language does not intrinsically cause you to write performant code. It does intrinsically cause you to leak memory though.
But the bigger point is that in many domains performance is second to many other concerns. Like you said, pick the language that matches your needs.
So I think we pretty much agree.
FP can't even sell itself well in school as a language where useful things can be done, when the student is stuck in a deep valley of ___morphisms and other alien concepts with claims of aesthetic elegance as the only motivation. I recall the math nerds loved it as relief over C that the rest of the department used, but with me being rather mediocre in math, the weirdness and cult-like vibe from the prof and TA left a really bad taste. The impression was so deep that I have no issues recalling this class a decade later. I've never touched any FP since, unless you count borrowing clever lambda snippets.
Functional programming is weird in the same way Japanese is weird to an Anglophone. A person who learned Japanese as their mother tongue will find English equally weird. The comments in the link you posted already address the points the author tries to make, which all boil down to FP being different from what they're used to.
At some point in history, people stopped worrying about not understanding compilers
This part is misleading too -- I would say there is a renaissance in compiler technology now. For the first 10 years of my career I heard little about compilers, but in the last 10, JS Engines like v8 and Spidermonkey, and AOT compiler tech like LLVM and MLIR have changed that.
The overall historical trend is that computing is getting used a lot more. So you have growth on both ends: more people using high level languages, and more people caring about performance.
It's not either/or -- that's a misleading way of thinking about it. The trend is more "spread", not everyone going "up" the stack. There will always be more people up the stack because lower layers inherently provide leverage, but that doesn't mean the lower layers don't exist or aren't relevant.
And lots of people migrate "down" the stack during their careers -- generally people who are trying to build something novel "up stack" and can't do it with what exists "down there".
The functionally written recipe from https://probablydance.com/2016/02/27/functional-programming-... may be less helpful if I need to know exactly what steps to take to bake a cake, but it will actually be much more helpful if I want to know what a baked cake is. It isn't quite a fair example because it leverages how humans already know what a baked cake is, what a preheated oven is, etc and the clunkiness of the FP-style recipe is likely more due to that than anything fundamental to FP.
Let's try a different example that better maps to real world application logic. The task is to build a scootybooty.
Imperatively, a scootybooty program is:
- Acquire four wheels and two axles.
- Chop down a tree.
- Plane wood from tree into a curved flat shape.
- Attach axles to the convex side of the planed wood shape.
- Attach wheels to the axles.
Declaratively it is:
- A scootybooty is a planed plank of wood with two trucks.
- A planed plank of wood is a flat board.
- A truck is an axle with two wheels.
Now imagine your boss asks you wtf this scootybooty thing is and what it can do. Which program more quickly allows you to answer these questions? My favorite thing about the FP/declarative paradigm is that the mental model first-classes the abstract thing you are implementing above how you implement it. Imperative style encourages you to think about the steps it takes to do something more so than the thing itself, which IMO can lead to cart-before-horse type mistakes in planning. Declarative programming: "the forest is made of many trees"; imperative programming: "tree, tree, tree, tree, tree, tree..."
Also worth noting that the idea is to use FP around stuff like the actual game logic, and then handle rendering details imperatively.
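If it helps, the declarative version above translates almost word for word into type definitions. A minimal sketch in Haskell (all types hypothetical, just to show the shape):

    -- "A truck is an axle with two wheels."
    -- "A scootybooty is a planed plank of wood with two trucks."
    data Wheel = Wheel
    data Axle  = Axle
    data Board = Board   -- a planed plank of wood

    data Truck       = Truck Axle (Wheel, Wheel)
    data Scootybooty = Scootybooty Board (Truck, Truck)

    -- Building one is just composing the definitions:
    scootybooty :: Scootybooty
    scootybooty = Scootybooty Board (truck, truck)
      where truck = Truck Axle (Wheel, Wheel)

Answering the boss's "what is it?" question is now a matter of reading the types, not replaying the build steps.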
> I've never touched any FP since, unless you count borrowing clever lambda snippets.
I'd urge you to give it another shot if you have spare time. Even in spite of all the dogshit things associated with it, it's a paradigm I've bet my career on.
If everything is an object, then you can use all your tooling that works with objects, which is everything. If everything is pure, you get easy parallelism. If everything is an actor, you get easy distributability. If everything is a monad or function, you get easy compositionality. The list goes on. Smalltalk, Erlang and Haskell are languages with very dedicated fan bases, which I theorise is because they went all in on their chosen paradigm.
I still think there's something missing in your theory of cause-&-effect. A math topic like quaternions is hard and yet programmers in domains like 3d graphics and games have embraced it more than FP.
I also think Deep Learning / Machine Learning / Artificial Intelligence is even more difficult than Functional Programming and it seems like Deep Learning (e.g. Tensorflow, Pytorch, etc) will spread throughout the computer industry much more than FP. Just because the topic is hard can't be the defining reason.
>The only reason FP has been successful as it is, is because its evangelists are incredibly vocal,
But why is FP in particular only successful because of loud evangelists? Why can't FP's benefits be obvious so that it doesn't require evangelists? Hypothetical example:
- Company X's software using FP techniques has a 10x smaller code base, 10x fewer bugs, and 10x faster feature development than Company Y's. Ergo, this is why Company X is worth $10 billion while Company Y is only worth $1 billion or bankrupt.
If you think the above would be an unrealistic and therefore unfair comparison, keep in mind the above productivity improvement happened with the industry transition from assembly language to C Language. (A well-known example being 1980s WordPerfect being written in pure assembly language while MS Word was written in C Language. MS Word was iterating faster. WordPerfect eventually saw how assembly was holding them back and finally migrated to C but it was too late.) Yes, there's still some assembly language programming but it's niche and overshadowed in use by higher-level languages like C/C++.
If Functional Programming isn't demonstrating a similar real world massive productivity improvement to Imperative Programming, why is that? I don't think it's college classes. (Again, see all the non-PhD enthusiasts jumping on the free FastAI classes and brushing up on Linear Algebra to teach themselves deep learning.)
Realistically, Java (or something very much like it) is the apex of OOP, at least as most people will experience it. The Ur-example of OOP might be a beautiful, internally consistent vision of mathematical purity, but most of us will never experience it.
Similarly, Agile-fall is the most common form of Agile that people will experience, which is why we always fall into "no true Scotsman" territory when ~~arguing about~~ discussing it.
There is, I think, a disconnect between people who are primarily concerned with the beauty of software - simple models, elegant algorithms, and so on - and the people who are primarily concerned with getting their feature branch merged to master so their manager will let them go to their kid's soccer game.
The beauty of software is important, and there's value in trying to bring the useful, but more esoteric concepts of CS into the mainstream, but at the same time we need to be aware of the ground truth of software development.
I'm not sure what you mean by that, because compilers reorder instructions to improve performance all the time (and CPUs do it dynamically too).
While C is less constrained, it's structurally very similar to Pascal; they don't differ in paradigm and are in the same syntax family.
There is a contest organized by the International Conference on Functional Programming: https://en.wikipedia.org/wiki/ICFP_Programming_Contest
It was more or less designed to show the superiority of functional programming languages. Yet in that contest C++ has done better than OCaml or Haskell...
The FP crowd seems to be more active doing advocacy than writing code. Yes, we know, there is that one trading company using OCaml. It's such a niche language that they have to pretty much maintain the toolchain and standard library themselves. Meanwhile, plenty of more successful companies use C++, C# or Java with no problem.
If you want to convince someone of the superiority of FP, write a real killer application. A new browser to compete with Chrome, a video game that can dethrone Skyrim or the Witcher 3. Maybe a DBMS that's even better than PostgreSQL? Basically: show, don't talk.
By this logic Java Streams are the apex of functional programming and anyone who uses them is fully qualified to dismiss the paradigm, even if they don't know anything about proper functional languages.
It probably wasn't clear, but the reason I didn't use any dependencies is because I was avoiding JS's built in inheritance mechanism, which I don't think is very compatible with FP. You can build objects out of closures and build your own object oriented mechanisms if you want. Unfortunately you run into the limitations of the implementations I mentioned.
I always hesitate to link to my own "fun" code, but just so you understand that I was not looking for code quality in this: https://gitlab.com/mikekchar/testy But it shows how I was using an FP style in JS to implement an alternative OO system. I really had fun writing this code and would have continued if there weren't problems with using closures in JS.
Edit: If you look at this, it's probably best to start here: https://gitlab.com/mikekchar/testy/blob/master/design.md
I really should link that in the README...
In its ideal form it's about taming the metal like a trick pony.
I'm nowhere near that level, being a "Fortran in any language" sorta guy lol, but when I see well-built stuff, I take notes. Matryoshka dolls, lol
Kay et al were swimming in the same water as Hewitt etc, conceptually. He said that the key takeaway from OOP was objects passing messages (actor model), not so much the inheritance story (the composition)
but yes, they all criss-cross there
- Modeling all communication as synchronous message-passing. Some communication (such as evaluating mathematical functions) is naturally modeled as synchronous procedure calls, while communication which is naturally modeled as message-passing should be asynchronous by default (to address unpredictable latency, partial failure, etc.).
- Emphasizing implementation inheritance as the primary means of code reuse. This is now generally acknowledged to be a mistake, so I won't elaborate.
- Deferring all method resolution to runtime. This makes the amazing introspective and dynamic capabilities of Smalltalk possible, but it also makes it impossible to statically verify programs for type-correctness.
- Relying on mutable local state rather than explicit, externalized state. This is controversial, and it's a defect of the Actor model as well (yes, passing new parameters into a tail-recursive message receive loop is equivalent to mutating local state). The partisans of OOP and the Actor model believe this to be a virtue, enabling robust emergent collective behavior from small autonomous software agents, but it makes predicting large-scale behavior difficult and debugging nearly impossible.
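For what it's worth, the equivalence in that last point can be sketched in a few lines of Haskell (getLine standing in for receiving a message from a mailbox; the message protocol is hypothetical):

    -- A tail-recursive "receive loop": the state is threaded as a
    -- parameter, which behaves exactly like mutating a local counter.
    loop :: Int -> IO ()
    loop count = do
      msg <- getLine
      case msg of
        "inc" -> loop (count + 1)        -- "mutation" via a new parameter
        "get" -> print count >> loop count
        _     -> pure ()                 -- any other message: stop

    main :: IO ()
    main = loop 0

No IORef in sight, yet the observable behavior is that of a mutable counter.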
The interface to the CPU is imperative. Each core (or thread for SMT) executes a sequence of instructions, one by one. Even with out-of-order and speculation, the instructions are executed as if they were executed one by one.
> and GPUs absolutely don't work in an imperative manner, despite what a GLSL script looks like.
They do. Each "core" of the GPU executes a sequence of instructions, one by one, but each instruction manipulates several separate copies of the state in parallel; the effect is like having several identical cores which operate in lockstep.
> If we had changed the mainstream programming model years ago, perhaps chip manufacturers would have had more freedom to break free of the imperative mindset, and we could have radically different architectures by now?
The cause and effect are in the opposite direction. The "imperative mindset" comes from the hardware. Even Lisp machines used imperative machine code (see https://en.wikipedia.org/wiki/Lisp_machine#Technical_overvie... for an example).
That is, in the traditional model of declarative programming, the semantics given are guaranteed, but the actual order of operations are not. So, in a sense, the CPU takes what could be construed as imperative code, but treats it as declarative rather than imperative.
> At some point in history, people stopped worrying about not understanding compilers
> This part is misleading too
Not in the least. Interpreting that to mean "all people stopped worrying" is deliberate misinterpretation.
Because there aren't immediate benefits. They only pop out at scale and with complexity, as I said.
> similar real world massive productivity improvement to Imperative Programming
Because there isn't. It's a reasonable benefit, but it's not transformative. I think it's there, enough to commit to FP completely, but the massive productivity improvement doesn't exist, or at least, only exists in specific cases, e.g. the WhatsApp + Erlang + 50 engineers parable (you could argue that this is due to the actor model and BEAM, rather than FP. An argument for a different day).
I feel like this hard + reasonable benefit isn't really efficient utilisation of people's time, especially when there's things like Deep Learning floating around. I think the immediate reaction to a lot of what FP evangelists claim is a shrug and a "I guess, but why bother?"
I put the blame for that squarely on the Haskell cultists. They've caused everyone to have the impression that functional programming needs to have esoteric syntax and lazy evaluation by default.
It's like how the Java cultists have ruined OOP.
https://www.manning.com/books/functional-programming-in-scal...
E.g. at Facebook, the PHP code I write is usually highly functional.
That's also a great way to make people hate it. An example is literature classes with mandatory reading and how they make students hate reading fiction.
I would also say that this might turn off more students from programming. We had functional programming in uni, where we learned Haskell. Maybe a handful of students liked it or were neutral about it, the vast majority seemed to have a negative view of it.
I think that FP is just more difficult to learn. Just look at essentially any entry level programming course and how people understand loops vs recursion.
While C for userland programs may need to conform to the operating system's libc and system call interface abstractions, on the other side of the syscall boundary is C code (ie. the kernel) that is indeed very "close to the metal".
What do you do?
But university computer science seems to be specialized from mathematics instead of generalized from engineering, so CS professors most of the time have no idea about real world problems. At least here in Germany, where the problem seems especially bad.
For FP I would reply Haskell/PureScript, OCaml, and Scheme.
[1] https://news.ycombinator.com/item?id=21238802
[2] https://medium.com/@brianwill/object-oriented-programming-a-...
It wasn't until I used Less on a project that I encountered a generator that did what I expected it to do in almost every situation. It output essentially what I would have written by hand, with a lot less effort and better consistency.
I expect people who adopted C felt roughly the same thing.
People presenting on OOAD occasionally warn about impedance mismatches between the way the code works and the way you describe it[0]. If what it 'does' and how you accomplish it get out of sync, then there's a lot of friction around enhancements and bug fixes.
It makes me wonder if this is impossible in FP, or just takes the same degree of care that it does in OO.
[0] a number of cow-orkers have 'solved' this problem by always speaking in terms of the code. Nobody has the slightest clue what any of them are talking about 50% of the time.
Spark is the quintessential Google-scale FP project - it was even born out of the MapReduce paper by Google!
And there's plenty of other large-scale projects that are arguably in an FP style specifically to deal with the problems associated with scaling them: the Agda/Isabelle proof checkers, the seL4 kernel, the Chisel/FIRRTL project, Erlang/OTP, the Facebook anti-spam system (built on Haxl), Jane Street's massive investment into OCaml, Twitter's investment into Scala.
Not all scale problems are distributed problems. Some distributed problems are tackled by FP, and some aren't. Ultimately, these large-scale projects pop up at rates similar to the usage of the languages themselves. It's intellectually dishonest to say that FP can't be used to tackle large-scale problems, and the problems that occur at scale, because it has repeatedly been validated that it can.
1) I wouldn't make it the entry level course. It's clearly a paradigm that's used by a minority of people, so it doesn't make sense to start educating students with it.
2) I'd mandate that all students take it, maybe in their 3rd year. We're going to mandate it because there are tangible benefits (which we've assumed for the sake of this argument). They're going to find it harder and more confusing because it's different to what they're used to. A lot of them may not like it and won't see immediate benefits. Some may even come to dislike it. Frankly, I don't care; some will pick it up and learn about it further. And when the students that disliked it inevitably run into it in the future, they'll be sufficiently prepared to deal with it.
We're back to square 1: forcing it down students' throats. If you still think that we shouldn't be forcing students to learn FP in schools, I think you have a problem not with FP but with structured curriculums.
Over the last 10 years or so, we have come to the painful conclusion that mixing data and code is a very, very bad idea. It can't be done safely. We are putting functionality into processors and operating systems to expressly disallow this behavior.
If that conclusion is true, then Lisp is broken by design. There is no fixing it. It likes conflating the two, which means it can't be trusted.
The code certainly "looks imperative" but it's still a declarative program --- the semantics are rather different from what a typical "imperative programmer" would expect.
What about a low-barrier situation with scale and complexity?
An imaginary situation: let's say you start building your system from a large open-source project that needs a lot of customization.
Will FP be of big enough benefit then?
I'm curious about the answer, but for a sec, let's assume it is:
Then could it be a uni project? Dig into the belly of two beast projects, one FP, one OOP, and see the difference in what you could achieve.
Could something like that work?
https://tonyg.github.io/squeak-actors/
http://scholarworks.sjsu.edu/cgi/viewcontent.cgi?article=123...
http://scg.unibe.ch/archive/papers/Scha03aTraits.pdf
http://web.media.mit.edu/~lieber/Lieberary/OOP/Delegation/De...
http://bracha.org/pluggableTypesPosition.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.134...
Even if none of the work I mentioned above existed, this sort of criticism is amateurish at best. Real engineering requires considering trade-offs in real-life contexts. For example, compile-time checks aren't going to help you figure out that some vendor supplies incorrect data via a web service. An early prototype, however, can do exactly that.
- Have you ever heard about how Walmart handles Black Fridays?
- Do you even know what's behind Apple's payment system?
- You ever used Pandoc, Couchbase, Grammarly, CircleCI, Clubhouse.io, Pandora, Soundcloud, Spotify?
- Have you ever asked a question - what is an app like WhatsApp that was sold for $19 Billion runs on?
- or how Facebook fights spam, Cisco does malware detection, or AT&T deals with abuse complaints?
- How Clojure is used at NASA or how Microsoft uses Haskell?
Frankly, I don't even know what's there to debate about. Functional programming is already here, it's been used in the industry for quite a while, and its usage is growing at quite a steady pace. Almost every single programming language today has a certain amount of FP idioms, either built-in or via libraries. So yeah, while you're sitting there contemplating whether FP is useful or not, people have been building hundreds of awesome products.
That was true 10 years ago. Now they're just tight constraints, but not extremely so: there are append buffers, random-access writeable resources, group shared memory, etc.
> The way the C-style semantics interface to the behavior of the shader seems very hacky
I agree about GLSL, but HLSL and CUDA are better in that regard, IMO.
Julie Moronuki, who never had any exposure to programming at all and has a degree in linguistics, decided to learn Haskell as her first programming language, just as an experiment. Not only did she manage to learn Haskell and become an expert, she co-authored one of the best-selling Haskell books. I remember her saying that after Haskell, other (more traditional) languages looked extremely confusing and weird to her.
Every large (or even small) company has people writing stuff in Perl, Bash, Haskell, Ruby, Rust, VBA, Scala, Lua or what not. I've been that guy, too.
More often than not it is a distraction more than anything, and it ultimately ends up being rewritten in C++, Java or Python. I think there are some niches where it helps; OCaml has had some success with static analysis and proof assistants, or even with code generation projects like FFTW.
But no, it's not some people, it's not most people, it's 99%+ of all developers that stopped worrying about compilers. There will always be a use case for it, but when we're talking about < 1% of all developers we're really spending time talking about a niche.
There will always be niches in any industry, but we shouldn't design our industry/profession around niche cases.
No, you can't. Because like the other commenter noted: "This is utter rubbish." It only looks easy to understand on the surface, but quickly becomes a mess. "Spaghetti code" and "lasagna code" are terms invented in the OOP realm. That being said, some advanced FP concepts can be pretty daunting to grasp as well.
Saying that human brains are OOP or FP oriented is equivalent to saying that human brains wired to recognize patterns in music but not color, or something like that.
Look, I've seen both sides and I know this for sure (this isn't a mere opinion, this is a certain fact): FP allows you to build and maintain products using smaller teams.
You don't have to trust my word, do your research, google "companies using Clojure" (or Haskell, OCaml, Erlang, etc). You will see that either those companies are not too big, or the FP teams in large companies not very large. Skeptics often cite this fact, claiming it to be the proof that FP codebases don't scale to large teams. The truth is - you don't need a big team to build a successful product with FP language. And the number of startups using FP langs is steadily growing.
Firefox is written in Rust
> a video game that can dethrone Skyrim or the Witcher 3.
afaik latest "God Of War" is written in Rust
> Maybe a DBMS that's even better than PostgreSQL?
Datomic - Clojure, Mnesia, Riak, CouchDB - Erlang
Yeah, I know that Rust is not an FP lang, it's imperative, but it does adhere to FP principles.
"it's not that useful"? Heh.
you: Here's a talk on making real world commercial games with Clojure
video: dozens of game jam games have been made
I didn’t say it is impossible to do X with FP - I said it is not necessary to do X in FP. You can convince yourself of that by looking for larger-scale non-FP counter-examples to the ones you've cherry-picked.
Every single large scale problem is a distributed problem simply because human societies are multi-agent distributed systems and programming languages are human-computer interfaces.
The issues at scale are systemic and arise from the design trade-offs made when your system's requirements bump against the limits of computation. No language/paradigm can work around those.
The best a language can do is abstract-away the complexities behind a problem - solve it once (in the language/paradigm's preferred way) and give the human an interface/concept to work with.
I mean, fucking todo apps are making "good money" in 2019; it does not mean that they are good examples. These kinds of presentations should improve on the state of the art, not content themselves with something that was already possible a few decades ago. No one gets into game dev to make money; the point is to make better things than what exists - be it gameplay-wise, story-wise, graphics-wise...
It's more like: state better matches our most common way of modeling our sense-data. It's easier for us to grasp, but that doesn't mean it's the approach that will produce the most desirable results.
If you take the example of mass in physics, most of the time it's perfectly fine to treat it as a first-class attribute of an object. But that's not how the Higgs mechanism approaches the notion.
Have a go at it in Godbolt: https://godbolt.org/
OCaml also integrates with perf on Linux,
https://ocaml.org/learn/tutorials/performance_and_profiling....
Some performance tips from an old partially archived page.
https://hackerfall.com/story/writing-performance-sensitive-o...
And if you are feeling fancy, you can do some pointer-style programming.
Learned to code in the mid-80's, Basic and Z80 FTW.
Followed up by plenty of Assembly (Amiga/PC), and systems level stuff using Turbo Basic, Turbo Pascal, C++ (MS-DOS), TP and C++ (Windows), C++ (UNIX), and many other stuff.
I was lucky enough that my early-90s university exposed us to Prolog, Lisp, Oberon (and its descendants), Caml Light, Standard ML, Miranda.
Additionally, the university library allowed me to dive into a parallel universe of programming ideas that seldom reach the mainstream.
Which was great, not only did I learn that it was possible to marry systems programming with GC enabled languages, it was also possible to be quite productive with FP languages.
Unfortunately this seems to be yet another area where others only believe in its possibilities after discovering them for themselves.
Not really.
"Confessions Of A Used Programming Language Salesman, Getting the Masses Hooked on Haskell"
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72....
You really want your programming language to have innate constructs for directly controlling the baggage the x86 CPU (or any other for that matter) brings with it? I don't.
You also want kernel code to be performant (ie. compiled by a decently optimizing compiler, of which there are many for C), allow you to disable garbage collection or be totally free of it so you can explicitly manage separate pools of memory. C ticks all those boxes which is why its still the most dominant and widespread language for OS kernel development nearly half a century since UNIX was first rewritten in C, and will be for years to come, like it or loathe it, and despite there being much more modern contenders (eg. Rust) which don't have the momentum yet.
And even then pretty much every project out there uses "-Ofast" instead of whatever "-Ofast" enables without caring about what it does or how its behavior will change across compilers.
Beyond that, it doesn't really invalidate anything I wrote and is only tangentially relevant to my comment (where I didn't even mention C as a low-level language; I only said that you can have an idea of what sort of instructions a C compiler will generate for a piece of code if you study its output for a while), so why did you post it without any comment of your own?
- vector execution units
- out of order execution
- delay slots
- L1 and L2 explicit cache access
- MMU access
- register windows
- gpgpu
All of that is accessed via assembly opcodes, not C-specific language features.
And if you are going to refer to language extensions to ISO C for writing inline assembly, or compiler intrinsics: well, the first OS written only in a high-level language with compiler intrinsics was done 10 years before C existed and is still being sold by Unisys.
The only thing that C has going for it are religious embedded devs that won't touch anything else other than C89 (yep not even C99), or FOSS UNIX clones.
And yeah, thanks to those folks, the Linux Kernel Security summit will have plenty of material for future conferences.
I clearly mentioned that assembler was required for much of this, where components aren't programmed by MMIO. This would be the same regardless of whether you used Rust, Go, or FORTRAN77 to write your kernel.
I'm not even going to bother with your security comments, we all agree by now. There are plenty of people using C99 in embedded at least in userspace, even the Linux kernel uses some C99 extensions (eg. --std=gnu89 with gcc), and those FOSS UNIX clones have taken over the planet at this point in terms of smartphone adoption, data center servers etc. Despite the obvious flaws, this is still a better world to live in than Microsoft's proposed monoculture of the 1990's.
The phones I know as having taken over the world run on C++, Objective-C, Swift, Java, with very little C still left around, and with its area being reduced with each OS release.
As for data centers, there is a certain irony that on Azure those FOSS clones run on top of Hyper-V, written in C++; on Google Cloud they run on top of gVisor, written in Go; on Amazon, on top of Firecracker, written in Rust; and on ChromeOS, in containers written in a mix of Go/Rust.
So ... you're repeating what I already said.
Android -> Linux -> mostly C, and assembly
IOS -> Darwin -> XNU -> C/C++, and assembly
Hyper-V runs as a Windows Server role. Windows kernel is C, and assembly
gVisor runs on Linux -> C, assembly
Firecracker runs on KVM, which runs on Linux -> C, assembly
In every single thing you have listed, the closest thing to the "bare metal" is C, and assembly. THAT's what makes C special. Its level of adoption, ubiquity and accessibility. Not its spectacular lack of security risks.
Anyway, you have come a very long way from where the parent poster started which was:
Most of the stuff people think of as being the "metal" in
C are, in many cases, virtual abstractions created by the
operating system.
To which I merely pointed out that on the other side of the interface layer is, most commonly, C. And assembly. Operating systems design has to evolve away from this, and obviously is. I disagree that we have reached "peak C" and that it is going to decline before it gets bigger.
Unfortunately pjmlp many of the conversations we have start this way, and devolve into this. I don't think I'm going to bother again. I think one (or both) of us will just have to agree to disagree. Have a nice day.
In the meantime, did you find a memory leak in my code? https://news.ycombinator.com/item?id=21275440
Not that I want to vehemently disagree with your security statements, but I think I'd love to have a little bit more "show" and less "tell". That also applies to showing practicality of managed languages, practicality of 90's business software development (C++/COM), practicality of dead commercial languages (Delphi + VCL).
Giving just endless lists of ancient buzzwords doesn't help.
I wish you joy and entertainment interfacing your managed data structures with assembly code.
Yesterday I was forced to look into COM for the first time. There was some kind of callback that I was interested in, and it had basically two arrays as arguments, only in a super abstract form. I'm not lying, it was 30 lines of code before the function could actually access the elements in the arrays (with special "safe" function calls to get/set data).
Of course, that stupid callback had to be wrapped as a method in a class, and had to be statically declared as a callback with a special macro that does member pointer hackery, and that has to be wrapped in some more BEGIN_MAP/END_MAP (or so) macros. Oh yeah, and don't forget to list these declarations in the right order.
Thanks, but that's not why I wanted to become a programmer.
JavaScript's use of more and more functional patterns came with Underscore.js and CoffeeScript, which were both inspired by Ruby-based web dev!
I'd say the entire industry, Java included, has been moving towards more FP in a very sluggish fashion.
Regarding show, don't tell.
The 21st century React Native for Windows is written on top of COM/C++,
https://github.com/microsoft/react-native-windows
https://www.youtube.com/watch?v=IUMWFExtDSg
We are having a Delphi conference in the upcoming weeks, https://entwickler-konferenz.de/, and it gets regularly featured in the German press, https://www.dotnetpro.de/delphi-959606.html.
I was thinking you'd look at it before writing your next 25 comments, but it seems I was wrong. So I'll just wait, it's fine.
> The 21st century React Native for Windows is written on top of COM/C++
From a skim I could find exactly zero mentions of COM/C++ stuff in there. Sure, this RN might sit on a pile of stuff that has COM buried underneath, but that doesn't mean COM is a necessity to do this React stuff, nor that it's a good design from a developer's perspective.
You give zero ideas what's a good idea about COM. Just buzzwords and links to stuff and more stuff, with no relation obvious to me.
If you actually have to go through the whole COM boilerplate and the abominations to build a project with COM, just to connect to a service, because some people thought it wasn't necessary to provide a simple API (connect()/disconnect()/read_next_event()) then the whole thing isn't so funny anymore.
I really don't know what kind of COM you have been writing, because COM from VCL, MFC, ATL, UWP, Delphi, .NET surely doesn't fulfill that description.
As for what COM is good for,
"Component Software: Beyond Object-Oriented Programming"
https://www.amazon.com/Component-Software-Object-Oriented-Pr...
As for other languages, I haven't touched COM at all, but the idea of making GUIDs for stuff and registering components in the operating system doesn't seem like a good default approach to me. Pretty sure it's more reliable to link object files together by default, so you can control and change what you get without the bureaucracy of versioning, etc.
> ReactNative for Windows uses WinUI and XAML Islands, which is UWP, aka COM.
Is the fact that COM is buried under this pile more than an unfortunate implementation detail?
I would argue that when other objects from different parts of the code depend on the same state and there is no clear hierarchy or data flow direction between those objects, then that is going to cause problems regardless of whether the language is OOP or FP. The problems will manifest themselves in different ways but it will be messy and difficult to debug in either case (FP or OOP) because this is an architectural problem and not a programming problem. It will require a refactoring.
OOP helps to reduce architectural problems like this because it encourages developers to break logic up into modules which have distinct, non-overlapping concerns.
I have one compute device, the GPU, that I program with its language (eg GLSL, OpenCL).
I have another compute device, the CPU, that I program with its language (eg C, C++).
I have code to control these devices, that mostly handles scheduling and waiting on the results of these computations (as well as network traffic and user input), and I program that in a language that supports functional style (eg C#, TypeScript).
But that doesn't really happen in reality. FP languages promised auto-parallelisation for decades and never delivered. Plus you can get it in imperative languages too - like with Java's parallel streams. But I never see a parallel stream in real use.
The kind of reordering you see in imperative programs tends to be on the small scale, affecting only nearby primitive operations within a single thread. You don't generally see imperative compilers automatically farming out large sections of the program onto separate threads to be evaluated in parallel. That is something that only really becomes practical when you can be sure that the evaluation of one part won't affect any other part, i.e. in a language with referential transparency.
It's nice to be able to make a function as concisely as something like:

    const foo_finder = R.find(R.propEq('prop', 'foo'))
    ...
    const a_foo = foo_finder(a_list)
For example according to the documentation in GCC 7.4 -O3 turns on:
-fgcse-after-reload
-finline-functions
-fipa-cp-clone
-fpeel-loops
-fpredictive-commoning
-fsplit-paths
-ftree-loop-distribute-patterns
-ftree-loop-vectorize
-ftree-partial-pre
-ftree-slp-vectorize
-funswitch-loops
-fvect-cost-model
whereas in GCC 9.2 -O3 turns on the above, plus:
-floop-interchange
-floop-unroll-and-jam
-ftree-loop-distribution
-fversion-loops-for-strides
So unless you control the exact version of the compiler that will generate the binaries you will give out, you do not know exactly what specifying "-O3" will do. Moreover, even though you know the switches, their documentation is basically nothing. For a random example, what does "-floop-unroll-and-jam" do? The GCC 9.2 documentation combines it with "-ftree-loop-linear", "-floop-interchange", "-floop-strip-mine" and "-floop-block", and all it says is:
> Perform loop nest optimizations. Same as -floop-nest-optimize. To use this code transformation, GCC has to be configured with --with-isl to enable the Graphite loop transformation infrastructure.
...what does that even mean? What sort of effect will those transformations have on the code? Why are they all jumbled into one explanation? Are they exactly the same? Why does it say that they are the same as "-floop-nest-optimize"? Which option is the same? All of them? The "-floop-nest-optimize" documentation says:
> Enable the isl based loop nest optimizer. This is a generic loop nest optimizer based on the Pluto optimization algorithms. It calculates a loop structure optimized for data-locality and parallelism. This option is experimental.
Based on the Pluto optimization algorithms? Even assuming that this refers to "PLUTO - An automatic parallelizer and locality optimizer for affine loop nests" (this is a guess; there are no other references in the GCC documentation as far as I can tell), does it mean they are the same as the code in Pluto, that they are based on that code with modifications, or that they are based on the general ideas/concepts/algorithms?
--
So it isn't really a surprise that most people simply throw out "-Ofast" (or -O3 or -O2 or whatever) and hope for the best. They do not know better, and they cannot know better, since their compiler doesn't provide them any further information. And this is where all the FUD and fear about C's undefined behavior comes from - people not knowing what exactly happens because they are not even told.
EDIT: An example of effective parallelism in Haskell:
    import Control.Parallel (par)

    fib n
      | n < 2 = 1
      | n >= 15 = b `par` a `seq` a + b
      | True = a + b
      where a = fib (n-2); b = fib (n-1)

    main = print $ map fib [0..39]
Note that the implementation of `fib` has been deliberately pessimized to simulate an expensive computation. The only difference from the non-parallel version is the use of `par` and `seq` to hint that the two operands should be evaluated in parallel when n >= 15. These hints cannot change the result, only the evaluation strategy. Compile and link with "-threaded -with-rtsopts=-N" and this will automatically take advantage of multiple cores. (1C=9.9s elapsed; 2C=5.4s; 3C=4s; 4C=3.5s)

Javascript without any support for mutation or other side effects wouldn't really be recognizable as Javascript any more.
We end up having to rely heavily on compilers like LLVM, which work out exactly what depends on what and how best to lay out the instructions accordingly.
Imagine if the dominant programming style in the last few decades had been a declarative one. We wouldn't have had any of this nonsense about working out after the fact what depends on what, we could have been sending it right down to the CPU level so that it could deal with it.
Saying people who don't optimize for performance don't have technical excellence is just like saying people who don't get all of their program to fit into 32kb don't have technical excellence.
Yes, it requires skill to get a program to run in such a small amount of space, just like it takes skill to perform detailed performance optimizations. But in either case, if that's not your job, you're wasting your time and someone else's, even if it makes you happy to do so.
A product is designed to serve a purpose; if instead of working on that purpose a developer is squeezing out a few additional cycles of perf or a few additional KB of memory, they have the wrong priorities.
No, that doesn't mean go to the other extreme, but choosing not to spend unnecessary time on performance or size optimization is entirely unrelated to technical excellence. And any senior engineer knows this.
I've seen a couple of gui toolkits in rust following the Elm architecture and I think it's an amazing idea. It would be great if I was able to create apps like this using something like Qt behind the scenes.
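For readers who haven't met it, the Elm architecture boils down to three pieces: a model, a message type, and a pure update function. A minimal sketch in Haskell (a hypothetical counter app, no real GUI toolkit wired up):

    data Model = Model { count :: Int }
    data Msg   = Increment | Decrement

    -- Every state change goes through one pure function, which is
    -- what makes this style so easy to test and reason about.
    update :: Msg -> Model -> Model
    update Increment m = m { count = count m + 1 }
    update Decrement m = m { count = count m - 1 }

    -- Stand-in for a real widget tree.
    view :: Model -> String
    view m = "Count: " ++ show (count m)

A toolkit like Qt would then own the event loop, feed Msg values into update, and re-render from view.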
I've been trying to see why fp isn't intuitive for me.
I suspect it's like a second (human) language acquired as an adult: only those with a talent for language (maybe 5%?) can become fluent with practice.
Regarding my first example, I see recursion (or induction) as the essence of fp; and the recurrence form of arithmetic sequences is the simplest recursion I've seen used in mathematics.
The explicit form in that example is harder to justify as "imperative". But a commonality of imperative style is referring to the original input, rather than a previous step (see the first line of my above comment). This isn't the literal meaning of "imperative", but may be a key distinction between fp and ip style - the one that causes the intuitive/fluency issue for me.
To illustrate using my third (jq) example of suffixes, here's an "imperative" version, in Python:

    for i in range(len(a)):
        # a suffix
        for j in range(i, len(a)):
            print(a[j], end="")
        print()

This is so much longer than jq (though shorter had it used .[j:]), but it is how I understand the problem, at first and most easily. It always refers to the starting input of the problem, not the previous step, and this might be why it's easier for me.
I'm interested in your comment - could you elaborate please? There's a few ways to relate your comment to different parts of mine, and I'm not sure which one was intended.
Keeping a functional style, regardless of the language (although FP languages lend themselves better to this) can help in keeping code more decoupled, since you have to be explicit about side effects.
I think that both FP and imperative languages have places where they shine, and I freely switch between them depending on the project. Given how much some imperative languages have recently borrowed from FP languages, I think that this shows that functional programming has some significant merits.
We have code, so use code: it can be parsed, evaluated by the type checker, and so on. What if you mistype 'fop' instead of 'foo'?
There can be leakage when the given model is not perfectly accurate and you need the true implementation details - in debugging, in performance, in working out how to do things. (This also happens with imperative code; it can be very helpful to have the source of libraries.)
But I feel a general issue is that it might not be a good fit for the human code processing system... Our interactions in the real world are more like imperative programming - not just familiarity, but how we evolved. This issue is similar to how quantum physics and relativity aren't a good match to the human physics system, which seems to be the mechanical/contact theory. To convert things to recursion is like working out an inductive proof - you can do it, but it is harder and more work than just getting it done in the first place.
A specific issue about this is that functional recursion is based on the previous step, whereas imperative code is usually based on the starting step. Like, build a new list at each recursion vs. indices into the input list. The latter is easier because it's always the same thing being indexed, instead of changing with each recursion.
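To make that distinction concrete, here are both styles for the suffixes example from up-thread, sketched in Haskell (hypothetical function names):

    -- Based on the previous step: each call recurses on the tail.
    suffixesRec :: [a] -> [[a]]
    suffixesRec []          = [[]]
    suffixesRec xs@(_:rest) = xs : suffixesRec rest

    -- Based on the starting input: always index into the original list.
    suffixesIdx :: [a] -> [[a]]
    suffixesIdx xs = [drop i xs | i <- [0 .. length xs]]

Both return the same suffixes; they differ only in whether each step refers to the previous result or to the original input.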
But I disagree with you that recursion is the essence of fp. For your concrete example, a more functional version of doing that (in Python) would be something like:
    print("\n".join(a[i:] for i in range(len(a))))
No need to reuse f(i-1) when you can express f(i) directly. Reusing the previous step (whether by recursion, by carrying intermediate computations in local variables in a loop, or through a fold) should only be done when absolutely necessary.
But it doesn't actually matter. How often is parallelStream used in reality? Basically never. I would find the arguments of FP developers convincing if I was constantly encountering stories of people who really wanted to use parallelStream but kept encountering bugs where they made a thinko and accidentally mutated shared state until they gave up in frustration and just went back to the old ways. I'd find it convincing if I'd had that experience also. In practice, avoiding shared state over the kind of problems automated parallelism is used for is very easy and comes naturally. I've used parallel streams only rarely, and actually never in a real shipping program I think, but when I did I was fully aware of what mutable state might be shared and it wasn't an issue.
The real problem with this kind of parallelism is that it's too coarse-grained, and even writing par or parallelStream is too much mental overhead, because you often can't easily predict when it'll be a win vs. a loss. For instance, you might write a program expecting the list of inputs to usually be around 100 items: probably not worth parallelising, so you ignore it, or you try it and discover the program got slower. Then one day a user runs it on 100 million items. The parallelism could have helped there, but there's no mechanism to decide automatically whether to use it, so in practice it wasn't used.
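The same trade-off shows up outside Java; a hedged sketch in Python (the function names and the per-item work are made up for illustration):

from concurrent.futures import ProcessPoolExecutor

def expensive(x):
    # stand-in for real per-item work
    return sum(i * i for i in range(x))

def run_sequential(items):
    return [expensive(x) for x in items]

def run_parallel(items):
    # the code change is trivial; knowing when it pays off is not -
    # for ~100 cheap items the pool overhead often makes this slower
    with ProcessPoolExecutor() as pool:
        return list(pool.map(expensive, items))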
Automatic vectorisation attacked this problem from a different angle and did make some good progress over time. But that just creates a different problem - you really need the performance but apparently small changes can perturb the optimisations for unclear reasons, so there's an invisible performance cliff. The Java guys pushed auto-vectorisation for years but have now given up on it (sorta) and are exposing explicit SIMD APIs.
As for libraries, you can just treat them as stateful external things that you interact with at the boundary, the same way IO / network calls are stateful.
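One way to read that, as a minimal Python sketch (the client object and its methods are hypothetical): keep the stateful library at the edge and the decision logic pure:

def pick_cheap(prices):
    # pure logic: no library, no state, easy to test
    return [p for p in prices if p < 100]

def main(client):
    # impure shell: the stateful library lives here, like IO would
    prices = client.fetch_prices()    # hypothetical stateful call
    for p in pick_cheap(prices):
        client.place_order(p)         # hypothetical stateful call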
You also followed this up with
> The kind of problems that emerge at scale are not the kind of problems FP tackles.
I cherry-picked these examples to demonstrate that you're completely talking out of your ass here.
> I didn’t say it is impossible to do X with FP - I said it is not necessary to do X in FP. You can convince yourself of that by looking for larger-scale non-FP counter-examples to the ones you've cherry-picked.
I never said it wasn't possible to tackle these problems without FP.
You need to get rid of the assumption that "if X is better than Y at task Z, everyone will use X rather than Y for task Z". You've used that line of logic to try to invalidate FP's capabilities, and it simply does not make sense.
Thanks, that's my main concern (fp was just an example). Would you agree the reason it is bad is because there is more to track mentally in the execution model (i.e. the intermediate results)?
I think a complex execution model is problematic in general (it sounds obvious when I say it that way).
> which mathematicians love to use so much,
hmm... I was thinking "induction", and believed that fp is the same.
> But I disagree with you that recursion is the essence of fp
This is BTW now, but that statement surprises me. Can you elaborate? What is the essence of fp (does it have one)?
Is your py version "more functional"? I'm so wedded to the idea that fp = recursion that that's probably why it doesn't seem functional to me. What makes it functional? Just that it's a nested expression (i.e. function calls)?
I want to write UI in JavaScript because it's a really nice language for prototyping, but I also want it to be fast, and JavaScript is unpredictable. Now, this might not be the case with OCaml, but no matter what optimizations your compiler (or JIT interpreter) can do, you're still living in a lie: it's still an abstraction, and it's going to leak at some point.
I've recently removed quite a lot of Rust dependencies (wrappers) and the speedup is very noticeable. Abstractions always come with a cost, and you can't just pretend you're living in a rainbow world.
BTW: you're not going to get much lower than 50 MB. Cocoa has some overhead (10 MB IIRC), Node does too (20 MB), and the OCaml GC needs some heap of its own. And if you have any images, you need to keep them somewhere before sending them to the GPU; a GC, to be fast, also needs to keep some memory around so that allocations are cheaper than plain malloc.
BTW2: in the Rust world, it's common to see custom allocators and data-oriented programming, because these costs start to get noticeable - and that is hard to do if you can't reason about memory.
If anyone is interested, here's the repo: https://github.com/cztomsik/graffiti
For dev teams of sufficiently large size, a general principle is: whatever crazy things the language allows, someone is going to do them and commit them into the codebase.
The .join() taking the iterator in their example is, if you look closer, very much a fold/reduce: it repeatedly joins the thus-far assembled string, \n, and the next part. Recursion!
Also, rather than mutable i/j variables being incremented (albeit implicitly so in your example), it generates the list of all the numbers to run over.
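Both points can be made explicit in Python (a toy sketch; the names are mine):

from functools import reduce

parts = ["abc", "bc", "c"]

# .join() behaves like a fold over the parts...
folded = reduce(lambda acc, part: acc + "\n" + part, parts)
assert folded == "\n".join(parts)

# ...and range() hands over all the indices up front,
# instead of a counter mutated as you go
indices = list(range(len(parts)))   # [0, 1, 2]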