
287 points shadaj | 2 comments
rectang ◴[] No.43196141[source]
Ten years ago, I had lunch with Patricia Shanahan, who worked for Sun on multi-core CPUs several decades ago (before taking a post-career turn volunteering at the ASF which is where I met her). There was a striking similarity between the problems that Sun had been concerned with back then and the problems of the distributed systems that power so much the world today.

Some time has passed since then — and yet, most people still develop software using sequential programming models, thinking about concurrency occasionally.

It is a durable paradigm. There has been no revolution of the sort that the author of this post yearns for. If "Distributed Systems Programming Has Stalled", it stalled a long time ago, and perhaps for good reasons.

replies(5): >>43196213 #>>43196377 #>>43196635 #>>43197344 #>>43197661 #
EtCepeyd ◴[] No.43196377[source]
> and perhaps for good reasons

For the very good reason that the underlying math is insanely complicated and tiresome for mere practitioners (which, although I have a background in math, I openly aim to be).

For example, even if you assume sequential consistency (which is an expensive assumption) in a C or C++ multi-threaded program, reasoning about the program isn't easy. And once you consider barriers, atomics, and load-acquire/store-release explicitly, the "SMP" (shared memory) proposition falls apart, and you can't avoid programming for a message passing system with independent actors -- be those separate networked servers, or separate CPUs on a board. I claim that struggling with async messaging between independent peers as a baseline is not why most people get interested in programming.

Our systems (= normal motherboards on one end, and networked peer-to-peer systems on the other) have become so concurrent that doing nearly anything efficiently nowadays requires us to think about messaging between peers, and that's very, very foreign to our traditional, sequential, imperative programming languages. (It's also foreign to how most of us think.)

Thus, I certainly don't want a simple (but leaky) software / programming abstraction that hides the underlying hardware complexity; instead, I want the hardware to be simple (as little internally distributed as possible), so that the simplicity of the (sequential, imperative) programming language then reflects and matches the hardware well. I think this can only be found in embedded nowadays (if at all), which is why I think many are drawn to embedded recently.

replies(4): >>43196464 #>>43196786 #>>43197684 #>>43199865 #
hinkley ◴[] No.43196786[source]
I think SaaS and multicore hardware are evolving together because a queue of unrelated, partially ordered tasks running in parallel is a hell of a lot easier to think about than trying to leverage 6-128 cores to keep from ending up with a single user process that’s wasting 84-99% of available resources. Most people are not equipped to contend with Amdahl’s Law. Carving 5% out of the sequential part of a calculation is quickly becoming more time efficient than taking 50% out of the parallel parts, and we’ve spent 40 years beating the urge to reach for 1-4% improvements out of people. When people find out I got a 30% improvement by doing 8+6+4+4+3+2+1.5+1.5 they quickly find someplace else to be. The person who did the compressed pointer work on v8 to make it as fast as 64 bit pointers is the only other person in over a decade I’ve seen document working this way. If you’re reading this we should do lunch.

So because we discovered a lucrative, embarrassingly parallel problem domain, that's basically what the entire industry has been doing for 15 years, since multicore became unavoidable. We have web services and compilers being multi-core and not a lot in between. How many video games still run like three threads, each of those for completely distinct tasks?

replies(2): >>43207818 #>>43208672 #
1. linkregister ◴[] No.43207818[source]
> 8+6+4+4+3+2+1.5+1.5

What is this referring to? It sounds like a fascinating problem.

replies(1): >>43209618 #
2. EtCepeyd ◴[] No.43209618[source]
>> When people find out I got a 30% improvement by doing 8+6+4+4+3+2+1.5+1.5

> What is this referring to?

30 = 8+6+4+4+3+2+1.5+1.5