
287 points shadaj | 4 comments
rectang ◴[] No.43196141[source]
Ten years ago, I had lunch with Patricia Shanahan, who worked for Sun on multi-core CPUs several decades ago (before taking a post-career turn volunteering at the ASF, which is where I met her). There was a striking similarity between the problems that Sun had been concerned with back then and the problems of the distributed systems that power so much of the world today.

Some time has passed since then — and yet, most people still develop software using sequential programming models, thinking about concurrency occasionally.

It is a durable paradigm. There has been no revolution of the sort that the author of this post yearns for. If "Distributed Systems Programming Has Stalled", it stalled a long time ago, and perhaps for good reasons.

replies(5): >>43196213 #>>43196377 #>>43196635 #>>43197344 #>>43197661 #
EtCepeyd ◴[] No.43196377[source]
> and perhaps for good reasons

For the very good reason that the underlying math is insanely complicated and tiresome for mere practitioners (which, although I have a background in math, I openly aim to be).

For example, even if you assume sequential consistency (which is an expensive assumption) in a C or C++ multi-threaded program, reasoning about the program isn't easy. And once you consider barriers, atomics, and load-acquire/store-release explicitly, the "SMP" (shared memory) proposition falls apart, and you can't avoid programming for a message-passing system with independent actors -- be those separate networked servers, or separate CPUs on a board. I claim that struggling with async messaging between independent peers as a baseline is not why most people get interested in programming.
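To make that concrete, here's a minimal sketch of the message-passing view of "shared memory": two threads where the release store is effectively a send and the acquire load a receive. (Names are made up for illustration; spinning is fine for a sketch, not for production.)

```cpp
#include <atomic>
#include <thread>

// Writer publishes `payload`, then signals with a release store ("send").
// The reader's matching acquire load ("receive") makes the payload visible.
int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // plain write
    ready.store(true, std::memory_order_release);  // publish it
}

int consumer() {
    while (!ready.load(std::memory_order_acquire)) // wait for the "message"
        ;                                          // spin: fine for a sketch
    return payload;                                // guaranteed to read 42
}

int run() {
    std::thread t(producer);
    int seen = consumer();
    t.join();
    return seen;
}
```

With relaxed ordering instead of acquire/release, the reader could see `ready == true` yet a stale `payload` -- which is exactly the kind of reasoning that stops looking like shared memory and starts looking like messaging.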

Our systems (= normal motherboards on one end, and networked peer-to-peer systems on the other end) have become so concurrent that doing nearly anything efficiently nowadays requires us to think about messaging between peers, and that's very, very foreign to our traditional, sequential, imperative programming languages. (It's also foreign to how most of us think.)

Thus, I certainly don't want a simple (but leaky) software / programming abstraction that hides the underlying hardware complexity; instead, I want the hardware to be simple (as little internally-distributed as possible), so that the simplicity of the (sequential, imperative) programming language then reflects and matches the hardware well. I think this can only be found in embedded nowadays (if at all), which is why I think many are drawn to embedded recently.

replies(4): >>43196464 #>>43196786 #>>43197684 #>>43199865 #
1. gmadsen ◴[] No.43196464[source]
I know C++ has a lackluster implementation, but do coroutines and channels solve some of these complaints? Although not inherently multithreaded, many things shouldn't be multithreaded, just paused. And channels instead of shared memory can control ordering.
replies(2): >>43196525 #>>43196850 #
2. EtCepeyd ◴[] No.43196525[source]
I've found both explicit future/promise management and coroutines difficult (even irritating) to reason about. Coroutines look simpler on the surface (than explicit future chaining), and so the syntax is less atrocious, but there are nasty traps. For example:

https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines...

3. hinkley ◴[] No.43196850[source]
Coroutines basically make the same observation as transmit windows in TCP/IP: you don't send data as fast as you can if the other end can't process it, but also if you send one at a time then you're going to be twiddling your thumbs an awful lot. So you send ten, or twenty, and you wait for signs of progress before you send more.

With coroutines it's not the network but the L1 cache. You're better off running a function a dozen times and then running another than running each in turn.
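A toy sketch of that batching idea (names like `stepA`/`stepB` are illustrative; both versions compute the same result, and you'd need a profiler to see the cache effect, so this only shows the shape of the transformation):

```cpp
#include <cstddef>
#include <vector>

// Two processing steps, each with its own working set of code and data.
void stepA(std::vector<int>& v) { for (int& x : v) x += 1; }
void stepB(std::vector<int>& v) { for (int& x : v) x *= 2; }

// Interleaved: push each element through both steps in turn, so the two
// steps keep evicting each other from the instruction/data caches.
int interleaved(std::vector<int> v) {
    for (std::size_t i = 0; i < v.size(); ++i) {
        v[i] += 1;
        v[i] *= 2;
    }
    int sum = 0;
    for (int x : v) sum += x;
    return sum;
}

// Batched: run stepA over the whole batch, then stepB, so each step's
// working set stays hot while it runs.
int batched(std::vector<int> v) {
    stepA(v);
    stepB(v);
    int sum = 0;
    for (int x : v) sum += x;
    return sum;
}
```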

replies(1): >>43199563 #
4. gmadsen ◴[] No.43199563[source]
Fair enough; that was the design choice C++ went with to not break the ABI and keep coroutine handles movable.

Rust accepted the tradeoff and can do purely stack-based async.

There are things you can do in C++ to avoid the dynamic heap allocation, but it requires a custom allocator plus predefining the size of the coroutines.

https://pigweed.dev/docs/blog/05-coroutines.html