
151 points ibobev | 5 comments
jandrewrogers ◴[] No.45660889[source]
I've worked on several thread-per-core systems that were purpose-built for extreme dynamic data and load skew. They work beautifully at very high scales on the largest hardware. The mechanics of how you design thread-per-core systems that provide uniform distribution of load without work-stealing or high-touch thread coordination have idiomatic architectures at this point. People have been putting thread-per-core architectures in production for 15+ years now and the designs have evolved dramatically.

The architectures from circa 2010 were a bit rough. While the article has some validity for architectures from 10+ years ago, the state-of-the-art for thread-per-core today looks nothing like those architectures and largely doesn't have the issues raised.

News of thread-per-core's demise has been greatly exaggerated. The benefits have measurably increased in practice as the hardware has evolved, especially for ultra-scale data infrastructure.

replies(3): >>45661411 #>>45661630 #>>45667563 #
FridgeSeal ◴[] No.45661630[source]
Are there any resources/learning material about the more modern thread-per-core approaches? It’s a particular area of interest for me, but I’ve had relatively little success finding more learning material, so I assume there’s lots of tightly guarded institutional knowledge.
replies(2): >>45664476 #>>45668196 #
1. jandrewrogers ◴[] No.45664476[source]
Unfortunately, not really. I worked in HPC when it was developed as a concept there, which is where I learned it. I brought it over into databases which was my primary area of expertise because I saw the obvious cross-over application to some scaling challenges in databases. Over time, other people have adopted the ideas but a lot of database R&D is never published.

Writing a series of articles about the history and theory of thread-per-core software architecture has been on my eternal TODO list. HPC in particular is famously an area of software that does a lot of interesting research but rarely publishes, in part due to its historical national security ties.

The original thought exercise was “what if we treated every core like a node in a supercomputing cluster” because classical multithreading was scaling poorly on early multi-core systems once the core count was 8+. The difference is that some things are much cheaper to move between cores than an HPC cluster and so you adapt the architecture to leverage the things that are cheap that you would never do on a cluster while still keeping the abstraction of a cluster.

As an example, while moving work across cores is relatively expensive (e.g. work stealing), moving data across cores is relatively cheap and low-contention. The design problem then becomes how to make moving data between cores maximally cheap, especially given modern hardware. It turns out that all of these things have elegant solutions in most cases.
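
The idea in the paragraph above can be sketched in a few lines of Rust (the names and structure here are my own illustration, not from the comment): each worker thread owns a fixed partition of the keyspace and keeps purely thread-local state, and other threads move *data* to the owning thread over a channel rather than stealing *work*, so there is no shared mutable state to contend on.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of "move data, not work": each worker owns a hash partition and
// keeps thread-local state. Real systems use SPSC rings and pin each
// thread to a core with OS affinity APIs; std::sync::mpsc stands in here
// to stay dependency-free.
fn partitioned_sum(n: u64, workers: usize) -> u64 {
    let mut senders = Vec::new();
    let mut handles = Vec::new();

    for _ in 0..workers {
        let (tx, rx) = mpsc::channel::<u64>();
        senders.push(tx);
        handles.push(thread::spawn(move || {
            // Thread-local accumulator: no locks, no cross-core writes.
            let mut local_sum: u64 = 0;
            for key in rx {
                local_sum += key;
            }
            local_sum
        }));
    }

    // Route each datum to the worker that owns its hash partition.
    for key in 0..n {
        let owner = (key % workers as u64) as usize;
        senders[owner].send(key).unwrap();
    }
    drop(senders); // close channels so workers drain their queues and exit

    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    println!("{}", partitioned_sum(1000, 4)); // sum of 0..=999
}
```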

There isn’t a one-size-fits-all architecture but you can arrive at architectures that have broad applicability. They just don’t look like the architectures you learn at university.

replies(3): >>45666836 #>>45668765 #>>45669891 #
2. FridgeSeal ◴[] No.45666836[source]
> Writing a series of articles about the history and theory of thread-per-core software architecture has been on my eternal TODO list

Your past has already been super interesting, so if you ever do get around to writing this, I’d be very excited to read it!

3. jgraettinger1 ◴[] No.45668765[source]
As someone with workloads that can benefit from these techniques, but limited resources to put them into practice, my working thesis has been:

* Use a multi-threaded tokio runtime that's allocated a thread-per-core
* Focus on application development, so that tasks are well scoped / skewed and don't _need_ stealing in the typical case
* Over time, the smart people working on Tokio will apply research to minimize the cost of work-stealing that's not actually needed.
* At the limit, where long-lived tasks can be distributed across cores and all cores are busy, the performance will be near-optimal as compared with a true thread-per-core model.
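
For reference, the first bullet is roughly this configuration fragment (assumes a `tokio` dependency; a sketch, not a complete program):

```rust
// Config sketch: a tokio multi-thread runtime sized so each worker
// thread maps 1:1 to a hardware thread. Note this is "thread-per-core
// sized", not strict core ownership: tasks may still be work-stolen
// between workers.
fn main() -> std::io::Result<()> {
    let cores = std::thread::available_parallelism()?.get();
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(cores) // one worker per core
        .enable_all()          // timer + I/O drivers
        .build()?;
    rt.block_on(async {
        // application tasks spawned here
    });
    Ok(())
}
```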

What's your hot take? Are there fundamental optimizations to a modern thread-per-core architecture which seem _impossible_ to capture in a work-stealing architecture like Tokio's?

replies(1): >>45678129 #
4. packetlost ◴[] No.45669891[source]
I'll toss $20-50 your way to bump up the priority on writing that knowledge down; the only strings attached are that it has to actually get done and be publicly available.
5. jandrewrogers ◴[] No.45678129[source]
A core assumption underlying thread-per-core architecture is that you will be designing a custom I/O and execution scheduler that is purpose-built for your software and workload at a very granular level. Most expectations of large performance benefits follow from this assumption.

At some point, people started using the thread-per-core style while delegating scheduling to a third-party runtime, which almost completely defeats the purpose. If you let tokio et al do that for you, you are leaving a lot of performance and scale on the table. Scheduling is an NP-hard problem; the point of solving it at compile-time is that it is computationally intractable for generic code to construct a good schedule at runtime except in trivial cases. We need schedulers to consistently make excellent decisions extremely efficiently. I think this point is often lost in discussions of thread-per-core. In the old days we didn’t have runtimes; it was just assumed you would be designing an exotic scheduler. The lack of discussion around this may have led people to believe it wasn’t a critical aspect.
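
A toy illustration of what "your own scheduler" means here (the policy and names are entirely hypothetical, not something the comment prescribes): one scheduler instance lives on each core, owns its run queue, and ranks ready tasks with a policy chosen at design time for this workload. Here that policy is earliest-deadline-first over a thread-local heap, so scheduling decisions never cross cores.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Per-core scheduling sketch: rank ready tasks (name, deadline_us) by a
// workload-specific policy, here earliest-deadline-first. A real
// scheduler would also weigh I/O completions, priorities, and fairness
// across many touch points in the system.
fn run_order(ready: Vec<(&'static str, u64)>) -> Vec<&'static str> {
    let mut heap: BinaryHeap<Reverse<(u64, &'static str)>> = ready
        .into_iter()
        .map(|(name, deadline_us)| Reverse((deadline_us, name)))
        .collect();
    let mut order = Vec::new();
    while let Some(Reverse((_, name))) = heap.pop() {
        order.push(name);
    }
    order
}

fn main() {
    // A WAL flush with the nearest deadline runs before a read or a compaction.
    let order = run_order(vec![("read", 300), ("wal_flush", 100), ("compact", 900)]);
    println!("{order:?}");
}
```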

The reality is that designing excellent workload-optimized I/O and execution schedulers is an esoteric, high-skill endeavor. It requires enormous amounts of patience and craft, and it doesn’t lend itself to quick-and-dirty prototypes. If you aren’t willing to spend months designing the many touch points for the scheduler throughout your software and the algorithms for how events across those touch points interact, and analyzing the scheduler at a systems level for equilibria and boundary conditions, then thread-per-core might not be worth the effort.

That said, it isn’t rocket science to design a reasonable schedule for software that is e.g. just taking data off the wire and doing something with it. Most systems are not nearly as complex as e.g. a full-featured database kernel.