In my experience “oversubscribing” threads to cores (more threads than cores) provides a wall-clock time benefit.
I think one thread per core would work better without preemptive scheduling.
But then we aren’t talking about Unix.
This works fine on Linux, and it's a common approach in trading systems, where it's acceptable to oversubscribe a bunch of cores for this kind of thing. The cores are mostly busy spinning and doing nothing, so it's very inefficient in terms of actual work, but great for latency and throughput when you need it.
It's not blanket good advice for all things.
Most developers are unfamiliar with the design idioms for thread-per-core (TPC), e.g. how to properly balance and shed load between cores.
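For concreteness, here's a minimal sketch of the pin-and-spin pattern described above, assuming Linux and glibc's `pthread_setaffinity_np`; the `work_ready` flag, the worker's "work", and the choice of core 3 are placeholders for illustration, not anything from the systems being discussed:

```c
/* Sketch only: pin the calling thread to one core and busy-poll a flag
 * instead of blocking. work_ready and the "work" are placeholders. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int work_ready = 0;     /* hypothetical signal from a producer */

static void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    /* Restrict this thread to exactly one CPU core. */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *spin_worker(void *arg)
{
    pin_to_core(*(int *)arg);
    for (;;) {
        /* Busy-poll: burn the core rather than sleep, trading efficiency
         * for the latency win described above. */
        if (atomic_exchange(&work_ready, 0))
            puts("doing work");        /* stand-in for real work */
    }
    return NULL;
}

int main(void)
{
    int core = 3;                      /* a core set aside for this thread */
    pthread_t t;
    pthread_create(&t, NULL, spin_worker, &core);
    pthread_join(t, NULL);
}
```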
In this very specific case, it seems as though the vast majority of the webserver's work is asynchronous and event-based, so the server is never actually waiting on I/O: once data is ready you dump it somewhere the kernel can get to it and move on to the next request, if there is one.
I think this gets this specific project close to the platonic ideal of a one-thread-per-core workload, if indeed you're never waiting on I/O or any syscalls, but it should come with the extreme caveat that this is almost never how the real world works, so don't go artificially limiting your application to `nproc` threads without actually testing real-world use cases first.
For workloads that are a mix of IO and non-trivial CPU work, it can still work but is much, much harder to get right.
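As a rough illustration of that one-thread-per-core shape, here's a hedged sketch assuming Linux, `SO_REUSEPORT`, and an arbitrary port 8080; each thread owns its own listener and epoll instance, so it only reacts to readiness events and never blocks waiting on another thread (error handling omitted, echo stands in for real request handling):

```c
/* One event loop per core: each thread has its own listening socket
 * (via SO_REUSEPORT) and its own epoll instance. */
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

static void *event_loop(void *arg)
{
    (void)arg;
    int lfd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
    int one = 1;
    setsockopt(lfd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, SOMAXCONN);

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
    epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(ep, events, 64, -1);
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == lfd) {
                /* New connection: register it and move on, never block. */
                int c = accept4(lfd, NULL, NULL, SOCK_NONBLOCK);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = c };
                epoll_ctl(ep, EPOLL_CTL_ADD, c, &cev);
            } else {
                /* Ready socket: hand the bytes back to the kernel and
                 * continue; echo stands in for real request handling. */
                char buf[4096];
                ssize_t r = read(events[i].data.fd, buf, sizeof(buf));
                if (r <= 0) close(events[i].data.fd);
                else write(events[i].data.fd, buf, (size_t)r);
            }
        }
    }
    return NULL;
}

int main(void)
{
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);   /* the `nproc` count */
    pthread_t tids[64];
    for (long i = 0; i < ncores && i < 64; i++)
        pthread_create(&tids[i], NULL, event_loop, NULL);
    for (long i = 0; i < ncores && i < 64; i++)
        pthread_join(tids[i], NULL);
}
```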
There's Ron Minnich's port of "Nix" (not NixOS as you may know it) to 9front.
The entire point of this is to prevent the kernel from preempting and switching out CPU cores that should be dedicated to an application ("Application Cores").
One could imagine this arrangement plus io_uring would be awfully nice.
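To make that concrete, here's a speculative sketch of the combination, assuming Linux with liburing available: `IORING_SETUP_SQPOLL` plus `IORING_SETUP_SQ_AFF` pins a kernel submission-polling thread to a chosen core (core 3 here, arbitrarily), so the application can queue I/O without making submit syscalls. Note that on kernels before 5.11, SQPOLL also requires registered files, which is omitted here:

```c
/* Sketch: io_uring with a kernel SQ poller pinned to a dedicated core. */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct io_uring_params p;
    memset(&p, 0, sizeof(p));
    p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_SQ_AFF;
    p.sq_thread_cpu = 3;        /* pin the kernel SQ poller to core 3 */
    p.sq_thread_idle = 2000;    /* ms of inactivity before the poller idles */

    struct io_uring ring;
    int ret = io_uring_queue_init_params(64, &ring, &p);
    if (ret < 0) {
        fprintf(stderr, "io_uring init failed: %s\n", strerror(-ret));
        return 1;
    }

    int fd = open("/etc/hostname", O_RDONLY);
    char buf[256];

    /* Queue a read; with SQPOLL the kernel picks it up from its own core. */
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_submit(&ring);

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);
    printf("read %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```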