In my experience “oversubscribing” threads to cores (more threads than cores) provides a wall-clock time benefit.
I think one thread per core would work better without preemptive scheduling.
But then we aren’t talking about Unix.
In this very specific case, the vast majority of the webserver's work appears to be asynchronous and event-based, so the webserver itself is never blocked waiting on I/O: once data is ready, you hand it off somewhere the kernel can reach it and move on to the next request, if there is one.
I think that gets this particular project close to the platonic ideal of a one-thread-per-core workload, provided you really never block on I/O or syscalls. It should come with a strong caveat, though: this is almost never how the real world works, so don't artificially limit your application to `nproc` threads without testing real-world use cases first.
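For concreteness, here's a minimal, untested sketch of the "one pinned thread per core" arrangement (assumes Linux/glibc; `worker_main` is a hypothetical stand-in for a real non-blocking event loop):

```c
/* One worker thread per online core, each pinned with pthread_setaffinity_np().
 * Sketch only: error handling omitted, worker_main() is a placeholder. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *worker_main(void *arg)
{
    long core = (long)arg;
    /* A real server would run its non-blocking event loop here. */
    printf("worker pinned to core %ld\n", core);
    return NULL;
}

int main(void)
{
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);   /* roughly `nproc` */
    pthread_t *tids = calloc(ncores, sizeof *tids);

    for (long i = 0; i < ncores; i++) {
        pthread_create(&tids[i], NULL, worker_main, (void *)i);

        /* Pin each worker to its own core so the scheduler
         * has no reason to migrate it. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(i, &set);
        pthread_setaffinity_np(tids[i], sizeof(set), &set);
    }
    for (long i = 0; i < ncores; i++)
        pthread_join(tids[i], NULL);
    free(tids);
    return 0;
}
```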
There's Ron Minnich's port of "Nix" (not the Nix/NixOS you may know) to 9front.
The entire point of that is to keep the kernel from preempting and switching tasks out on CPU cores that are dedicated to an application ("application cores").
One could imagine this arrangement plus io_uring would be awfully nice.
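A rough sketch of what the io_uring half might look like on Linux, where each pinned worker owns its own ring and reaps completions itself (assumes liburing, link with -luring; a no-op request stands in for real I/O):

```c
/* Per-worker io_uring sketch: init a small ring, submit one request,
 * wait for its completion. Error handling omitted. */
#include <liburing.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    io_uring_queue_init(8, &ring, 0);        /* small 8-entry queue per worker */

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_nop(sqe);                  /* stand-in for a real read/write */
    io_uring_submit(&ring);

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);          /* block until the completion arrives */
    printf("completion result: %d\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```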