Fibers
Green threads
Coroutines
Actors
Queues (e.g. GCD)
…
Basically you need to reason about what your system will do. Separate concerns: each component is a server (a microservice, if you like) with its own backpressure.
They schedule jobs on a queue.
The jobs come with some context; I don’t care if it’s a closure on the heap, a fiber with a stack, or whatever. JavaScript, being single-threaded with promises, wastefully unwinds the entire stack on each tick instead of saving context. With callbacks you can save the context in closures. But even that is pretty fast.
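Here’s a minimal sketch in Go (any of the languages above would do) of what I mean: each server is a worker pool draining a bounded queue, the bounded channel is the backpressure, and jobs are closures so their context travels with them. All the names are illustrative, not a real library.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Job is just a closure; whatever context it needs is captured inside it.
type Job func()

// Server owns a bounded queue of jobs; the queue capacity is the
// backpressure limit.
type Server struct {
	queue chan Job
	wg    sync.WaitGroup
}

func NewServer(capacity, workers int) *Server {
	s := &Server{queue: make(chan Job, capacity)}
	for i := 0; i < workers; i++ {
		s.wg.Add(1)
		go func() { // a goroutine is Go's green thread
			defer s.wg.Done()
			for job := range s.queue {
				job()
			}
		}()
	}
	return s
}

// Submit fails fast when the queue is full, pushing backpressure to the caller.
func (s *Server) Submit(j Job) error {
	select {
	case s.queue <- j:
		return nil
	default:
		return errors.New("queue full: apply backpressure upstream")
	}
}

func (s *Server) Shutdown() {
	close(s.queue)
	s.wg.Wait()
}

func main() {
	s := NewServer(8, 2)
	for i := 0; i < 10; i++ {
		i := i // context saved in the closure, no stack to unwind
		if err := s.Submit(func() { fmt.Println("job", i) }); err != nil {
			fmt.Println("job", i, "rejected:", err)
		}
	}
	s.Shutdown()
}
```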
Anyway, then you can load-balance the context across machines. The easiest approach is server affinity for each job. The servers hold only a cache of the data, so if a server fails, its replacement can grab the job from an indexed database. Insertion and lookup are O(log n) each, and jobs are deleted when done (maybe leaving behind a small log that gets compacted), so there are no memory leaks.
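A sketch of the affinity idea, assuming rendezvous (highest-random-weight) hashing; the server names are made up. Every node computes the same owner for a job ID with no coordination, and if the owner dies, dropping it from the list makes the same function point at the replacement, which re-hydrates the job from the indexed database.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// ownerFor returns the server with the highest hash(server, jobID) score.
func ownerFor(jobID string, servers []string) string {
	var best string
	var bestScore uint64
	for _, s := range servers {
		h := fnv.New64a()
		h.Write([]byte(s))
		h.Write([]byte(jobID))
		if score := h.Sum64(); best == "" || score > bestScore {
			best, bestScore = s, score
		}
	}
	return best
}

func main() {
	servers := []string{"srv-a", "srv-b", "srv-c"}
	fmt.Println(ownerFor("job-42", servers)) // stable affinity
	// The owner fails: remove it, and the survivors agree on the replacement.
	fmt.Println(ownerFor("job-42", []string{"srv-a", "srv-c"}))
}
```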
Oh yeah, and whatever you store durably should be sharded and indexed properly, so practically unlimited amounts can be stored. Availability in a given shard is a function of replicating the data, and the economics of it is that the client should pay with credits each time they access it. You can even replicate on demand (like BitTorrent re-seeding) to handle spikes.
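A minimal sketch of that, assuming records are routed to shards by a stable hash of the key and that each read debits a per-client credit balance; shardFor, Read, and the credits ledger are all hypothetical names, not a real API.

```go
package main

import (
	"errors"
	"fmt"
	"hash/fnv"
)

const numShards = 16

// shardFor maps a record key to a shard; every node computes the same answer,
// and adding shards grows capacity roughly linearly.
func shardFor(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32()) % numShards
}

// credits is a stand-in for a per-client billing ledger.
var credits = map[string]int{"client-1": 2}

// Read charges one credit per access, the economics mentioned above.
func Read(client, key string) (string, error) {
	if credits[client] <= 0 {
		return "", errors.New("out of credits")
	}
	credits[client]--
	return fmt.Sprintf("value-of(%s) from shard %d", key, shardFor(key)), nil
}

func main() {
	for i := 0; i < 3; i++ {
		v, err := Read("client-1", "user:1001")
		fmt.Println(v, err) // third read fails: out of credits
	}
}
```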
This is the general framework whether you use Erlang, Go, Python, PHP, or whatever. It scales within a company and even across companies (as long as you sign/encrypt payloads cryptographically).
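For the cross-company case, a sketch assuming a shared HMAC key between the two parties (with public keys you’d use e.g. ed25519 signatures instead); the payload format is made up. The receiver recomputes the MAC and rejects anything that doesn’t verify before scheduling the job.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

var sharedKey = []byte("example-shared-secret") // exchanged out of band

// sign returns a hex-encoded HMAC-SHA256 tag for the payload.
func sign(payload []byte) string {
	mac := hmac.New(sha256.New, sharedKey)
	mac.Write(payload)
	return hex.EncodeToString(mac.Sum(nil))
}

// verify recomputes the tag and compares in constant time.
func verify(payload []byte, sig string) bool {
	expected, err := hex.DecodeString(sig)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, sharedKey)
	mac.Write(payload)
	return hmac.Equal(mac.Sum(nil), expected)
}

func main() {
	job := []byte(`{"id":"job-42","task":"resize","ctx":{"img":"a.png"}}`)
	sig := sign(job)
	fmt.Println("verified:", verify(job, sig))              // true
	fmt.Println("tampered:", verify(append(job, '!'), sig)) // false
}
```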
It doesn’t matter so much whether you use php-fpm with threads, or Swoole, or the new kid on the block, FrankenPHP. Well, I should say I prefer the shared-nothing architecture of PHP and APC. But in Python it’s the same story with, e.g., Twisted vs. just some WSGI server.
You’re welcome.