
291 points rbanffy | 1 comment
sgarland ◴[] No.44004897[source]
> Instead, many reach for multiprocessing, but spawning processes is expensive

Agreed.

> and communicating across processes often requires making expensive copies of data

SharedMemory [0] exists. Never understood why this isn’t used more frequently. There’s even a ShareableList, which does exactly what it sounds like, and is awesome.

[0]: https://docs.python.org/3/library/multiprocessing.shared_mem...
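A minimal sketch of the zero-copy pattern the comment is pointing at: the parent creates a named SharedMemory segment, a child process attaches to it by name and mutates the buffer in place, and the parent reads the change without any pickling or copying. The `worker`/`demo` names and the 16-byte size are illustrative, not from the docs.

```python
from multiprocessing import Process, shared_memory

def worker(name: str) -> None:
    # Attach to the existing segment by name and mutate it in place.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42
    shm.close()

def demo() -> int:
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        p = Process(target=worker, args=(shm.name,))
        p.start()
        p.join()
        return shm.buf[0]  # written by the child; no copy was made
    finally:
        shm.close()
        shm.unlink()  # creator is responsible for freeing the segment

if __name__ == "__main__":
    print(demo())
```

The `if __name__` guard matters on platforms that use the spawn start method (macOS, Windows), where the child re-imports the main module.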

tinix ◴[] No.44007267[source]
Shared memory only works on dedicated hardware.

If you're running in something like AWS Fargate, there is no shared memory; you have to use the network and file system, which adds far more latency than spawning a process.

Copying processes through fork is a whole different problem.

Green threads and an actor model will get you much further, in my experience.
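A hypothetical sketch of the "green threads + actor model" pattern the comment recommends, here using asyncio tasks as the green threads (one possible stand-in; the comment names no library). Each actor is a task that owns its state and drains its own mailbox queue, so no locks and no cross-process copies are needed. The `counter_actor` name and the `None` shutdown sentinel are illustrative choices.

```python
import asyncio

async def counter_actor(mailbox: asyncio.Queue) -> int:
    # The actor privately owns its state (total); the only way
    # to affect it is by sending a message to the mailbox.
    total = 0
    while True:
        msg = await mailbox.get()
        if msg is None:      # sentinel: shut the actor down
            return total
        total += msg

async def main() -> int:
    mailbox: asyncio.Queue = asyncio.Queue()
    actor = asyncio.create_task(counter_actor(mailbox))
    for n in (1, 2, 3):
        await mailbox.put(n)
    await mailbox.put(None)
    return await actor

if __name__ == "__main__":
    print(asyncio.run(main()))  # 6
```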

sgarland ◴[] No.44010425[source]
Well don’t use Fargate, there’s your problem. Run programs on actual servers, not magical serverless bullshit.
tinix ◴[] No.44034432[source]
> Well don’t use Fargate, there’s your problem. Run programs on actual servers, not magical serverless bullshit.

That kind of absolutism misses the point of why serverless architectures like Fargate exist. It might feel satisfying, but it closes the door on understanding why stateless and ephemeral workloads exist in the first place.

I get the frustration, but dismissing a production architecture outright ignores the constraints and trade-offs that lead teams to adopt it. It's worth asking: if so many teams are using this shit in production, at scale, with real stakes, what do they know that might be missing from my mental model?

Serverless, like any abstraction, isn't magic. It's a tool with defined trade-offs, and resource/process isolation is one of them. If you're running containerized workloads at scale, optimizing for organizational velocity, security boundaries, and multi-tenant isolation, these constraints aren't bullshit, they're literally design parameters and intentional boundaries.

It's easy to throw shade from a distance, but the operational reality of running modern systems, especially in regulated or high-scale environments, looks very different from a home lab or startup sandbox.