
169 points hunvreus | 1 comment
pragma_x ◴[] No.43654222[source]
I'm starting to see a pattern here. This describes a technology that rapidly deploys "VM" instances in the cloud which support things like Lambda and single-process containers. At what point do we scale this all back to a more rudimentary OS that provides security and process management across multiple physical machines? Or is there already a Linux distro that does this?

I ask because watching cloud providers like AWS slowly reinvent mainframes just seems like the painful way around.

1. zer00eyz ◴[] No.43655058[source]
> I ask because watching cloud providers like AWS slowly reinvent mainframes just seems like the painful way around.

When AWS was the hot new thing in town, a server came in at 12 cores / 24 threads.

A modern AMD machine tops out at 700+ threads with 400 Gb QSFP interconnects. Go back to 2000 and the dotcom boom, and that's a whole mid-sized company's worth of compute, in 2U of rack space.

Finding single applications that can leverage all that horsepower is going to be a challenge... and that's before you layer in the lift for redundancy.
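To make the scale gap concrete: most software never even asks how many hardware threads it has, let alone saturates them. A minimal Python sketch (the function name and workload here are hypothetical, just an embarrassingly parallel stand-in) of fanning work out across every thread the OS reports:

```python
import os
from concurrent.futures import ProcessPoolExecutor


def busy_square(n: int) -> int:
    # Stand-in for real CPU-bound work.
    return n * n


if __name__ == "__main__":
    # 12-24 on a circa-2008 server; 700+ on a top-end dual-socket AMD box.
    threads = os.cpu_count() or 1
    with ProcessPoolExecutor(max_workers=threads) as pool:
        results = list(pool.map(busy_square, range(threads)))
    print(f"saturated {threads} workers")
```

The point of the sketch is the asymmetry: the fan-out code is trivial, but finding a single application whose actual work decomposes into 700 useful pieces is the hard part.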

Strip away all the bloat, all the fine examples of Conway's law that organizations drag around (or inherit from other orgs), and compute is at a place where it's effectively free... with the real limits/costs being power and data (and both are driven by density).