I ask because watching cloud providers like AWS slowly reinvent mainframes just seems like the painful way around.
When AWS was the hot new thing in town, a server was coming in at 12/24 threads.
A modern AMD machine tops out at 700+ threads and 400Gb QSFP interconnects. Go back to 2000 and the dot-com boom, and that's a whole mid-sized company in a 2U rack.
Finding single applications that can leverage all that horsepower is going to be a challenge... and that's before you layer in the lift for redundancy.
Strip away all the bloat, all the fine examples of Conway's law that organizations drag around (or inherit from other orgs), and compute is at a place where it's effectively free... with the real limits/costs being power and data (and both are driven by density).