I ask because watching cloud providers like AWS slowly reinvent mainframes just seems like the painful way around.
FreeBSD has had jails for a long time, which let you achieve this kind of isolation on a system, or at least something close to it.
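The primitive itself is simple enough. A rough sketch of standing one up with nothing but jail(8), assuming a populated root already exists at /jails/www (the name, hostname, and address are made up for illustration):

    # run as root; /jails/www holds an extracted base system
    jail -c name=www path=/jails/www host.hostname=www.example.org \
         ip4.addr=192.0.2.10 command=/bin/sh /etc/rc
    # drop into the jail and install something
    jexec www pkg install -y apache24

The gap is everything around that, not the isolation itself.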
They are also missing an ergonomic tool like Dockerfiles. The following file, plus a CLI tool for "run N copies on my M machines", should be enough to run BSD in prod, and no such tooling exists:

    FROM openbsd:latest
    RUN pkg_add apache
    RUN echo "apache=enabled" >> /etc/rc.defaults
    COPY public_html /var/www/
    CMD init
I don’t think writing the tooling would be that difficult, but it was missing the last time I looked.
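The orchestration half can be sketched in a few lines of sh over ssh. Hostnames, paths, and the jail layout below are invented for illustration; a real tool would also need per-copy roots (ZFS clones or nullfs mounts), networking, and health checks:

    #!/bin/sh
    # naive sketch of "run N copies on my M machines"
    IMAGE=webapp
    COPIES=4
    HOSTS="web1 web2 web3"

    for host in $HOSTS; do
        # ship the prebuilt root filesystem to the host once
        tar -C "/jails/$IMAGE" -cf - . |
            ssh "root@$host" "mkdir -p /jails/$IMAGE && tar -xf - -C /jails/$IMAGE"
        i=1
        while [ "$i" -le "$COPIES" ]; do
            # in practice each copy would get its own cloned root, not a shared path
            ssh "root@$host" "jail -c name=${IMAGE}_$i path=/jails/$IMAGE host.hostname=$IMAGE-$i command=/bin/sh /etc/rc"
            i=$((i + 1))
        done
    done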
If there's any difference now versus the past, it's that pretty much every point on the wheel is readily available today. If you want a more "rudimentary OS", you don't need to wait for the next turning of the wheel; it's here now. Need full VMs? Still a practical technology. Containers enough? Actively in development and use. Want to mix and match? Any sensible combination is doable now. And so on.
When AWS was the hot new thing in town, a server came in at 12/24 threads.
A modern AMD machine tops out at 700+ threads and 400Gb QSFP interconnects. Go back to 2000 and the dot-com boom and that's a whole mid-sized company, in a 2U rack.
Finding single applications that can leverage all that horsepower is going to be a challenge... and that's before you layer in the lift for redundancy.
Strip away all the bloat, all the fine examples of Conway's law that organizations drag around (or inherit from other orgs), and compute is at a place where it's effectively free... with the real limits/costs being power and data (and these are driven by density).
https://en.wikipedia.org/wiki/Kerrighed
https://sourceforge.net/projects/kerrighed/