I'm not saying everybody should do this. There are of course a lot of services that can't afford even a minute of downtime. But there are also a lot of companies that would benefit from a simpler setup.
I'm not a better engineer, I just have drastically fewer failure modes.
I think you misread OP. "Single point of failure" doesn't mean the only failure modes are hardware failures. It means that if anything happens to your node, whether it's a hardware failure, a power outage, someone tripping over your power/network cable, or even a single service crashing, you have a major outage on your hands.
These types of outages are trivially avoided with a basic understanding of well-architected frameworks, which explicitly address the risk represented by single points of failure.
Sigh.
Either way, stuff happens. Figuring out your actual requirements around uptime, time to response, and time to resolution is important before you build a nine-nines solution when eight eights would be sufficient. :p
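For anyone who hasn't done the napkin math lately, here's a rough back-of-the-envelope sketch (Go, purely illustrative) of what those availability targets translate to in downtime budget per year:

    package main

    import "fmt"

    func main() {
        // Downtime budget per year for a few availability targets.
        // 365.25 days/year is close enough for a napkin estimate.
        const secondsPerYear = 365.25 * 24 * 3600
        targets := []float64{0.99, 0.999, 0.9999, 0.99999, 0.999999999}
        for _, a := range targets {
            fmt.Printf("%11.7f%% availability -> %10.2f seconds of downtime per year\n",
                a*100, (1-a)*secondsPerYear)
        }
    }

Five nines is already only about five minutes a year; nine nines is roughly 30 milliseconds, which is well past the point where failover time, not hardware, is your bottleneck.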
In all those years, I’ve had precisely one actual hardware failure: a PSU went out. The PSUs are redundant, so nothing happened, and I just replaced it.
Servers are remarkably resilient.
EDIT: 100% uptime modulo power failures. I have a rack UPS and a generator, but I once discovered the hard way that the UPS batteries couldn’t hold a charge long enough to keep the rack up while I brought the generator online.
Today’s systems don’t fail nearly as often if you use high-quality parts and don’t beat the absolute hell out of your SSDs. Another trick is to overprovision the SSDs so wear leveling works better and the overall write load on the flash is reduced.
Do that and a typical box will run years and years with no issues.
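To make the overprovisioning point concrete, here's a hypothetical endurance estimate (Go; the TBW rating, write load, and write-amplification figures are illustrative guesses, not measurements). Extra spare area gives the controller more room for wear leveling and garbage collection, which lowers write amplification and stretches the drive's rated endurance:

    package main

    import "fmt"

    // yearsOfLife estimates how long a drive's rated endurance lasts given a
    // host write load and an assumed write-amplification factor.
    func yearsOfLife(ratedTBW, hostTBPerDay, writeAmplification float64) float64 {
        return ratedTBW / (hostTBPerDay * writeAmplification) / 365
    }

    func main() {
        const ratedTBW = 1200.0 // hypothetical rated endurance, TB written
        const hostWrites = 0.5  // hypothetical host write load, TB/day
        // Write-amplification values below are illustrative guesses only.
        fmt.Printf("stock spare area    (WA ~3.0): %.1f years\n", yearsOfLife(ratedTBW, hostWrites, 3.0))
        fmt.Printf("20%% overprovisioned (WA ~1.5): %.1f years\n", yearsOfLife(ratedTBW, hostWrites, 1.5))
    }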
We had a rack in data center, and we wanted to put local UPS on critical machines in the rack.
But the data center went on and on about their awesome power grid (shared with a fire station, so no administrative power loss), on-site generators, etc., and wouldn't let us.
Sure enough, one day the entire rack went dark.
It was the power strip on the data center's rack that failed. All the backup grids in the world can't get through a dead power strip.
(FYI, a family member lost their home due to a power strip, so, again anecdotally: if you have any older power strips (5-7+ years) sitting under your desk at home, you may want to consider swapping them out for new ones.)
You're not getting the point. The point is that if you use a single node to host your whole web app, you are creating a system where many failure modes that otherwise wouldn't even be an issue can easily trigger high-severity outages.
> and even if, you could just run a provisioned secondary server (...)
Congratulations, you are no longer using "one big server", thus defeating the whole purpose of this approach and learning the lesson that everyone doing cloud engineering work is already well aware of.
Are you serious? Have you ever built/operated/wired rack scale equipment? You think the power cables for your "short" server (vs the longer one being put in) are just hanging out in the back of the rack?
Rack wiring has been done and done correctly for ages. Power cables on one side (if possible), data and other cables on the other side. These are all routed vertically and horizontally, so they land only on YOUR server.
You could put a Mercedes Maybach above/below your server and nothing would happen.
References to "elastic Kubernetes whatever" are a red herring. You can have a dead simple load balancer spreading traffic across multiple bare-metal nodes.
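To illustrate how little machinery that actually takes, here's a minimal round-robin reverse proxy sketch in Go (the backend addresses are placeholders; health checks and error handling are deliberately left out):

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "sync/atomic"
    )

    func main() {
        // Placeholder addresses for two bare-metal backends.
        backends := []string{"http://10.0.0.11:8080", "http://10.0.0.12:8080"}

        var proxies []*httputil.ReverseProxy
        for _, b := range backends {
            u, err := url.Parse(b)
            if err != nil {
                log.Fatal(err)
            }
            proxies = append(proxies, httputil.NewSingleHostReverseProxy(u))
        }

        // Round-robin: each incoming request goes to the next backend in the list.
        var next uint64
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
            proxies[i].ServeHTTP(w, r)
        })
        log.Fatal(http.ListenAndServe(":8080", handler))
    }

In practice you'd add health checks and run the balancer itself redundantly (e.g. behind a floating IP), but none of that requires Kubernetes.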
The number of production incidents on our corporate mishmash of lambda, ecs, rds, fargate, ec2, eks, etc.? It’s a good week when something doesn’t go wrong. Somehow the logging setup is better on the personal stuff, too.
Is that more than, less than, or about the same as having an AWS/Azure/GCP consultant?
What's the difference in labour per hour?
> the risk of having such single point of failure.
At the prices they charge, I can have two hot failovers in two other datacenters and still come out ahead.
Re: power strips, thanks for the reminder. I’m usually diligent about that, but forgot about one my wife uses. Replacement coming today.
We were their largest customer and they seemed honest even when they made mistakes that seemed silly, so we rolled our eyes and moved on with life.
Managed hosting means accepting that you can't inspect the racks and chide people for not cabling to your satisfaction. And mistakes by the managed host will impact your availability.
Firing a host where you've got thousands of servers is easier said than done. We did do a quote exercise with another provider that could have supported us, and it didn't end up very competitive ... and it wouldn't have been worth the transition. Overall, there were some derpy moments, but I don't think we would have been happier anywhere else, and we didn't want to rent cages and run our own servers.