Use One Big Server (2022)

(specbranch.com)
343 points by antov825 | 13 comments
talles No.45085392
Don't forget the cost of managing your one big server and the risk of having such a single point of failure.
Puts No.45085534
My experience after 20 years in the hosting industry is that customers in general have more downtime due to self-inflicted, over-engineered replication or split-brain errors than due to actual hardware failures. One server is the simplest and most reliable setup, and if you have backups and automated provisioning you can re-deploy your entire environment in less time than it takes to debug a complex multi-server setup.

I'm not saying everybody should do this. There are of course a lot of services that can't afford even a minute of downtime. But there are also a lot of companies that would benefit from a simpler setup.

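A minimal sketch of the re-deploy flow Puts describes, assuming a freshly provisioned replacement host reachable over SSH, an offsite backup that can be pulled with rsync, and a systemd-managed app; every hostname, path, and service name below is a hypothetical placeholder:

    #!/usr/bin/env python3
    """Sketch: rebuild "one big server" from backup onto a fresh host.
    Hosts, paths, and the service name are hypothetical placeholders."""
    import subprocess

    NEW_HOST = "root@203.0.113.10"                       # replacement server
    BACKUP_SRC = "backup@203.0.113.20:/backups/latest/"  # offsite backup
    APP_DIR = "/srv/myapp/"
    SERVICE = "myapp"

    def run(args):
        """Run one command, echoing it first; abort the script on failure."""
        print("+", " ".join(args))
        subprocess.run(args, check=True)

    # 1. Pull the latest backup onto the new machine
    #    (the new machine needs SSH access to the backup host).
    run(["ssh", NEW_HOST, "rsync", "-a", BACKUP_SRC, APP_DIR])
    # 2. Enable and start the app under systemd.
    run(["ssh", NEW_HOST, "systemctl", "enable", "--now", SERVICE])
    # 3. Repoint DNS or a floating IP at NEW_HOST (provider-specific, omitted here).

The recovery being a couple of scripted, repeatable steps is what makes the "restore faster than you could debug replication" argument plausible.
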
1. motorest No.45085628
> My experience after 20 years in the hosting industry is that customers in general have more downtime due to self-inflicted, over-engineered replication or split-brain errors than due to actual hardware failures.

I think you misread OP. "Single point of failure" doesn't mean the only failure modes are hardware failures. It means that if anything happens to your node, whether it's a hardware failure, a power outage, someone stumbling over your power/network cable, or even a single service crashing, you have a major outage on your hands.

These types of outages are trivially avoided with a basic understanding of well-architected frameworks, which explicitly address the risk represented by single points of failure.

2. fogx No.45086005
Don't you think it's highly unlikely that someone will stumble over the power cable in a hosted datacenter like Hetzner? And even if they did, you could just run a provisioned secondary server that jumps in if the first becomes unavailable, and it would still be much cheaper.
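For what it's worth, a minimal sketch of the warm-standby setup fogx describes: the secondary polls the primary's health endpoint and promotes itself after a few consecutive failures. The URL, the thresholds, and the promotion step are hypothetical placeholders; a real deployment would more likely use keepalived or the hosting provider's floating-IP API than a hand-rolled loop:

    #!/usr/bin/env python3
    """Sketch: secondary server watching the primary and taking over on failure.
    Endpoint, thresholds, and the promotion step are hypothetical."""
    import time
    import urllib.request

    PRIMARY_HEALTH_URL = "http://primary.example.com/healthz"
    FAILURES_BEFORE_FAILOVER = 3
    CHECK_INTERVAL_SECONDS = 10

    def primary_is_healthy() -> bool:
        """True if the primary answers its health check with HTTP 200."""
        try:
            with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
                return resp.status == 200
        except OSError:  # covers refused connections, timeouts, HTTP errors
            return False

    def promote_secondary() -> None:
        """Placeholder: reassign the floating IP / update DNS, start the app."""
        print("primary unreachable, promoting secondary")

    failures = 0
    while True:
        failures = 0 if primary_is_healthy() else failures + 1
        if failures >= FAILURES_BEFORE_FAILOVER:
            promote_secondary()
            break
        time.sleep(CHECK_INTERVAL_SECONDS)
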
3. icedchai No.45086298
It's unlikely, but it happens. In the mid-2000s I had some servers at a colo. They were doing electrical work and took out power to a bunch of racks, including ours. Those environments are not static.
4. toast0 No.45086456
I don't know about Hetzner, but the failure case isn't usually tripping over power plugs. It's putting a longer server in the rack above/below yours and pushing the power plug out of the back of your server.

Either way, stuff happens. Figuring out what your actual requirements are around uptime, time to response, and time to resolution is important before you build a nine-nines solution when eight eights is sufficient. :p

5. motorest No.45089501
> Don't you think it's highly unlikely that someone will stumble over the power cable in a hosted datacenter like Hetzner?

You're not getting the point. The point is that if you use a single node to host your whole web app, you are creating a system where many failure modes, which otherwise would not even be an issue, can easily trigger high-severity outages.

> And even if they did, you could just run a provisioned secondary server (...)

Congratulations, you are no longer using "one big server", thus defeating the whole purpose behind this approach and learning the lesson that everyone doing cloud engineering work is already well aware of.

6. juped No.45090616{3}
Do you actually think dead-simple failover is comparable to elastic Kubernetes whatever?
7. kapone No.45090840{3}
> It's putting a longer server in the rack above/below yours and pushing the power plug out of the back of your server

Are you serious? Have you ever built/operated/wired rack-scale equipment? You think the power cables for your "short" server (vs the longer one being put in) are just hanging out in the back of the rack?

Rack wiring has been done, and done correctly, for ages. Power cables on one side (if possible), data and other cables on the other side. These are all routed vertically and horizontally, so they land only on YOUR server.

You could put a Mercedes Maybach above/below your server and nothing would happen.

8. motorest No.45091327{4}
> Do you actually think dead-simple failover is comparable to elastic Kubernetes whatever?

The reference to "elastic Kubernetes whatever" is a red herring. You can have a dead-simple load balancer spreading traffic across multiple bare-metal nodes.

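To make the "dead-simple load balancer" concrete: a sketch of a round-robin TCP proxy that spreads incoming connections across two backend nodes. The addresses and ports are hypothetical, and in practice you would reach for HAProxy or nginx rather than writing this yourself, but it shows how little machinery the multi-node setup needs:

    #!/usr/bin/env python3
    """Sketch: round-robin TCP proxy in front of two backends.
    Backend addresses and the listen port are hypothetical."""
    import asyncio
    import itertools

    BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # bare-metal nodes
    backend_cycle = itertools.cycle(BACKENDS)

    async def pipe(reader, writer):
        """Copy bytes one way until EOF, then close that writer."""
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle_client(client_reader, client_writer):
        """Pick the next backend round-robin and splice the two connections."""
        host, port = next(backend_cycle)
        backend_reader, backend_writer = await asyncio.open_connection(host, port)
        await asyncio.gather(pipe(client_reader, backend_writer),
                             pipe(backend_reader, client_writer))

    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", 8081)
        async with server:
            await server.serve_forever()

    asyncio.run(main())
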
9. juped No.45092123{5}
Thanks for switching sides to oppose yourself, I guess?
10. toast0 No.45094297{4}
Yes, I'm serious. My managed host took several of our machines offline when racking machines under/over ours. And they said it was because the new machines were longer and knocked out the power cables on ours.

We were their largest customer and they seemed honest even when they made mistakes that seemed silly, so we rolled our eyes and moved on with life.

Managed hosting means accepting that you can't inspect the racks and chide people for not cabling to your satisfaction. And mistakes by the managed host will impact your availability.

11. motorest No.45099782{6}
> Thanks for switching sides to oppose yourself, I guess?

I'm baffled by your comment. Are you sure you read what I wrote?

12. kapone No.45101233{5}
I hope that "managed host" got fired in a heartbeat and you moved elsewhere. Because they don't know WTF they're doing. As simple as that.
13. toast0 No.45105768{6}
We did eventually move elsewhere because of an acquisition. Of course, those guys didn't even bother to run LACP, so our systems would regularly go offline for a bit whenever someone wanted to update a switch. I was a lot happier at the host that sometimes bumped the power cables.

Firing a host where you've got thousands of servers is easier said than done. We did do a quote exercise with another provider that could have supported us, and it didn't end up very competitive ... and it wouldn't have been worth the transition. Overall, there were some derpy moments, but I don't think we would have been happier anywhere else, and we didn't want to rent cages and run our own servers.