
Use One Big Server (2022)

(specbranch.com)
343 points by antov825
runako ◴[] No.45085915[source]
One of the more detrimental aspects of the Cloud Tax is that it constrains the types of solutions engineers even consider.

Picking an arbitrary price point of $200/mo, you can get 4(!) vCPUs and 16GB of RAM at AWS. Architectures differ, but that is roughly a mid-spec dev laptop from five or so years ago.

At Hetzner, you can rent a machine with 48 cores and 128GB of RAM for the same money. It's hard to overstate how far apart these machines are in raw computational capacity.

There are approaches to problems that make sense with 10x the capacity that don't make sense on the much smaller node. Critically, those approaches can sometimes save engineering time that would otherwise go into building a more complex system to work around artificial constraints.

Yes, there are other factors, durability among them, that need to be designed for. But going the other way, dedicated boxes can deliver more consistent performance, with no noisy-neighbor worries.

replies(11): >>45086252 #>>45086272 #>>45086760 #>>45087388 #>>45088476 #>>45089414 #>>45091154 #>>45091413 #>>45092146 #>>45092305 #>>45095302 #
shrubble ◴[] No.45086760[source]
It's more than that - it's all the latency that you can remove from the equation with your bare-metal server.

No network latency between nodes, less memory-bandwidth latency and contention than you get in VMs, and no separate caching tier: you can just tell Postgres to use gigs of RAM and let Linux's disk caching take care of the rest.
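
To make that concrete: on a dedicated box the tuning is a few lines of postgresql.conf. (The values below are illustrative assumptions for a 128GB machine, not figures from this thread.)

    # postgresql.conf -- illustrative settings for a dedicated 128GB machine
    shared_buffers = 32GB          # Postgres-managed buffer pool (~25% of RAM)
    effective_cache_size = 96GB    # planner hint: expected size of the OS page cache
    work_mem = 256MB               # per-sort / per-hash-table allocation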

replies(1): >>45086889 #
matt-p ◴[] No.45086889[source]
The difference between a fairly expensive (~$300/mo) RDS instance plus an EC2 box in the same region and a $90 dedicated server with an NVMe drive running Postgres in a container is absolutely insane.
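
For what it's worth, "Postgres in a container" really is about one command — a sketch, with a placeholder password and data path:

    docker run -d --name pg \
      -e POSTGRES_PASSWORD=change-me \
      -v /srv/pgdata:/var/lib/postgresql/data \
      -p 127.0.0.1:5432:5432 \
      postgres:16
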
replies(2): >>45087248 #>>45088681 #
bspammer ◴[] No.45087248[source]
A fair comparison would include the cost of the DBA who will be responsible for backups, updates, monitoring, security and access control. That’s what RDS is actually competing with.
replies(9): >>45087378 #>>45087484 #>>45087756 #>>45088306 #>>45088314 #>>45090125 #>>45090795 #>>45091984 #>>45092441 #
shrubble ◴[] No.45087484[source]
Paying someone $2000 to set that up once should result in the costs being recovered in what, 18 months?

If you’re running Postgres locally you can turn off the TCP/IP part; nothing more to audit there.
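
Concretely, that's a single setting (a sketch; the socket directory varies by distro):

    # postgresql.conf -- no TCP listener, Unix socket only
    listen_addresses = ''
    # clients connect via the socket, e.g.: psql -h /var/run/postgresql mydb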

SSH-based copying of backups to a remote server is simple.
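
A minimal version (hostnames and paths below are placeholders) is a nightly cron job along these lines:

    # stream a compressed dump straight to the backup host
    pg_dump -Fc mydb | ssh backup@backup-host \
        "cat > /backups/mydb-$(date +%F).dump"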

If it isn't accessible over the network, you can stay on whatever version of Postgres you want.

I’ve heard these arguments since AWS launched, and all that time I’ve been running Postgres (since 2004, actually) and have never encountered these phantom issues that are claimed to be expensive or extremely difficult.

replies(2): >>45089151 #>>45092739 #
applied_heat ◴[] No.45089151[source]
$2k? That’s a $100k project at a medium-size corp.
replies(2): >>45090118 #>>45091859 #
sysguest ◴[] No.45090118[source]
hmm where did you get the numbers?

(what's a "medium-size corp", and how did you come up with $100k?)

replies(1): >>45091127 #
Aeolun ◴[] No.45091127[source]
I’m assuming he’s talking about the corporate team of DBAs that will spend weeks discussing the best way to copy a bunch of SQL files to S3.