
242 points panrobo | 3 comments
kuon ◴[] No.42055373[source]
You can have a 100 Gbps uplink on dedicated fibre for less than $1,000/month now, which is vastly cheaper than cloud bandwidth. Of course there are tons of other costs, but that alone can be enough to justify moving out of the cloud for a bandwidth-intensive app.
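The gap is easy to check with back-of-envelope arithmetic. The sketch below assumes an illustrative metered egress rate of $0.05/GB and 50% average link utilization; only the $1,000/month flat rate comes from the comment above.

```python
# Flat-rate 100 Gbps dedicated link vs. metered cloud egress.
# Rates and utilization are illustrative assumptions, not quotes.

GBPS = 100                        # link capacity in gigabits/second
SECONDS_PER_MONTH = 30 * 24 * 3600
UTILIZATION = 0.5                 # assume the link averages 50% busy

# bits -> bytes (divide by 8), then scale to a month of traffic
gb_transferred = GBPS / 8 * SECONDS_PER_MONTH * UTILIZATION  # GB/month

cloud_rate = 0.05                 # $/GB, an assumed published egress tier
dedicated_cost = 1000             # $/month flat, per the comment above

cloud_cost = gb_transferred * cloud_rate
print(f"Transferred: {gb_transferred / 1e6:.1f} PB/month")
print(f"Cloud egress: ${cloud_cost:,.0f}/month vs dedicated: ${dedicated_cost:,}/month")
```

Even at half utilization the metered bill comes out orders of magnitude above the flat-rate link, which is the whole argument.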
replies(4): >>42055545 #>>42055719 #>>42056308 #>>42056610 #
Salgat ◴[] No.42055545[source]
We went to the cloud because 1) we only need 3 infra guys to run our entire platform, and 2) we can trivially scale up or down as needed. The first saves us hundreds of thousands of dollars in skilled labor, and the second lets us take on new customers with thousands of agents in a matter of days, without having to provision in advance.
replies(1): >>42055706 #
packetlost ◴[] No.42055706[source]
1) You may more than pay for that labor in cloud costs. You can also operate rented dedicated hardware pretty easily with a 3-person team if they know how to do it; the tools to scale are there, they're just different.

2) I don't know what your setup looks like, but renting a dedicated server from Hetzner takes a few minutes, maybe hours at most.

My personal opinion is that most workloads that sit behind a load balancer anyway would be best suited to a mix of dedicated/owned infrastructure for baseline operation and dynamic scaling out to a cloud for bursts. The downsides of that approach are that it requires all of skillset A (systems administration, devops) plus some of skillset B (public cloud), and the networking constraints can be challenging depending on how state is managed.
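The baseline-plus-burst split described above boils down to a simple capacity decision. A minimal sketch, where the capacity numbers and function names are illustrative assumptions rather than any real provisioning API:

```python
# Sketch of the burst-to-cloud decision: serve steady load from
# owned/dedicated machines, spill the excess onto cloud instances.
# All capacity figures here are made-up illustrative numbers.

BASELINE_CAPACITY_RPS = 50_000   # what the owned hardware handles comfortably

def plan_capacity(expected_rps: int, cloud_instance_rps: int = 5_000) -> dict:
    """Return how much load stays on the baseline fleet and how many
    cloud instances to spin up for the overflow."""
    on_baseline = min(expected_rps, BASELINE_CAPACITY_RPS)
    overflow = max(0, expected_rps - BASELINE_CAPACITY_RPS)
    # Ceiling division: a partial instance still costs a whole instance.
    cloud_instances = -(-overflow // cloud_instance_rps)
    return {"baseline_rps": on_baseline, "cloud_instances": cloud_instances}

print(plan_capacity(40_000))   # fits entirely on owned hardware
print(plan_capacity(72_000))   # 22k rps overflow bursts to the cloud
```

The networking caveat in the comment is the real cost: the cloud instances still need a route back to state held on the baseline side (VPN, private interconnect, or replicated data), which is where the skillset-B requirement bites.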

replies(2): >>42056937 #>>42057472 #
1. merb ◴[] No.42056937[source]
With 3 people it's basically impossible to build an HA storage solution that scales past a certain point, and equally impossible to keep it maintained.
replies(1): >>42057111 #
2. m1keil ◴[] No.42057111[source]
Can you give a ballpark figure of what scale you have in mind?
replies(1): >>42073004 #
3. packetlost ◴[] No.42073004[source]
Some distributed databases and file systems are notoriously finicky to operate; Ceph comes to mind in particular. Choice of technology and architecture matters a lot here. Content-addressed storage using something like MinIO with erasure coding should scale pretty easily and could be maintained by a small ops team. I personally know a couple of people who were effectively solo operators of 100PB Elasticsearch clusters, but I'd say they're more than a bit above average skill level, and they actively despised Elasticsearch (and Java) coming out of it.
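The erasure-coding trade-off behind that suggestion is simple arithmetic: with k data shards and m parity shards (Reed-Solomon style, as MinIO uses), you tolerate m drive failures while keeping k/(k+m) of the raw capacity usable. The drive counts below are illustrative, not a recommendation:

```python
# Rough math for an erasure-coded object store (Reed-Solomon style,
# k data + m parity shards). Drive counts are illustrative assumptions.

def ec_profile(total_drives: int, parity: int) -> dict:
    """Usable fraction and failure tolerance for (total - parity) data
    shards plus `parity` parity shards."""
    data = total_drives - parity
    return {
        "usable_fraction": data / total_drives,
        "drive_failures_tolerated": parity,
        "raw_per_usable": total_drives / data,
    }

# 16 drives with 4 parity shards: 75% usable capacity and any 4 drives
# can fail, vs. 3x replication's ~33% usable for similar durability.
profile = ec_profile(16, 4)
print(profile)
```

That capacity efficiency, plus the fact that object storage is mostly stateless to operate compared to something like Ceph, is why a small team can plausibly keep it running.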