
1101 points by codesmash | 9 comments
1. rglover No.45140785
I may be the odd man out, but after getting unbelievably stressed out by containers, k8s, etc., I couldn't believe how zen just spinning up a new VPS and bootstrapping it with a bash script was. That, combined with systemd units, can get you relatively far without all of the (cognitive) overhead.

The best part? Whenever there's an "uh oh," you just SSH in to a box, patch it, and carry on about your business.
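For what it's worth, the kind of bootstrap this describes can be sketched in a few lines of bash. Everything here is a hypothetical example, not the parent's actual setup: the service name `myapp`, the install path `/opt/myapp`, and the Debian-style `apt-get` calls are all assumptions.

```shell
#!/usr/bin/env bash
# bootstrap.sh -- one-shot setup for a fresh VPS (hypothetical Debian/Ubuntu box).
set -euo pipefail

# Render a minimal systemd unit for an app. Kept as a function so it can be
# inspected (or tested) without touching the system.
render_unit() {
  local name="$1" dir="$2"
  cat <<EOF
[Unit]
Description=${name}
After=network.target

[Service]
WorkingDirectory=${dir}
ExecStart=${dir}/run.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
}

# Only mutate the machine when explicitly asked; a bare run is a dry run.
if [[ "${1:-}" == "--apply" ]]; then
  apt-get update -y
  apt-get install -y git curl
  render_unit myapp /opt/myapp > /etc/systemd/system/myapp.service
  systemctl daemon-reload
  systemctl enable --now myapp
else
  # Dry run: print the unit that would be installed.
  render_unit myapp /opt/myapp
fi
```

Run it bare to preview the unit, or as root with `--apply` to actually install. From there, "SSH in and patch it" is just editing the app or the unit and running `systemctl restart myapp`.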

replies(4): >>45140913 #>>45141099 #>>45141505 #>>45151444 #
2. TrainedMonkey No.45140913
Containers and container orchestrators are complex tools. The constant cost of using them is pretty high compared to bash scripts, but the scale/maintenance factor is significantly lower. So for 100 boxes, the simplicity of bash scripts might still win out over containers; at 1000 machines, it is highly likely that the simplest and lowest-maintenance solution overall will be an orchestrator.
replies(1): >>45141554 #
3. lotyrin No.45141099
Well... yeah? If you exist as an individual, or as part of a group that is integrated (shared trust, goals, knowledge, etc.), then yeah, obviously you do not have the problem (a tragedy of the commons) that splitting things up (into containers, or across literally any kind of boundary) solves for.

The container split is often introduced because you have product-a, product-b and infrastructure operations teams/individuals that all share responsibility for an OS user space (and therefore none are accountable for it). You instead structure things as: a host OS and container platform for which infra is responsible, and then product-a container(s) and product-b container(s) for which those teams are responsible.

These boundaries (between networks, machines, hosts and guests, namespaces, users, processes, modules, etc.) are placed when needed due to trust, or when useful due to knowledge sharing and goal alignment.

When they are present in single-user or small highly-integrated team environments, it's because they've been cargo-culted there, yes, but I've seen an equal number of environments where effective and correct boundaries were missing as I've seen ones where they were superfluous.

4. madeofpalk No.45141505
> you just SSH in to a box, patch it

Oh god. I can’t imagine how I could build reliable software if this is what I was doing. How do you know what “patches” are needed to run your software?

replies(1): >>45144582 #
5. rglover No.45141554
That's what I found out, though: the footprint doesn't matter. I did have to write a simple orchestration system, but it's literally just me provisioning a VPS, bootstrapping it with deps, and pulling the code/installing its dependencies. Short of service or hardware limits, this can work for an unlimited number of servers.

I get why most people think they need containers, but it really seems suited only for hyper-complex deployments (ironically, Google) with thousands of developers pushing code simultaneously.
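At its smallest, a "simple orchestration system" like this can be little more than a hosts file and a loop that runs the same clone-or-pull step over SSH. A hedged sketch follows; the repo URL, the paths, and the `servers.txt` layout are all assumptions for illustration:

```shell
#!/usr/bin/env bash
# deploy.sh -- run the same bootstrap/update step on every host in a list.
set -euo pipefail

# Build the remote command for one host. A separate function so the exact
# command can be printed or tested before any SSH happens.
remote_cmd() {
  local repo="$1" dir="$2"
  printf 'git clone %q %q 2>/dev/null || git -C %q pull' "$repo" "$dir" "$dir"
}

# Fan the step out to every host listed in the file, one per line.
deploy_all() {
  local hosts_file="$1" host
  while read -r host; do
    [[ -z "$host" || "$host" == \#* ]] && continue   # skip blanks and comments
    echo "==> $host"
    ssh "root@${host}" "$(remote_cmd 'https://example.com/app.git' '/opt/app')"
  done < "$hosts_file"
}
```

`deploy_all servers.txt` then pushes the same step to each box in turn; swapping the body of `remote_cmd` is the whole "ship new code" story. It is also roughly the point where tools in the Ansible family start to pay for themselves, since they add retries, inventories, and idempotent modules on top of exactly this loop.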

replies(2): >>45142579 #>>45163698 #
6. chickensong No.45142579
> it really seems only suited for hyper-complex (ironically, Google) deployments with thousands of developers pushing code simultaneously

There are many benefits to be had for individuals and small companies as well. The peace of mind that comes with immutable architecture is incredible.

While it's true that you can often get quite far with the old cowboy ways, particularly for competent solo devs or small teams, there's a point where it starts to unravel, and you don't need to be a hyper-complex mega-corp to see it happen. Once you stray from the happy path or have common business requirements related to operations and security, the old ways become a liability.

There are reasons ops people will accept the extra layers and complexity to enable container-based architecture. They're not thrilled to add more infrastructure, but it's better than the alternative.

7. rglover No.45144582
A staging server?
8. sroerick No.45151444
I couldn't agree more.

It's really not that hard; folks are just trading Linux knowledge for CI/CD knowledge.

It's React, but for DevOps.

9. mixmastamyk No.45163698
Sounds like you reinvented Ansible?