
1101 points codesmash | 1 comment
rglover ◴[] No.45140785[source]
I may be the odd man out, but after getting unbelievably stressed out by containers, k8s, etc., I couldn't believe how zen it was to just spin up a new VPS and bootstrap it with a bash script. That, combined with systemd units, can get you relatively far without all of the (cognitive) overhead.

The best part? Whenever there's an "uh oh," you just SSH in to a box, patch it, and carry on about your business.
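The bootstrap-script-plus-systemd approach might look something like this minimal sketch. Everything here is invented for illustration: the app name `myapp`, its binary path, and the port are hypothetical, and a real bootstrap would also install the binary, create the user, etc.

```shell
#!/usr/bin/env bash
# Hypothetical VPS bootstrap sketch: writes a systemd unit for an
# imaginary service called "myapp". On a real host you'd run this as
# root with PREFIX=/ ; the default temp-dir prefix lets you dry-run it.
set -euo pipefail

PREFIX="${PREFIX:-$(mktemp -d)}"   # set PREFIX=/ on a real host
UNIT_DIR="$PREFIX/etc/systemd/system"
mkdir -p "$UNIT_DIR"

# Write the unit file. Paths and options below are illustrative.
cat > "$UNIT_DIR/myapp.service" <<'EOF'
[Unit]
Description=myapp (hypothetical example service)
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --port 8080
Restart=on-failure
User=myapp
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target
EOF

# On the real host you would then run:
#   systemctl daemon-reload && systemctl enable --now myapp
echo "wrote $UNIT_DIR/myapp.service"
```

The "uh oh" workflow then really is just `ssh box`, edit, `systemctl restart myapp`, and `journalctl -u myapp` to check the logs.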

replies(4): >>45140913 #>>45141099 #>>45141505 #>>45151444 #
1. lotyrin ◴[] No.45141099[source]
Well... yeah? If you operate as an individual, or as part of a group that is integrated (shared trust, goals, knowledge, etc.), then obviously you don't have the problem (a tragedy of the commons) that splitting things up (into containers, or behind literally any kind of boundary) solves for.

The container split is often introduced because you have product-a, product-b, and infrastructure operations teams/individuals that all share responsibility for one OS user space (and therefore none are accountable for it). Instead, you structure things as a host OS and container platform for which infra is responsible, plus product-a container(s) and product-b container(s) for which those teams are responsible.
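That ownership split can be made concrete in something like the following hypothetical `docker-compose.yml` (service names, registry, and image tags are all made up): infra owns the host, the container runtime, and this file's network policy; each product team owns only its own image.

```yaml
# Hypothetical example: infra maintains this file and the host;
# each team publishes and is accountable for its own image.
services:
  product-a:
    image: registry.example.com/team-a/product-a:1.4.2   # team A's responsibility
    restart: unless-stopped
    networks: [edge]
  product-b:
    image: registry.example.com/team-b/product-b:2.0.0   # team B's responsibility
    restart: unless-stopped
    networks: [edge]

networks:
  edge: {}
```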

These boundaries (between networks, machines, hosts and guests, namespaces, users, processes, modules, etc.) are placed where needed due to trust, or where useful due to knowledge sharing and goal alignment.

When they are present in single-user or small, highly integrated team environments, it's because they've been cargo-culted there, yes. But I've seen as many environments where effective, correct boundaries were missing as I've seen ones where they were superfluous.