
188 points by psxuaw | 1 comment
whalesalad No.43536630

I notice FreeBSD admins tend to follow a 'pets not cattle' approach, carefully nurturing individual systems. Linux admins like myself typically prefer the 'cattle not pets' mindset: use infrastructure-as-code so that if a server dies, no problem, just spin up another one. Leverage containers. Stay stateless.
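To make the mindset concrete, here's a minimal reconcile-and-replace sketch, assuming a local Docker daemon; the service name, image, and port are placeholders rather than anything from a real setup:

```python
# "Cattle" in miniature: if the service is gone, recreate it from the
# declarative spec -- never repair the running instance in place.
import subprocess

SPEC = {
    "name": "web",          # hypothetical service name
    "image": "nginx:1.27",  # any stateless image
    "ports": ["8080:80"],
}

def is_running(name: str) -> bool:
    # `docker ps -q -f name=...` prints IDs of matching running containers
    out = subprocess.run(
        ["docker", "ps", "-q", "-f", f"name=^{name}$"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return bool(out)

def recreate(spec: dict) -> None:
    # remove any dead remnant first; ignore "no such container" errors
    subprocess.run(["docker", "rm", "-f", spec["name"]], capture_output=True)
    cmd = ["docker", "run", "-d", "--name", spec["name"]]
    for mapping in spec["ports"]:
        cmd += ["-p", mapping]
    subprocess.run(cmd + [spec["image"]], check=True)

if __name__ == "__main__":
    if not is_running(SPEC["name"]):
        recreate(SPEC)
```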

I don't want to spend time meticulously configuring things beyond the core infrastructure my services run on. I should probably explore FreeBSD more, but honestly, with containers being everywhere now, I'm not seeing a compelling reason to bother. I realize jails are a valid analogue, but broadly speaking the UX is not the same.
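For what it's worth, the UX gap shows up even at the one-liner level. A rough comparison (the jail parameters, path, and address below are illustrative, and a real jail also needs a populated root filesystem before it will start):

```python
# Not a working setup -- just the shape of the two invocations.
import subprocess

docker_cmd = [
    "docker", "run", "-d", "--name", "web", "nginx:1.27",
]  # image distribution, filesystem, and lifecycle are all implied

jail_cmd = [
    "jail", "-c",
    "name=web",
    "path=/jails/web",            # you provision this root yourself
    "host.hostname=web.example",  # hypothetical hostname
    "ip4.addr=192.0.2.10",        # an address you allocate and configure
    "command=/usr/local/sbin/nginx",
]  # jail(8) takes key=value parameters; packaging is left to you

# subprocess.run(docker_cmd, check=True)
# subprocess.run(jail_cmd, check=True)
```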

All this being said, I have this romantic draw to FreeBSD and want to play around with it more. But every time I set up a basic box I feel teleported back to 2007.

Are there any fun lab projects, posts, or educational series targeted at FreeBSD?

Karrot_Kream No.43541053

Cattle vs pets always seemed like a silly distinction to me. Fundamentally, they're just different abstraction levels.

If you treat a server as a "pet", then you typically run multiple services through a service runner (systemd, runit, openrc, etc.) and do only a moderate amount of communication between servers. Here you treat the server as your scheduling substrate, upon which your units of compute (services) run. In a "cattle" system, each server is interchangeable and you run some isolated service, usually a container, on each of your servers. Here the unit of compute is a container and the compute substrate is the cluster: multiple servers.
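A toy sketch of the two substrates (purely illustrative; the hostnames, node names, and scheduling policy are made up, and no real orchestrator works this simply):

```python
# "Pets": the named host is the substrate and services are pinned to it.
# "Cattle": the cluster is the substrate and containers land on any node.
from dataclasses import dataclass, field

@dataclass
class PetServer:
    name: str                      # the host itself is the identity
    services: list = field(default_factory=list)

    def enable(self, service: str) -> None:
        # roughly `systemctl enable --now <service>` on this one box
        self.services.append(service)

@dataclass
class CattleCluster:
    nodes: list                    # interchangeable; identity is irrelevant
    placements: dict = field(default_factory=dict)

    def schedule(self, container: str) -> str:
        # place on the least-loaded node; any node is as good as another
        def load(n):
            return sum(1 for v in self.placements.values() if v == n)
        node = min(self.nodes, key=load)
        self.placements[container] = node
        return node

box = PetServer("db01.example")    # hypothetical pet
box.enable("postgresql")

cluster = CattleCluster(nodes=["n1", "n2", "n3"])
cluster.schedule("web")            # lands on some node -- we don't care which
cluster.schedule("worker")
```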

In a "pets" system managing many servers is fraught and in a "cattle" system managing many clusters is fraught. So it's simply the abstraction level you want to work at. If you're at the level where your workload fits easily on a single server, or maybe a couple servers, then a pets-like system works fine. If you see the need to horizontally scale then a cattle-like system is what you need. There's a practical problem right now in that containerization ecosystems are usually heavy enough that it takes advanced tools (like bubblewrap or firejail) to isolate your services on a single service which offers the edge to cattle-like systems.

In my experience, many smaller services with non-critical uptime requirements can run just fine on a single server, maybe just moving the data store externally so that failure of the service and failure of the data store are independent.
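As a sketch of that shape (the env var, DSN, and retry policy are illustrative, and write_to_store is a hypothetical stand-in for a real DB client call):

```python
# The service keeps no local state; everything durable lives in an
# externally hosted store, so losing the box loses nothing.
import os
import time

DSN = os.environ.get("DATABASE_URL", "postgres://db.internal/app")

def write_to_store(dsn: str, payload: str) -> None:
    ...  # real code would use e.g. psycopg or a queue client here

def handle_request(payload: str) -> None:
    for attempt in range(3):
        try:
            write_to_store(DSN, payload)
            return
        except ConnectionError:
            time.sleep(2 ** attempt)  # a store outage is not a service outage
    raise RuntimeError("data store unavailable; the service itself is fine")
```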