Cattle vs pets always seemed like a silly distinction to me. Fundamentally, it's about abstraction levels.
If you treat a server as a "pet", you typically run multiple services through a service runner (systemd, runit, openrc, etc.) and do only a moderate amount of communication between servers. Here the server is your scheduling substrate, upon which your units of compute, the services, run. In a "cattle" system, each server is interchangeable and you run some isolated service, usually a container, on each of your servers. Here the unit of compute is a container and the compute substrate is the cluster, i.e. multiple servers.
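To make the pets model concrete, here's a minimal sketch of what one such unit of compute looks like under systemd. The service name, user, and binary path are hypothetical placeholders, and a real unit would likely want more hardening:

```ini
# /etc/systemd/system/myapp.service (hypothetical name and paths)
[Unit]
Description=One of several services sharing this server
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --port 8080
User=myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Each service gets a unit like this, and systemd does the scheduling and restarting that a cluster orchestrator would do in a cattle system.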
In a "pets" system managing many servers is fraught and in a "cattle" system managing many clusters is fraught. So it's simply the abstraction level you want to work at. If you're at the level where your workload fits easily on a single server, or maybe a couple servers, then a pets-like system works fine. If you see the need to horizontally scale then a cattle-like system is what you need. There's a practical problem right now in that containerization ecosystems are usually heavy enough that it takes advanced tools (like bubblewrap or firejail) to isolate your services on a single service which offers the edge to cattle-like systems.
In my experience, many smaller services with non-critical uptime requirements run just fine on a single server, perhaps with the data store moved to an external host so that failure of the service and failure of the data store are independent.
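As a sketch of that split, assuming the hypothetical service above reads its database location from the environment (the variable name, host, and drop-in path are all assumptions):

```ini
# /etc/systemd/system/myapp.service.d/datastore.conf (hypothetical drop-in)
# The service keeps no durable state on this box; it talks to a database
# on a separate host, so the app server can die and be rebuilt cheaply.
[Service]
Environment=DATABASE_URL=postgres://myapp@db.internal:5432/myapp
```

The app server then holds nothing you can't rebuild, and the database host can be backed up and managed on its own schedule.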