But I just wanted to comment something similar. It's probably heavily dependent on how many services you self-host, but I have 6 services on my VPS and they are just simple podman containers that I run, some of them automatically, some of them manually. On top of that sits a very simple nginx configuration (mostly just subdomains with reverse proxy) and that's it. I don't think I need an extra container for my nginx (or is there a very good security reason? I have "nothing to hide" and/or lose, but still). My lazy brain thinks that as long as I keep nginx up to date with my package manager and keep certbot running, I'll be fine.
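For reference, a subdomain reverse-proxy block in that style is only a few lines. A rough sketch (the subdomain, upstream port, and certificate paths are made up; certbot normally manages the SSL lines for you):

    server {
        listen 443 ssl;
        server_name wiki.example.com;   # hypothetical subdomain

        # certbot-managed certificate paths (yours will differ)
        ssl_certificate     /etc/letsencrypt/live/wiki.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/wiki.example.com/privkey.pem;

        location / {
            # forward to the podman container published on a local port
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }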
I can see running something in a Docker container, and while I'd advise against containers that ship with EVERYTHING, I'd also advise against using Docker Compose to spin up an ungodly number of containers for every service.
You shouldn't be running multiple instances of PostgreSQL, or anything for that matter, at home. Find a service that can be installed with your operating system's package manager and set everything to auto-update.
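On a Debian-family system, "set everything to auto-update" is roughly the unattended-upgrades package plus two apt settings. A minimal sketch (assuming Debian/Ubuntu; other distros have their own equivalents):

    # install and enable automatic security updates
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # /etc/apt/apt.conf.d/20auto-upgrades should then contain:
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";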
Whatever you're self-hosting, be it for yourself, family or a few friends, there's absolutely nothing wrong with SQLite, files on disk or using the passwd file as your authentication backend.
If you are self-hosting Kubernetes to learn Kubernetes, then by all means go ahead and do so. For actual use, stay away from anything more complex than a Unix from around the year 2000.
"Premature clustering is the source of all evil" - or something like that.
Then they decided to port everything to K8s because of overblown internet drama and I lost all interest. A total shame that a great resource for Nix became yet another K8s fest.
Edit: anyone actually interested in such a post?
It's a shame, I agree, because it was nicely integrated with Docker's own tooling. Plus I wouldn't have had to learn about k8s :)
Docker lets my OS be boring (and lets me basically never touch it) while having up-to-date user-facing software. No “well, this new version fixes a bug that’s annoying me, but it’s not in Debian stable… do I risk a third-party backport repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?”
I just use shell scripts to launch the services, one script per service. Run the script once, forget about it until I want to upgrade it. Modify the version in the script, take the container down and destroy it (easily automated as part of the scripts, but I haven’t bothered), run the script. Done, forget about it again until next time.
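In that spirit, such a per-service script can be as small as this. A sketch with made-up names (the image, port, and volume are placeholders, not any specific service):

    #!/bin/sh
    # launch-someapp.sh -- one script per service; edit VERSION to upgrade
    set -eu

    VERSION="1.2.3"                      # bump this, then re-run the script
    NAME="someapp"                       # hypothetical service name

    # take the old container down and destroy it, then start the new one
    docker rm -f "$NAME" 2>/dev/null || true
    docker run -d \
      --name "$NAME" \
      --restart unless-stopped \
      -p 127.0.0.1:8080:8080 \
      -v someapp-data:/data \
      "ghcr.io/example/someapp:$VERSION"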
Almost all the commands I run on my server are basic file management, docker stuff, or zfs commands. I could switch distros entirely and hardly even notice. Truly a boring OS.
Yup, it's basically a "Docker Compose manager" that lets you group containers more easily, since the manifest file format is Docker Compose's with just one or two tiny differences.
If there's one thing I would like Docker Swarm to have, it's not having to worry about which node creates a volume; I just want the service to always be deployed with the same volume without having to think about it.
That's the one weakness I see for multi-node stacks, the thing that prevents it from being "Docker Compose but distributed". So that's probably the point where I'd recommend maybe taking a look at Kubernetes.
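For context, a Swarm stack file is near-identical Compose YAML; the Swarm-specific bits live under deploy:. The usual (imperfect) workaround for the volume problem is to pin the service to a labeled node so its local volume is always where the service runs. A rough sketch with placeholder names:

    version: "3.8"
    services:
      app:
        image: ghcr.io/example/someapp:1.0       # placeholder image
        volumes:
          - appdata:/data
        deploy:                                  # the main Swarm-specific section
          replicas: 1
          placement:
            constraints:
              - node.labels.storage == primary   # keep it on the node holding the volume
    volumes:
      appdata: {}

Deployed with something like "docker node update --label-add storage=primary <node>" followed by "docker stack deploy -c stack.yml mystack".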
Writing your first YAML or two seems scary and intimidating at first.
But after that, everything is cut from the same cloth. It's an escape from the long dark age of every sysadmin forever cooking up whatever ad-hoc setup served them at the time, an escape from each service having very different management practices around it.
And there's no other community anywhere like Kubernetes. There are unbelievably many high-quality, well-thought-out Helm charts out there, such as https://github.com/bitnami/charts/tree/main/bitnami, just ready to go. Really sweet home-ops setups like https://github.com/onedr0p/home-ops show that once you have a platform underfoot, adding more services is really easy, and they cover an amazing range of home-ops things you might be interested in.
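As an illustration, getting one of those Bitnami charts running is usually a couple of commands. A sketch (the release name and value override are made up, and the exact values keys depend on the chart version):

    # add the Bitnami repo and install a chart from it
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    helm install my-postgres bitnami/postgresql \
      --set auth.postgresPassword=changeme   # example override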
> Last thing I need is Kubernetes at home
The last thing we need is that incredibly shitty attitude. "Fuck around and find out" is the hacker spirit. It's actually not hard if you try, and having a base platform where things follow common patterns and practices, and where you can reuse existing skills and services, is kind of great. Everything is amazing, but the snivelling, shitty whining, without even making the tiniest case for your unkind, low-effort hating, will surely continue. Low-signal people will remain low signal; best avoided.
It's not uncommon when self-hosting services with Docker. It makes it easier to try out a new stack, and you can mix and match versions of PostgreSQL according to the needs of the software. It's also easier to remove if you decide you don't like the bit of software you're trying out.
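Concretely, running two PostgreSQL major versions side by side is just two containers on different host ports (the names and password here are placeholders):

    # app A keeps its older PostgreSQL, app B gets a newer one
    docker run -d --name pg15 -e POSTGRES_PASSWORD=secret \
      -v pg15-data:/var/lib/postgresql/data -p 127.0.0.1:5432:5432 postgres:15
    docker run -d --name pg16 -e POSTGRES_PASSWORD=secret \
      -v pg16-data:/var/lib/postgresql/data -p 127.0.0.1:5433:5432 postgres:16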
In those rare cases I usually just compile a newer deb package myself and let the package manager deal with it as usual. If there are too many dependencies to update or it's unusually complex, then it's container time indeed, but I haven't had to go there on my server so far.
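The rough flow for that on Debian, in case it's useful (the package name is just an example, and it assumes deb-src entries for a newer suite are enabled):

    # fetch the newer source package and rebuild it on stable
    sudo apt install build-essential devscripts
    apt source somepackage/testing        # example package and suite
    cd somepackage-*/
    sudo apt build-dep somepackage
    dpkg-buildpackage -us -uc -b
    sudo apt install ../somepackage_*.deb # tracked by the package manager like any other deb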
Not diverging from the distro packages lets me not worry about security updates; Debian handles that for me.
I think it's a great way to learn Kubernetes if you're interested in that.
After a few years you work out that, holy shit, we now have 15 people looking after everything instead of the previous 4, and the pods are getting a few hits an hour. Every HTTP request ends up costing $100, and then you wonder why the fuck your company is totally screwed financially.
But all the people who designed it have left for consultancy jobs with Kubernetes on their resumes, and now you've got an army of people left to juggle the YAML while the CEO hammers his fist on the table saying CUT COSTS. Well, you hired those feckin plums!
etc etc.
Lots of them are on here. People have no idea how to solve problems anymore; they just create new ones out of the old ones.