221 points finnlab | 6 comments
1. seba_dos1 ◴[] No.43545656[source]
"apt-get install" tends to be enough once you stop chasing latest-and-greatest and start to appreciate things just running with low maintenance more.
replies(3): >>43546157 #>>43546335 #>>43548554 #
2. skydhash ◴[] No.43546157[source]
I totally agree! Containers are nice when your installation is ephemeral, deploying and updating several times in a short period. But using the package manager is about as easy as it gets.
3. alabastervlog ◴[] No.43546335[source]
I only host 3rd party daemons (nothing custom) and only on my local network (plus Tailscale), so Docker’s great for handling package management and init: I get up-to-date versions of a far broader set of services than Debian’s or Ubuntu’s repos offer, clean isolation for easy management, and init/restarts all come free. Plus it naturally documents what I need to back up (any “mounted” directories).

Docker lets my OS be boring (and lets me basically never touch it) while having up-to-date user-facing software. No “well, this new version fixes a bug that’s annoying me, but it’s not in Debian stable… do I risk a 3rd-party backport repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?”

I just use shell scripts to launch the services, one script per service. Run the script once, forget about it until I want to upgrade it. Modify the version in the script, take the container down and destroy it (easily automated as part of the scripts, but I haven’t bothered), run the script. Done, forget about it again until next time.
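
Roughly, each script looks something like this (the service name, image, port, and paths here are just made-up examples):

    #!/bin/sh
    # one self-contained launch script per service; pin the version up top
    VERSION=1.21

    # destroy the old container (if any) before relaunching
    docker rm -f gitea 2>/dev/null || true

    # the -v mount is also what documents what needs backing up
    docker run -d \
      --name gitea \
      --restart unless-stopped \
      -p 3000:3000 \
      -v /srv/gitea:/data \
      gitea/gitea:"$VERSION"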

Almost all the commands I run on my server are basic file management, docker stuff, or zfs commands. I could switch distros entirely and hardly even notice. Truly a boring OS.

replies(1): >>43551876 #
4. ryandrake ◴[] No.43548554[source]
Same here. I've had the same setup for decades: A "homelab" server on my LAN for internal hobby projects and a $5 VPS for anything that requires public access. On either of these, I just install the software I need through the OS's package manager. If I need a web server, I install it. If I need ssh, I install it. If I need nfs, I install it. I've never seen any reason to jump into containers or orchestration or any of that complex infrastructure. I know there are a lot of people very excited about adding all of that stuff into the mix, but I've never had a use case that prompted me to even consider it!
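
Everything is a one-liner from the distro’s repos, e.g. on Debian/Ubuntu:

    sudo apt-get install nginx openssh-server nfs-kernel-server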
replies(1): >>43556121 #
5. seba_dos1 ◴[] No.43551876[source]
> do I risk a 3rd party back port repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?

In these rare cases I usually just compile a newer deb package myself and let the package manager deal with it as usual. If there are too many dependencies to update or it's unusually complex, then it's container time indeed, but I haven't had to go there on my server so far.
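
For the simple case, the rebuild is roughly this (assuming a deb-src line for a newer suite is configured; "somepkg" is just a placeholder):

    # fetch the newer source and its build dependencies
    apt-get source somepkg/unstable
    sudo apt-get build-dep somepkg

    # build an unsigned binary package and install it
    cd somepkg-*/
    dpkg-buildpackage -us -uc -b
    sudo apt install ../somepkg_*.deb

apt's normal upgrade machinery then tracks it like any other installed package.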

Not diverging from the distro packages lets me not worry about security updates; Debian handles that for me.

6. brulard ◴[] No.43556121[source]
I have very similar setup, with homelab and separate cheap VPS. In similar manner I have all the services installed directly on the OS but I'm starting to run into issues where I started to consider using docker. I run nginx with multiple (~10) node apps running through PM2. While this works ok-ish, I'm not happy that if for example one of my apps needs some packages installed, I need to do it for the whole server (ffmpeg, imageMagick, etc.). Other problem is that I can easily run into compatibility problems if I upgraded node.js for example. And if there was some vulnerability in some of the node_packages any of the projects use, the whole server is compromised. I think docker can be quite an easy solution to most of these problems.