
221 points finnlab | 6 comments
1. kuon ◴[] No.43545673[source]
If you self host, don't use containers and all that tooling.

Just use a static site generator like zola or hugo and rsync to a small VPS running caddy or nginx. If you need something dynamic, there are many frameworks with few dependencies that you can rsync too. Or use PHP, it's not that bad. If you run something like wordpress, just restrict all locations except the public ones to your IP in the nginx config and you should be fine.
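That nginx restriction could look something like this (a sketch only; the IP address, paths, and socket are placeholders, not from the comment):

```nginx
server {
    server_name example.org;
    root /var/www/wordpress;

    # Public content stays open to everyone
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Lock the admin area and login page to a single IP
    location ~ ^/(wp-admin|wp-login\.php) {
        allow 203.0.113.7;   # replace with your own IP
        deny  all;
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
    }
}
```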

If you have any critical stuff, create a zfs dataset and back it up to another VPS using zfs send; there are tools that make it easy, much easier than DB replication.
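A rough sketch of that zfs send workflow (the pool, dataset, and host names are made up; tools like sanoid/syncoid automate this same loop):

```sh
# One-time: snapshot the dataset and ship the full copy to the backup VPS
zfs snapshot tank/critical@base
zfs send tank/critical@base | ssh backup-vps zfs receive backup/critical

# Recurring: incremental sends transfer only what changed since @base
zfs snapshot tank/critical@today
zfs send -i @base tank/critical@today | ssh backup-vps zfs receive backup/critical
```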

replies(2): >>43545888 #>>43549393 #
2. Aachen ◴[] No.43545888[source]
What I'm reading is not to use containers for a web server, which makes sense because web servers have had vhosts since forever, and you can already host any number of independent sites on one.

But what about other services, like if you want a database server as well, a mail server, etc.?

I started using containers when I last upgraded hardware, and while it's not as beneficial as I had hoped, it's still an improvement. I can clone a container, do a test upgrade, and only then upgrade the original, and I can upgrade services one by one rather than committing to a huge project where the host OS and everything on it has to move to the new major version at once.
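With LXD, for example, that clone-then-rehearse flow can be sketched like this (container names are hypothetical; the same idea works with any container or VM manager that supports cloning):

```sh
lxc copy mailserver mailserver-test            # clone the existing service
lxc start mailserver-test
lxc exec mailserver-test -- apt full-upgrade   # rehearse the upgrade on the clone
# Everything still works? Apply the same upgrade to the original:
lxc exec mailserver -- apt full-upgrade
lxc delete --force mailserver-test             # throw the rehearsal away
```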

replies(3): >>43546244 #>>43546621 #>>43547555 #
3. skydhash ◴[] No.43546244[source]
That’s when you favor stability and use an LTS OS. You can also isolate workloads by using VMs. Containers are nice for the installation part, but the immutability can be a pain.
4. kuon ◴[] No.43546621[source]
I manage about 500 servers. Critical services like DNS, mail, tftp, monitoring, routing, firewall... all run OpenBSD in an N+1 configuration, and in 15 years we have had zero issues with that.

Now, most servers are app servers, and they all run Arch Linux. We prepare images and boot them with PXE.

Both of those are out of scope for self hosting.

But we also have about a dozen staging, dev, and playground servers, and those are just regular installs of Arch. We run postgres, redis, apps in many languages... For all that we use system packages and the AUR. DB upgrade? Zfs snapshot, then I follow the arch wiki postgres upgrade; it takes a few minutes, there is downtime, but it is fine. You mess anything up? Zfs rollback. You miss a single file? cd .zfs/snapshot and grab it. I get about 30 minutes of cumulative downtime per year on those machines. That's more than good enough for any self host.
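The snapshot-first upgrade described above, as a sketch (the dataset name and mountpoint are assumptions, not from the comment):

```sh
zfs snapshot tank/pgdata@pre-upgrade   # safety net before touching anything
systemctl stop postgresql
# ...follow the arch wiki postgres upgrade steps here...
systemctl start postgresql

# Messed something up? One-line undo:
zfs rollback tank/pgdata@pre-upgrade

# Need just one old file? Snapshots are browsable read-only:
ls /tank/pgdata/.zfs/snapshot/pre-upgrade/
```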

We use arch because we try the latest "toys" on those machines. If you self host, take an LTS distribution and you'll be fine.

5. nijave ◴[] No.43547555[source]
Containers are fine. Run them on a Linux host to save yourself some headaches
6. auxym ◴[] No.43549393[source]
It seems you're talking about self-hosting a website or web app that you are developing for the public to use.

My vision of self-hosting is basically the opposite. I only self-host existing apps and services for my and my family's use. I have a TrueNAS box with a few disks, run Jellyfin for music and shows, run a Nextcloud instance, a restic REST server for backing up our devices, etc. I feel like the OP is more targeted at this type of "self hosting".