Otherwise good article. If you want to go rootless (which you should!), Podman is the way to go; but Docker works rootless too, with some modifications [1]. I have found Docker rootless to be reliable and robust on both Debian and Ubuntu. It also solves permissions problems because your rootless user owns files inside and outside the container, whereas with rootful setups all files outside the container are owned by root, which can be a pain.
Also, you don't need Watchtower. Automatic `docker compose pull` can be set up with a standard crontab; see [2] (a minimal sketch follows below).
[1]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...
[2]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...
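A minimal crontab sketch of that approach, assuming a compose project living in a placeholder directory (adjust paths and schedule to taste):

```
# Hypothetical entry in the rootless user's crontab (crontab -e):
# pull newer images nightly and recreate any containers whose image changed.
0 4 * * * cd /home/deploy/myservice && docker compose pull --quiet && docker compose up -d >> /home/deploy/compose-update.log 2>&1
```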
While it's not nearly as powerful as, say, DataDog, it provides the core essentials: CPU, memory, disk, network, temperature and even GPU monitoring (via the agent only).
1. RHEL 9 with Developer Subscription. Installed dnf-automatic, set `reboot = when-changed`, so it's zero effort to reliably apply all updates with daily reboots. One or two minutes of downtime, not a big deal.
2. For services: podman with quadlets. It's an RH-flavoured replacement for docker-compose. Not sure if I like it, but I guess that's the "future", so I'm embracing it. Every service is a custom-built image with a common parent to reduce space waste (by reusing the base OS layer). A sketch of both the dnf-automatic and quadlet pieces follows below.
So far I want to run static HTTP (nginx), vaultwarden, postfix and some webmail. Maybe more in the future.
This setup wastes a lot of disk space on image data, so expect to order a few more gigabytes of disk to pay for modern tech.
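For reference, a minimal sketch of both pieces; the option and unit names follow dnf-automatic and Podman quadlet conventions, but the image name, port and paths are placeholders, so verify against your versions:

```
# 1. dnf-automatic: the relevant lines in /etc/dnf/automatic.conf
#    (edit the existing file rather than replacing it):
#      [commands]
#      apply_updates = yes
#      reboot = when-changed
sudo dnf install -y dnf-automatic
sudo systemctl enable --now dnf-automatic.timer

# 2. A quadlet unit for a rootless user: ~/.config/containers/systemd/web.container
#    (use /etc/containers/systemd/ for rootful). Podman generates web.service from it.
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/web.container <<'EOF'
[Unit]
Description=Static site (nginx)

[Container]
Image=localhost/my-nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user start web.service
```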
Bitnami PostgreSQL Helm chart - https://github.com/bitnami/charts/tree/main/bitnami/postgres...
On an unrelated note, an article of how to rent a VPS in China would be interesting :)
I'm curious as to what issues you might be alluding to!
Nix (and I recently adopted deploy-rs to ensure I keep SSH access across upgrades for rolling back or other troubleshooting) makes experimenting really just a breeze! Rolling back to a working environment becomes trivial, which frees you up to just try stuff. Plus things are reproducible so you can try something with a different set of machines before going to "prod" if you want.
Seems like Docker has won so comprehensively that even more convenient (but unfamiliar) options are pushed to use it.
It's stateful (cleans up things when they're no longer in your config), procedural (you control the flow and can trigger things as needed), and supports flexible deployment models (push or pull). Full disclosure, I created it and use it across my business and personal devices.
Given that apparently it's quite difficult to even get a WeChat account without a national ID, I suspect that step 1 is "learn mandarin" and step 2 is "get a Chinese national ID".
If you need more power: I had success with the HP ProDesk Mini (or any other one-litre PC); you can get these second hand from around $150 and extend RAM and SSDs however you like. You can even pick the processor / generation to fit your needs best. These draw from something like 30W, if I'm not mistaken.
I have no experience with real and expensive server hardware, but most people don't need that for a homelab.
Self-hosting, to me, means at the very least having physical access to the machines.
I'm just a boomer (technically a millennial) who sticks to Arch Linux even when it comes to servers, and I have zero friction, really. I have no issues self-hosting whatever I or a client require, keeping it minimal and functional.
I self-host like it is 2000 (apart from a couple of more modern things, if you consider systemd, certbot, etc. modern). :D
But I just wanted to comment something similar. It probably depends heavily on how many services you self-host, but I have 6 services on my VPS and they are just simple podman containers that I just run. Some of them automatically, some of them manually. On top of that a very simple nginx configuration (mostly just subdomains with reverse proxy) and that's it. I don't need an extra container for my nginx, I think (or is there a very good security reason? I have "nothing to hide" and/or lose, but still). My lazy brain thinks that as long as I keep nginx up to date with my package manager and my certbot running, I'll be fine.
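For what it's worth, a minimal sketch of that kind of subdomain server block (domain, port and paths are placeholders; certbot normally adds the TLS parts for you):

```
# Hypothetical /etc/nginx/conf.d/app.conf -- one subdomain proxied to a local container port.
sudo tee /etc/nginx/conf.d/app.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx
# sudo certbot --nginx -d app.example.com   # obtains a cert and adds the 443 block
```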
Don't worry about the servers. Worry about mandated software on the client
You are right though, it gives significantly more control to users. It's just realising 100% of the benefits that might be trickier.
I can see running something in a Docker container, and while I'd advise against containers that ship with EVERYTHING, I'd also advise against using Docker Compose to spin up an ungodly amount of containers for every service.
You shouldn't be running multiple instances of PostgreSQL, or anything for that matter, at home. Find a service that can be installed using your operating system's package manager and set everything to auto-update.
Whatever you're self-hosting, be it for yourself, family or a few friends, there's absolutely nothing wrong with SQLite, files on disk or using the passwd file as your authentication backend.
If you are self hosting Kubernetes to learn Kubernetes, then by all means go ahead and do so. For actual use, stay away from anything more complex than a Unix around the year 2000.
"Premature clustering is the source of all evil" - or something like that.
Things that haven't worked for me:
- Standalone Docker: Doesn't work great on its own. Containers often need to be recreated to modify immutable properties, like the specific image the container is running. To recreate the container, you need to store some state about how it _should_ work elsewhere.
- Quadlet: Too hard to manage clusters of services. Podman has subtle differences to Docker that occasionally cause problems and really tempting features (e.g. rootless) that cause more problems if you try to use them.
- Kubernetes: Waaaay too heavy. Even the "lightweight" distributions like k3s, k0s etc. embed large components of the official distribution, which are still heavy. Part of the embedded metric server for example periodically enumerates every single open file handle in every container. This leads to huge CPU spikes for a feature I don't care about.
With my setup now, I can more or less copy-paste a template into a new file, tweak some strings and have a HTTPS-enabled service available at https://thing.mydomain.mine. This works pretty painlessly even for services that need several volumes to maintain state or need several containers that work together.
What stops me is security. I simply do not know enough about securing a self-hosted site on real hardware in my home and despite actively continuing to learn, it seems like the more I learn about it, the more questions I have. My identity is fairly public at this point, so if I say the wrong thing to the wrong person on HN or whatever, do I need to worry about someone much smarter than me setting up camp on my home network and ruining my life? That may sound really stupid to many of you, but this is the type of anxiety that stops the under-informed from trying stuff like this and turning to services like Akamai/Linode or DO that make things fairly painless in terms of setup, monitoring and protection.
That said, I'm 110% open to reading/watching any resources people have that help teach newbies how to protect their assets when self-hosting.
Then they decided to port everything to K8 because of overblown internet drama and I lost all interest. Total shame that a great resource for Nix became yet another K8 fest.
Edit: anyone actually interested in such a post?
Just use a static site generator like zola or hugo and rsync to a small VPS running caddy or nginx. If you need something dynamic, there are many frameworks you can just rsync too, with few dependencies. Or use PHP, it's not that bad. If you use something like WordPress, just restrict all locations except the public ones to your IP in the nginx config and you should be fine.
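A minimal deploy sketch for that workflow, assuming the default output directory and placeholder host/paths:

```
# Build the static site locally, then push only changed files to the VPS docroot.
zola build        # or: hugo --minify
rsync -avz --delete public/ deploy@vps.example.com:/var/www/mysite/
```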
If you have any critical stuff, create a zfs dataset and use that to back up to another VPS using zfs send; there are tools to make it easy, much easier than DB replication.
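A bare-bones sketch of that replication idea; dataset, snapshot and host names are placeholders, and tools like syncoid wrap this loop up for you:

```
# Take a snapshot, then replicate it incrementally to the backup VPS.
# "@prev" stands for the last snapshot that already exists on both sides.
zfs snapshot tank/critical@today
zfs send -i tank/critical@prev tank/critical@today | \
  ssh backup@othervps.example.com zfs receive backup/critical
```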
Block port 22, secure SSH with certificates only. Allow port 443 and configure your web server as a reverse proxy with a private backend.
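A sketch of the corresponding sshd_config lines (key-only auth is the main point; certificate support via your own user CA is optional):

```
# Relevant lines in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   KbdInteractiveAuthentication no
#   PermitRootLogin prohibit-password
#   # optional, for SSH certificates signed by your own CA:
#   TrustedUserCAKeys /etc/ssh/user_ca.pub
sudo sshd -t && sudo systemctl reload sshd
```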
You don't need an IDS, you don't need a WAF and you don't need Cloudflare.
Unless you become the next Facebook; that's when you need to start being seriously concerned about security.
Would you please call this something else?
"It automatically reconciles", perhaps? I know that a multi-word phrase isn't nearly as snappy, but not only are "it's started" and "started" overloaded with a bunch of meanings, approximately zero of them mean what you want them to mean in this new context.
Wut? For many, a Raspberry Pi with 1 GB RAM and a regular SD card can be enough; you really don't need to go fancy if you don't want to run anything particularly heavy. Or if it's CPU-intensive then you might need the newest Pi or something even beefier, but still only the lowest RAM and smallest/slowest storage options (like for WordPress). As you say, it depends on needs.
I always recommend using an old laptop to start out with because you've already got it anyway and it's already low power yet very powerful: if it can run graphical software from 2020 then it'll be fine as server until 2030 for anything standard like a web server (with half a dozen websites and databases, such as a link shortener, some data explorers, and my personal site), torrent box, VPN server, mail server, git server, IRC bouncer, Windows VM for some special software, chat bot, etc. all at once. At least, that's what I currently run on my 2012 laptop and the thing is idle nearly the whole time. Other advantages of a laptop include a built-in KVM console and UPS, at least while you still trust the old battery (one should detach and recycle that component after some years)
I've contented myself using TLS client certs on my family's Android phones (which do not work at all on iOS for something like Home Assistant).
So you don't self-host at home, right?
I have been considering setting up a physical DMZ at home, with two routers (each with its own firewall), such that my LAN stays unmodified and my server can run between both routers. Then it feels like it would be similar to having a VPS in terms of security, maybe?
I have four jails, each running its own bhyve VM; each VM runs another FreeBSD OS, allowing me to host jails for the different services: email, web and game servers.
I'm not a fan of DMZs, as they get messy: you then have to ensure your host is protected correctly. So I use bridges; I have two bridges, an outer and an inner.
Services requiring outbound internet access are tapped to the outer bridge, which is throttled and, if required, can load balance between it and the inner bridge. The inner bridge is under a deny-all, allow-some policy, restricted to my own set of home IPs.
The outer bridge cannot contact services on the inner one, but the inner can contact the outer, and it only hosts internally.
This is all done with PF within each jail, as each jail provides you with its own vnet adapter, which can be attached to a bridge.
If you wish to learn further, that is what you work up to. But for the personal user who wishes to self-host and have an internet presence, a firewall is just fine.
My current setup is to rent a cheap $5/month VPS running nginx. I then reverse SSH from my home to the VPS, with each app on a different port. It works great until my electricity goes out; when it comes back on, the apps become unavailable. I haven't gotten the restart script to work 100% of the time.
But, I'd love to hear thoughts on security of reverse SSH from those that know.
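On the restart problem, a sketch of running the reverse tunnel as a systemd unit so it comes back on its own after a power cut; the unit name, ports, user and host are placeholders:

```
# /etc/systemd/system/reverse-tunnel.service on the home machine.
# Exposes local port 8080 as port 18080 on the VPS for nginx to proxy to.
cat <<'EOF' | sudo tee /etc/systemd/system/reverse-tunnel.service
[Unit]
Description=Reverse SSH tunnel to VPS
After=network-online.target
Wants=network-online.target

[Service]
User=tunnel
ExecStart=/usr/bin/ssh -N -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes \
    -R 18080:localhost:8080 tunnel@vps.example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now reverse-tunnel.service
```

The `tunnel` user is assumed to have a key the VPS accepts; ExitOnForwardFailure makes ssh exit (and systemd restart it) if the remote port can't be bound.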
Cloudflare Tunnels is a step in the right direction, but it’s not end to end encrypted.
The question is then, how to secure self hosted apps with minimal configuration, in a way that is almost bulletproof?
Don't get me wrong, I love some of the software suggested. However, this is yet another post that does not take backups as seriously as the rest of the self-hosting stack.
Backups are stuck in 2013. We need plug-and-play backups for containers! No more rolling your own with zfs datasets and backing up data at the filesystem level (using sanoid/syncoid or any other alternative to manage snapshots).
Would highly recommend.
It's a shame, I agree, because it was nicely integrated with Docker's own tooling. Plus I wouldn't have had to learn about k8s :)
I remember spending time on this as a teenager but I haven't touched my MariaDB config in a decade now probably. Ah no, one time a few years ago I turned off fsyncing temporarily to do a huge batch of insertions (helped a lot with qps, especially on the HDD I used at the time), but that's not something to leave permanently enabled so not really tuning it for production use
And I like that I can deploy images which basically don't have any requirement to be deployable to Docker Swarm. Is that also the case with Unraid?
I'm a security consultant so this is not a problem I have. To me it seems very straightforward and like most things are secure by default (with the exceptions being notorious enough that I'd know of it), so I'm interested in the other perspective
(Docker Swarm only for now, though I’m thinking about adding k8s later this year)
Sacrificing some convenience? Probably. But POSIX shell and coreutils is the last truly stable interface. After ~12 years of doing this I got sick of tool churn.
If the software you host constantly has vulnerabilities and something like apt install unattended-upgrades doesn't resolve them, maybe the software simply isn't fit for hosting no matter what team you put on it. That hired team might as well just spend some time making it secure rather than "keeping on top of vulnerabilities"
But what about other services, like if you want a database server as well, a mail server, etc.?
I started using containers when I last upgraded hardware, and while it's not as beneficial as I had hoped, it's still an improvement to be able to clone one, do a test upgrade, and only then upgrade the original, as well as being able to upgrade services one by one rather than committing to a huge project where you upgrade the host OS and everything has to move to the new major version at once.
One change I made that may help with this, is to not install crap on the host that I don't plan to use for a long time. Trying out a new database server or want to set up an Android IDE for a temporary project? Use a VM, don't clutter up random files all over the host. Is this what is happening on your Windows perhaps?
Nginx handles proxying and TLSing all HTTP traffic. It also enforces access rules: my services can only be reached from my home subnet or VPN subnet. Everywhere else gets a 403.
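A minimal sketch of that access rule, placed inside the relevant nginx server or location block (the subnets are placeholders):

```
allow 192.168.1.0/24;   # home LAN
allow 10.8.0.0/24;      # VPN subnet
deny  all;              # everyone else gets a 403
```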
The end result is right clicking has noticeable delay in some directories. Boot takes a few seconds longer than it did the first time it was installed. Some applications, even Task Manager inexplicably hang with a white window for a few seconds.
The shell (part of Explorer) no longer displaying anything when searching in the Start menu (because it tries to connect to the internet to search Bing, and that sometimes stops working, rendering half the Start menu useless).
The “modern” settings app hanging with its blue window and icon for anywhere from seconds to indefinitely.
It’s just a lot of this bullshit that adds up. Reinstalling Windows makes it like it was day one.
On Arch Linux, all system files are listed, along with their content hashes and expected permissions/ownership, in the installed package database. So it's possible to just list changed files in /etc or unexpected files in the system, or files with unexpected permissions, and do a manual cleanup/checkup if needed. No idea how I'd even approach that on Windows.
I guess the only time I'd need to re-install would be if I messed the system so bad that manual fixup would be too laborious over fresh setup and reconfiguration. (And I'd have to lose system backups too)
New laws come to mind. If a government decides to try to outlaw encryption again, cloud/hosting companies located there wouldn't have a choice but to comply, or give up on the business. The laws could also be written in such a way that individuals are responsible for avoiding it, even self-hosters, and if people use it anyway, they could be held legally responsible for its potential harms.
Fully automated, incremental, verified backups, and restoring is one click of a button.
$ curl https://kiranet.org/self-hosting-like-its-2025/
<!DOCTYPE html>
<html lang="en">
<head>
<title>//localhost:1313/posts/self-hosting-like-its-2025/</title>
<link rel="canonical" href="//localhost:1313/posts/self-hosting-like-its-2025/">
<meta name="robots" content="noindex">
<meta charset="utf-8">
<meta http-equiv="refresh" content="0; url=//localhost:1313/posts/self-hosting-like-its-2025/">
</head>
</html>
I have been writing up my thoughts (and an example): https://andrewperkins.com.au/kamal/
The ability to deploy to both cloud servers and on-premises is a big win as I often work on projects that have a mix of both.
As the sibling comment says, it’s focussed on web servers. In my use case that is fine!
It's easy to manage and reason about.
WAFs etc. just hide the fact that the code in your service is full of holes.
I haven't needed to tune selfhosted databases. They do fine for low load on cheap hardware from 10 years ago.
[1] https://www.bbb.org/article/news-releases/20509-amazon-brush... [2] https://www.reddit.com/r/tulsa/comments/hpe8s1/just_got_a_su...
The solution is a secure software in front. It could be Wireguard, but sometimes you don’t know your users or they don’t want to install anything.
Docker lets my OS be boring (and lets me basically never touch it) while having up-to-date user-facing software. No "well, this new version fixes a bug that's annoying me, but it's not in Debian stable… do I risk a 3rd-party backport repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?"
I just use shell scripts to launch the services, one script per service. Run the script once, forget about it until I want to upgrade it. Modify the version in the script, take the container down and destroy it (easily automated as part of the scripts, but I haven’t bothered), run the script. Done, forget about it again until next time.
Almost all the commands I run on my server are basic file management, docker stuff, or zfs commands. I could switch distros entirely and hardly even notice. Truly a boring OS.
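A sketch of the kind of one-script-per-service launcher described above; the image, version, port and data path are placeholders:

```
#!/bin/sh
# run-vaultwarden.sh -- re-run after bumping VERSION to upgrade in place.
VERSION=1.2.3
docker rm -f vaultwarden 2>/dev/null || true
docker run -d \
  --name vaultwarden \
  --restart unless-stopped \
  -p 127.0.0.1:8081:80 \
  -v /srv/vaultwarden:/data \
  vaultwarden/server:"$VERSION"
```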
The package came from a US company in Texas not China. Not directly, the mask could have been made anywhere, but the package did not contain any other mail labels like when you get something from China. And never happened before, never happened again, and was literally only a single mask.
Still, seems to fit anyway because the brushing descriptions do vary in the details a little. My example still fits.
Or maybe it still was the HN guy, and this was just the method they used because they knew about it.
Anyway thank you.
Yup, it's basically like a "Docker Compose Manager" that lets you group containers more easily, since the manifest file format is basically Docker Compose's with just 1-2 tiny differences.
If there's one thing I would like Docker Swarm to have, it's to not have to worry about which node creates a volume; I just want the service to always be deployed with the same volume without having to think about it.
That's the one weakness I see for multi-node stacks, the thing that prevents it from being "Docker Compose but distributed". So that's probably the point where I'd recommend maybe taking a look at Kubernetes.
If using Podman, should I use rootless containers (which IMO suck because you can't do macvlan, so the container won't easily get its own IP on my home network)? Is it ok to just use rootful Podman with an idmapped user running without root privileges inside the container and drop all unnecessary capabilities? Should I set up another POSIX user before, such that breaking out of the container would in the end just yield access to an otherwise unused UID on the host?
If using systemd-nspawn, do all the above concerns about rootful / rootless hold? Is it a problem that one needs to run systemd-nspawn itself with root? The manpage itself mentions "Like all other systemd-nspawn features, this is not a security feature and provides protection against accidental destructive operations only.", so should I trust nspawn in general?
Or am I just being paranoid and everything should just be running YOLO-style with UID 1000 without any separation?
All of this makes me quite wary about running my paperless-ngx instance with all my important data next to my Vaultwarden with all of my passwords next to any Torrent clients or anything else with unencrypted open ports on the internet. Also keeping everything updated seems to be a full time job by itself.
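For the rootful-Podman variant asked about above, a hedged sketch of what the flags could look like; the image is a placeholder, and whether this is "enough" is exactly the judgment call in question:

```
# Rootful Podman, but the container runs in its own user namespace mapped to
# otherwise-unused host UIDs (--userns=auto needs a "containers" range in
# /etc/subuid and /etc/subgid on some distros), with all capabilities dropped.
sudo podman run -d \
  --name app \
  --userns=auto \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only \
  -p 8080:8080 \
  example.com/myapp:latest
```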
Now most servers are app servers, and they all run archlinux. We prepare images and we run them with PXE.
Both those are out of scope for self host.
But, we also have about a dozen staging, dev, and playground servers. And those are just regular installs of Arch. We run postgres, redis, apps in many languages... For all that we use system packages and the AUR. DB upgrade? ZFS snapshot, and I follow the Arch wiki postgres upgrade; it takes a few minutes, there is downtime, but it is fine. You mess anything up? ZFS rollback. You miss a single file? cd .zfs/snapshots and grab it. I get about 30 minutes of cumulative downtime per year on those machines. That's more than enough for any self-hosting.
We use arch because we try the latest "toys" on those. If you self host take an LTS distribution and you'll be fine.
From first hearing about Sandstorm at the first open beta 10 years ago (https://news.ycombinator.com/item?id=10147774) and reading about it on and off since then, this is the first time I've heard anyone pitching it for "medical and other highly regulated industries". Where exactly does this come from?
> There's still nothing else quite like it
There are plenty of other similar self-hosted platforms; YunoHost is probably the closest, most mature and most feature-packed alternative to Sandstorm, at least as far as I know.
> I'm not a fan of DMZ as they get messy as you then have to ensure your host is protected correctly.
Could you elaborate on that? Specifically in my case I would have a perimeter router to which I would connect both my server and the inner router. My LAN would stay behind the inner router, so my understanding is that it still strictly has the same security as when my inner router was connected to the ISP; I just add a layer with the perimeter router.
Then the perimeter router opens the server (probably just chosen ports) to the public Internet, so that the server is reachable.
Wouldn't that mean that my host is protected correctly?
I find it strange that Docker, which already knows your volumes, app data, and config, can't automatically back up and restore databases and configs. Jeez, they could have built it right into Docker.
I might have overstated the medical field- but they did pitch it as a product for enterprises with security requirements: "Sandstorm’s users included (and may still include – there’s no way for us to tell) companies, newspapers, educational institutions, research laboratories, and even government agencies. " (https://sandstorm.io/news/2024-01-14-move-to-sandstorm-org)
I consider hosting a system or service trivial ("just run the service and open its port to the public Internet"). Then the first question is: what if the service gets compromised (that seems like the most likely attack vector, right?)? Probably it should be sandboxed. Maybe in a container (not running as root inside the container, because I understand it makes it a lot easier to escape), better if it is in a VM (using Xen maybe?). What about jails?
Now say the services are running in VMs, and the "VM manager" (I don't know how to call it, I mean e.g. dom0 for Xen) is only accessible from my own IP (ideally over a VPN if it's running in a VPS, or just through the LAN if running at home?), the next question is: what happens if one of the services gets compromised? I assume the attacker can then compromise the VM, so now what are the risks for me? I probably should never ssh as a user and then login as root from there, because if it's compromised the attacker can probably read my password? Say I only ever login through ssh, either as root directly or as the user (but never promoting myself to root from the user), what could be vectors that would allow an attacker to compromise my host machine?
I listened to a lot of "Darknet Diaries" episodes, and the pentesters always say "I got in, and then moved laterally". So I'm super scared about that: if I run a service exposed to the Internet, I assume it may get compromised someday (though I'll do my best to protect it and keep it up-to-date). But then when it gets compromised, how can I prevent those "lateral moves"? I have no idea, as in "I don't know what I don't know".
All that to say, I would love to find a book or blog posts that explain those things. Tutorials I see usually teach how to run a service in docker and don't really talk about security.
One could set up a Docker Compose service that uses rclone to gzip and back up your docker volumes to something durable to get this done. An even more advanced version of this would automate testing the backups by restoring them into a clean environment and running some tests with BATS or whatever testing framework you want.
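The same idea as a plain shell-and-cron sketch rather than a dedicated compose service (the volume name, backup path and rclone remote are placeholders):

```
# Archive a named volume from a throwaway container, then push the archives remotely.
docker run --rm \
  -v myapp_data:/volume:ro \
  -v /srv/backups:/backup \
  alpine tar czf /backup/myapp_data-$(date +%F).tar.gz -C /volume .
rclone copy /srv/backups remote:docker-volume-backups
```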
Each VM or container gets a data mount on a zvol. Containers go to OS mount and each OS has its own volume (so most VMs end up with 2 volumes attached)
IIRC the default shared_buffers is 128MB; the usual advice is around 25% of system RAM for shared_buffers (with effective_cache_size set to roughly 50-75%).
For most apps, the answer is usually "use a database" that correctly saves data.
Not sure on "easy" backups besides just running pg_dump on a cron but it's not very space efficient (each backup is a full backup, there's no incremental)
1. lighttpd exposing a website, using letsencrypt and a cron job to run certbot and restart lighttpd.
2. mox (https://www.xmox.nl) to run a mail server, with PTR records set up by my ISP. I am not with a CG-NAT ISP else none of this would be possible. mox makes it easy enough to set up DMARC and SPF etc. with appropriate output given that you can add to your DNS records.
3. I grab the list of IPs from https://github.com/herrbischoff/country-ip-blocks and add them to an iptables list (using ipset) every week so that I can block certain countries that have no legitimate reason to be connecting, with iptables just dropping the connection (a rough sketch of this step follows after this list). I think I also use https://github.com/jhassine/server-ip-addresses to drop certain ranges from cloud servers to make annoying script kiddies go away.
4. peer-calls (https://github.com/peer-calls/peer-calls/) to be able to video call with my family and friends (with a small STUN server running locally for NAT traversal as I recall).
5. linx (https://github.com/andreimarcu/linx-server) to share single links to files (you can get an Android app to upload from your phone)
6. filebrowser for sharing blocks of files for users (https://github.com/filebrowser/filebrowser).
7. pihole runs on it, so it blocks adverts.
8. Wireguard runs on the Pi and I open the VPN ports on my router. I use the VPN on my phone so adverts are blocked when I am out and about (traffic gets routed through the Pi).
9. navidrome runs on it and I use subtracks on Android to stream (or just download albums for when I have spotty connection).
10. mpd runs on the Pi and it plays music to some speakers in the house, so I can control it with M.A.L.P on Android.
11. I use goaccess (https://goaccess.io) to look at my server logs and see what is hitting me.
12. I use maxmind geoip data so I know which countries are hitting me.
13. minidlna runs on the Pi so I can stream films to my TV.
14. I run CUPS on it too so that my rubbish wireless Samsung printer can be printed to from Android and my wife's Apple devices without having to buy an AirPrint-compatible printer.
15. xrdp running so I can log into a visual desktop on the Pi if required.
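Regarding item 3, a rough sketch of the ipset/iptables step; the set name and the CIDR file path are placeholders (the layout follows that repo, but verify it), and it assumes the iptables rule is only inserted once:

```
# Build an ipset from a country's CIDR list and drop matching connections.
CC=xx   # two-letter country code of your choice
ipset create "blocked-$CC" hash:net -exist
ipset flush "blocked-$CC"
while read -r cidr; do
  ipset add "blocked-$CC" "$cidr" -exist
done < "country-ip-blocks/ipv4/$CC.cidr"
iptables -I INPUT -m set --match-set "blocked-$CC" src -j DROP
```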
My router doesn't expose SSH ports, just appropriate ports for these web services and the VPN. SSH keys are useful. SSH is not open to the world anyway and you have to VPN into the network first.
This all sits happily and quietly in a cupboard and uses a feeble amount of power.
Without getting too deep into it, there are some things I know how to do with computers that I probably shouldn't, so my thought is this; if I, a random idiot who just happened to learn a few things, can do X, then someone smarter than me who learned how to attack a target in an organized way probably has methods that I cannot even conceive of, can do it easier, and possibly without me even knowing. It's this weird vacillation between paranoia and prudence.
For me, it's really about acknowledging what I know I don't know. I do some free courses, muck about with security puzzles, etc, even try my own experiments on my own machines, but the more I learn, the more I realize I don't know. I suppose that's the draw! The problem is when you learn these things in an unstructured way, it's hard to piece it all together and feel certain that you have covered all your vulnerable spots.
My thinking is this; if I'm willing to fork over dollars to a VPS hosting service for peace-of-mind, then paying for a service that helps me understand what I'm doing when it comes to self-hosting should also be on the table as an alternative.
That said, I have no idea how viable of a business model that would be, or if it would even be able to be developed and upkept with reliable info. Or, maybe it already exists, but on an enterprise level that I cannot afford for some dumb little blogs.
Ensuring your infra is built in a secure way is as important as ensuring your service is built in a secure way.
Writing your first YAML or two seems intimidating at first.
But after that, everything is cut from the same cloth. It's an escape from the long dark age of every sysadmin forever cooking up whatever whimsy sort of served them at the time, an escape from each service having very different management practices around it.
And there's no other community anywhere like Kubernetes. There are unbelievably many good-quality, smart Helm charts out there, such as https://github.com/bitnami/charts/tree/main/bitnami, just ready to go. Really sweet home-ops setups like https://github.com/onedr0p/home-ops show that once you have a platform under foot, adding more services is really easy, and they cover an amazing range of home-ops things you might be interested in.
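As an illustration, pulling in one of those charts is roughly this; the release name, namespace and values are placeholders, and the exact value keys should be checked against the chart's docs:

```
# Install the Bitnami PostgreSQL chart from its OCI registry into its own namespace.
helm install my-db oci://registry-1.docker.io/bitnamicharts/postgresql \
  --namespace db --create-namespace \
  --set auth.username=myapp --set auth.database=myapp
```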
> Last thing I need is Kubernetes at home
The last thing we need is an incredibly shitty attitude. Fuck around and find out is the hacker spirit. It's actually not hard if you try, and actually having a base platform where things follow common patterns & practices & you can reuse existing skills & services is kind of great. Everything is amazing, but the snivelling shitty whining, without even making the tiniest little case for your unkind low-effort hating, will surely continue. Low-signal people will remain low signal; best avoid.
My vision of self-hosting is basically the opposite. I only self-host existing apps and services for my and my family's use. I have a TrueNAS box with a few disks, run Jellyfin for music and shows, run a Nextcloud instance, a restic REST server for backing up our devices, etc. I feel like the OP is more targeted at this type of "self-hosting".
Right now my "servers" are Dell micro i5s. I've have used RPI 3 and 4 in the past. My initial foray into self-hosting were actual servers. Too hot, too noisy and too expensive to run continuously for my needs, but I did learn a lot. I still do even with the micros and pis.
It's not uncommon with self-hosting services using docker. It makes it easier to try out a new stack and you can mix and match versions of postgresql according to the needs of the software. It's also easier to remove if you decide you don't like the bit of software that you're trying out.
(As does the ever-present HN urge to downvote things for no comprehensible reason. Seriously, why the fuck would someone downvote an honest explanation for a typo? Animals, I guess.)
I've been running a DigitalOcean VPS for years hosting my personal projects. These include a static website, n8n workflows, and Umami analytics. I used manual Docker container management, Nginx, and manual Let's Encrypt certificate renewals. I was too lazy even to set up certbot.
I've migrated to a Portainer + Caddy setup. Now I have a UI for container management and automatic SSL certificate handling. It took about two hours.
Thanks for bringing me to 2025!
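The Caddy side of such a setup can be as small as this Caddyfile sketch; the domains and upstream container names/ports are placeholders, and Caddy obtains and renews the certificates itself:

```
# Hypothetical Caddyfile: Caddy terminates TLS and proxies to the containers.
umami.example.com {
    reverse_proxy umami:3000
}
n8n.example.com {
    reverse_proxy n8n:5678
}
```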
I haven't tried it myself yet. Has anybody else given it a spin?
[0]: https://canine.sh/
While home routers tend to set their rules as outbound allowed and inbound denied, my DC just provides me with a network cable to the big pond of data.
How I secure that for my home network is using my personal rig with multiple network ports.
One port acts as a public bridge, and the 3rd and 4th network ports are then assigned to the private bridges.
The 2nd port then sits in a middle bridge where it communicates to both the public and private bridge.
So, if you're using filesystem snapshots as the source of backups for the database, then I agree, you _should_ be good. The regular pg_dumps are a workaround for other cases for me.
In these rare cases I usually just compile a newer deb package myself and let the package manager deal with it as usual. If there are too many dependencies to update or it's unusually complex, then it's container time indeed - but I didn't have to go there on my server so far.
Not diverging from the distro packages lets me not worry about security updates; Debian handles that for me.
I think it's a great way to learn Kubernetes if you're interested in that.
Things I do:
* Make sure domain WHOIS does not point to me in any way, even if that means using some silly product like "WHOIS GUARD"
* Lock down any and all SSH access. Preferably only allow key-based authentication.
* Secure the communication substrate. For me this means running a Zerotier network which all dependent services listen on. I also try to use Unix sockets for any services colocated on the same operating system and restrict the service to only listen on sockets in a directory specifically accessible by the service.
* Try to control the permission surface of any service as much as possible. Containers can be a bit heavyweight for self-hosting but make this easy. There are alternatives like bubblewrap and firejail as well.
* Make use of services like fail2ban which can automate some of the hunting of bad actors for you (a minimal jail sketch follows after this list).
* Consider hosting a listener for external traffic outside of your infra. For redundancy, load-shedding, and for security I have an external VPS that runs haproxy before routing over Zerotier to my home infrastructure. I enforce rate limits and fail2ban at the VPS so that bad actors get stopped upstream and use none of my home compute or bandwidth. (I also am setting up some redundant caches that live on the VPS so if my home network is down, one of my services can failover.)
* Segregate data into separate databases and make sure services only have access to databases that they need. With Postgres this is really simple with virtual databases being tied to different logins. I have some services that prune databases that run in a cron-like way (but using snooze instead) and they have no outbound net access.
If your network layer is secure and your services follow least-privilege, then you should be fairly in the clear.
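For the fail2ban point above, a minimal /etc/fail2ban/jail.local sketch; the thresholds are arbitrary examples, and fail2ban needs a restart or reload after editing:

```
[DEFAULT]
bantime  = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
```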
I have now switched from some SaaS products to self-hosted alternatives:
- Feed reader: Feedly to FreshRSS.
- Photo management/repository: Google Photos to Immich (I think; don't remember exactly).
- Bookmarks: Mozilla's Pocket to Hoarder.
And so far, my experience has been simple and awesome!
```
accessories:
  db:
    image: mysql:8.0
    host: 170.64.156.161
    env:
      secret:
        - MYSQL_ROOT_PASSWORD
    options:
      restart: always
    directories:
      - data:/var/lib/mysql
```

For my other services I am just using sqlite combined with a volume for persistence (managed by kamal)
Of course there can be security issues on your web server as well, but for a simple site this setup is learnable in an hour or two and you are ready to go.
You can hook that up on a Pi attached to your router or pay a bit to have it hosted somewhere. A domain is perhaps 2-5$, and a TLS cert you can get from Let's Encrypt.
No idea how to put everything into a container in a way that makes sense. I just run this quite often on small hosted machines elsewhere. I just install everything manually because it takes 5 minutes if you have done it before.
https://docs.opnsense.org/manual/how-tos/wireguard-client.ht...
Then on my phone I just flick on the switch and can access all my home services. It's a smidge less convenient, but feels nice and secure.
After a few years you work out that holy shit we now have 15 people looking after everything instead of the previous 4 people and pods are getting a few hits an hour. Every HTTP request ends up costing $100 and then you wonder why the fuck your company is totally screwed financially.
But all the people who designed it have left for consultancy jobs with Kubernetes on their resume and now you've got an army of people left to juggle the YAML while the CEO hammers his fist on the table saying CUT COSTS. Well you hired those feckin plums!
etc etc.
Lots of them are on here. People have no idea how to solve problems any more, just create new ones out of the old ones.
I run it alongside Portainer precisely because of the compose.yaml file, which I want to have control over.