Most active commenters
  • nijave(12)
  • Aachen(8)
  • brulard(6)
  • MortyWaves(6)
  • finnlab(5)
  • candiddevmike(4)
  • palata(4)
  • vbezhenar(4)
  • diggan(4)

221 points by finnlab | 236 comments
1. ValdikSS ◴[] No.43545265[source]
Nah, that doesn't look fun. Self-hosting in 2025 should look like Yunohost or Sandstorm.
replies(2): >>43545347 #>>43545968 #
2. Helmut10001 ◴[] No.43545273[source]
I tried Portainer once; it looked nice and had a lot of features... for which I had no use. I always found `docker compose` much easier to use: it's often just an alias and a tab away, whereas for Portainer I would have to open a browser tab and sometimes even touch my mouse!

Otherwise good article. If you want to go rootless (which you should!), Podman is the way to go; but Docker works rootless too, with some modifications [1]. I have found Docker rootless to be reliable and robust on both Debian and Ubuntu. It also solves permissions problems because your rootless user owns files inside and outside the container, whereas with rootful setups all files outside the container are owned by root, which can be a pain.

Also, you don't need Watchtower. Automatic `docker compose pull` can be set up using a standard crontab, see [2].
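
A sketch of such a cron entry (the schedule and compose project path here are placeholders, not necessarily what [2] uses):

```cron
# Nightly at 04:00: pull newer images, then restart only the services whose image changed
0 4 * * *  cd /srv/stack && /usr/bin/docker compose pull --quiet && /usr/bin/docker compose up -d --remove-orphans
```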

[1]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...

[2]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...

replies(2): >>43545400 #>>43559147 #
3. lambdadelirium ◴[] No.43545282[source]
Muh kube
4. terminalbraid ◴[] No.43545330[source]
I am going to continue to stan for dokku for hosting web apps, docker images included

https://dokku.com/

replies(5): >>43545546 #>>43545548 #>>43546401 #>>43546941 #>>43555511 #
5. arnley ◴[] No.43545347[source]
You sir know how to appreciate the simplicity such service providers deliver.
replies(1): >>43545519 #
6. kalaksi ◴[] No.43545400[source]
Can I suggest Lightkeeper (I'm the maintainer): https://github.com/kalaksi/lightkeeper. I made it for my own needs, to simplify repetitive tasks and provide an efficient view. It has hotkeys and tries to stay "agile"; you can drop to a terminal with a hotkey at any time.
7. nodesocket ◴[] No.43545401[source]
I recently discovered Beszel (https://github.com/henrygd/beszel) for monitoring all my homelab servers. It's quick, easy, and has a very clean and intuitive interface. Works especially well running inside of containers on hosts. I also wrote a very quick guide on getting it running inside of Kubernetes at https://github.com/henrygd/beszel/discussions/431.

While it's not nearly as powerful as say DataDog, it provides the core essentials of CPU, memory, disk, network, temperature and even GPU monitoring (via agent only).

8. hankchinaski ◴[] No.43545422[source]
The only thing that holds me back for self hosting is Postgres. Has anyone managed to get a rock solid Postgres setup self managed? Backups + tuning?
replies(8): >>43545468 #>>43545490 #>>43545510 #>>43545550 #>>43545777 #>>43545820 #>>43546275 #>>43547434 #
9. bauerd ◴[] No.43545430[source]
Last thing I need is Kubernetes at home
replies(11): >>43545448 #>>43545480 #>>43545584 #>>43545606 #>>43545610 #>>43545648 #>>43545656 #>>43545661 #>>43548737 #>>43550543 #>>43552457 #
10. megous ◴[] No.43545436[source]
My peace of mind comes from my self hosting in 2025 looking the same as my self-hosting in 2005 (except a move to systemd). I haven't even re-installed my workstation OS since 2006 or so.
replies(3): >>43545580 #>>43545671 #>>43547521 #
11. vbezhenar ◴[] No.43545442[source]
I'm slowly configuring my new VPS. Here's the approach I'm taking:

1. RHEL 9 with a Developer Subscription. Installed dnf-automatic and set `reboot = when-changed`, so it's zero effort to reliably apply all updates with daily reboots. One or two minutes of downtime, not a big deal.

2. For services: podman with quadlets. It's the RH-flavoured replacement for docker-compose. Not sure if I like it, but I guess that's the "future", so I'm embracing it. Every service is a custom-built image with a common parent to reduce wasted space (by reusing the base OS layer).

So far I want to run static HTTP (nginx), vaultwarden, postfix and some webmail. Maybe more in the future.

This setup wastes a lot of disk space on image data, so expect to order a few more gigabytes of disk to pay for modern tech.
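
For anyone unfamiliar with quadlets, a minimal `.container` unit looks roughly like this (image name, port and paths are assumptions for illustration, not from this comment; see podman-systemd.unit(5) for the full syntax):

```ini
# ~/.config/containers/systemd/vaultwarden.container (rootless user unit)
[Unit]
Description=Vaultwarden

[Container]
Image=docker.io/vaultwarden/server:latest
PublishPort=127.0.0.1:8080:80
Volume=%h/vaultwarden-data:/data

[Service]
Restart=always

[Install]
WantedBy=default.target
```

`systemctl --user daemon-reload` then `systemctl --user start vaultwarden` generates and runs the corresponding service.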

12. nodesocket ◴[] No.43545448[source]
I run Kubernetes on my homelab on 4x Raspberry Pis, then use Portainer to manage it. Works quite well; I can use Helm charts when available, otherwise I use Portainer and deploy container apps manually. It's really not that bad.
replies(1): >>43545614 #
13. nodesocket ◴[] No.43545468[source]
I run a few PostgreSQL instances in containers (Kubernetes via Bitnami Helm chart). I know running stateful databases is generally not best practice but for development/homelab and tinkering works great.

Bitnami PostgreSQL Helm chart - https://github.com/bitnami/charts/tree/main/bitnami/postgres...

14. ◴[] No.43545475[source]
15. rullopat ◴[] No.43545479[source]
What about Kamal? https://kamal-deploy.org/ Did anybody here use it for anything that is not a Ruby on Rails app?
replies(2): >>43545662 #>>43546166 #
16. import ◴[] No.43545480[source]
Exactly. I am hosting 30+ services using docker compose and very happy. I don’t want to troubleshoot k8s in the early morning because home assistant is down and light dimmers are not working for some random k8s reason.
replies(3): >>43545612 #>>43545699 #>>43546061 #
17. jagermo ◴[] No.43545488[source]
I really like this software-centric approach, but I am a bit stuck when it comes to the hardware. Are there good resources on servers that don't use too much energy and are fairly quiet? What CPU is a good all-round choice?
replies(1): >>43545543 #
18. candiddevmike ◴[] No.43545490[source]
What is your RTO/RPO?
replies(2): >>43545742 #>>43547488 #
19. TrayKnots ◴[] No.43545501[source]
I am actually worried about the self-hosting pandemic. We self-hosters will stop flying under the radar. I wonder how long it will take until our Matrix instances are required to be backdoored, and our Immich instances are scanning our pictures with AI.

On an unrelated note, an article of how to rent a VPS in China would be interesting :)

replies(5): >>43545532 #>>43545578 #>>43545582 #>>43545586 #>>43545788 #
20. lytedev ◴[] No.43545510[source]
I self-host Postgres at home and am probably screwing it up! I do at least have daily backups, but tuning is something I have given very little thought to. At home, traffic doesn't cause much load.

I'm curious as to what issues you might be alluding to!

Nix (and I recently adopted deploy-rs to ensure I keep SSH access across upgrades for rolling back or other troubleshooting) makes experimenting really just a breeze! Rolling back to a working environment becomes trivial, which frees you up to just try stuff. Plus things are reproducible so you can try something with a different set of machines before going to "prod" if you want.

21. pjc50 ◴[] No.43545519{3}[source]
I'd not heard of yunohost, so I went googling and found https://www.reddit.com/r/selfhosted/comments/1ey9ayp/what_ma...

Seems like Docker has won so comprehensively that even more convenient (but unfamiliar) options are pushed to use it.

22. candiddevmike ◴[] No.43545525[source]
If any self hosters are burnt out by the state of config management tools and YAML, consider giving Etcha a shot (https://etcha.dev).

It's stateful (cleans up things when they're no longer in your config), procedural (you control the flow and can trigger things as needed), and supports flexible deployment models (push or pull). Full disclosure, I created it and use it across my business and personal devices.

replies(1): >>43545682 #
23. pjc50 ◴[] No.43545532[source]
> an article of how to rent a VPS in China would be interesting

Given that apparently it's quite difficult to even get a WeChat account without a national ID, I suspect that step 1 is "learn mandarin" and step 2 is "get a Chinese national ID".

replies(4): >>43545545 #>>43545864 #>>43546015 #>>43546066 #
24. brulard ◴[] No.43545543[source]
Depends on your needs. For some, a Raspberry Pi (ideally with 8-16 GB RAM) + SSD can be enough if you are after low power consumption.

If you need more power: I had success with HP ProDesk Mini (or any other one-litre PC), you can get these second hand from like $150 and extend RAM and SSDs however you like. You can even pick processor / generation to fit your needs best. These can have consumption from like 30W if I'm not mistaken.

I have no experience with real and expensive server hardware, but most people don't need that for a homelab.

replies(4): >>43545684 #>>43545956 #>>43546173 #>>43546323 #
25. SJC_Hacker ◴[] No.43545546[source]
Is that really self-hosting, though?

Self-hosting, to me, means at the very least having physical access to the machines.

replies(2): >>43545560 #>>43545642 #
26. rubslopes ◴[] No.43545548[source]
coolify.io is also a great open-source alternative if someone wants a web interface.
replies(1): >>43545611 #
27. homebrewer ◴[] No.43545550[source]
Put it on a zfs dataset and back up data on the filesystem level (using sanoid/syncoid to manage snapshots, or any of their alternatives). It will be much more efficient compared to all other backup strategies with similar maintenance complexity.
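
A sketch of what that can look like (dataset name and retention values are assumptions, not from this comment):

```ini
# /etc/sanoid/sanoid.conf -- snapshot the dataset holding the Postgres data dir
[tank/postgres]
  use_template = production

[template_production]
  frequently = 0
  hourly = 24
  daily = 30
  monthly = 3
  autosnap = yes
  autoprune = yes
```

Replication is then e.g. `syncoid tank/postgres backupuser@backuphost:backup/postgres` from cron. Note that ZFS snapshots are crash-consistent: Postgres will recover from one as it would from a power loss.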
replies(1): >>43545814 #
28. qudat ◴[] No.43545558[source]
I’ve been self hosting small web apps using an SSH service https://tuns.sh and it helped me replace my DO droplet pretty successfully. What’s nice is it also has built-in usage analytics, alert notifications when a tunnel goes down, and fully manageable with a remote TUI which means I don’t have to install anything to use it.
29. rubslopes ◴[] No.43545560{3}[source]
I think GP miswrote; Dokku does not host, it manages containers and makes deployment easier. It's like a self-hosted Heroku.
30. ciupicri ◴[] No.43545578[source]
Why would you rent a VPS in China?
replies(1): >>43545600 #
31. johnisgood ◴[] No.43545580[source]
I see a lot of people mentioning technologies under this submission that you don't even have to look at when you are self-hosting. It all depends on what you want to self-host: nginx, fcgiwrap (with certbot), an e-mail server, a Matrix instance, Zulip, or what have you these days. I maintain quite a few servers and I do not use these new and fancy technologies.

I'm just a boomer (technically a millennial) who sticks to Arch Linux even for servers, and I have zero friction, really. I have no issues self-hosting whatever I or a client require, keeping it minimal and functional.

I self-host like it's 2000 (apart from a couple of more modern things, if you count systemd and certbot, etc. as modern). :D

32. anticrymactic ◴[] No.43545582[source]
Isn't that the beauty of self-hosting? How could anything be enforced on user-controlled servers? Practically everything self-hosted is open source, and how would enforcing anything even work?
replies(2): >>43545593 #>>43546084 #
33. dailykoder ◴[] No.43545584[source]
I guess it can be comfortable for some people.

But I just wanted to comment something similar. It probably depends heavily on how many services you self-host, but I have 6 services on my VPS and they are just simple podman containers that I run, some automatically, some manually. On top of that, a very simple nginx configuration (mostly just subdomains with reverse proxy) and that's it. I don't need an extra container for my nginx, I think (or is there a very good security reason? I have "nothing to hide" and/or lose, but still). My lazy brain thinks that as long as I keep nginx up to date with my package manager and my certbot running, I'll be fine.
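
For reference, such a subdomain block is only a few lines (server name, backend port and certificate paths here are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # forward to the service running in a local container
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```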

34. Aachen ◴[] No.43545586[source]
Matrix server backdoors aren't an issue though? It's about the client where decryption happens. If those aren't required to upload decrypted contents, you can always overlay some encryption protocol like OTR over any chat mechanism. I remember using it on MSN via Pidgin

Don't worry about the servers. Worry about mandated software on the client

35. madeofpalk ◴[] No.43545593{3}[source]
The problem lies with people who are technical enough to self-host, but might not be confident enough to fork/make changes. Maybe you could switch services, but there's still just enough friction/soft-lock in to actually migrate.

You are right though, it gives significantly more control to users. It's just realising 100% of the benefits that might be trickier.

36. oliwary ◴[] No.43545595[source]
Does anyone have any experience with coolify? https://coolify.io/ I am considering switching the hosting of my online games to it.
37. James_K ◴[] No.43545596[source]
I tell you the real problem with self hosting: people using monospace fonts for body text.
replies(1): >>43545677 #
38. throwaway48476 ◴[] No.43545600{3}[source]
Jurisdictional arbitrage.
39. mrweasel ◴[] No.43545606[source]
It is still my opinion that most businesses do not need Kubernetes, and neither does anyone self-hosting a service at home.

I can see running something in a Docker container, and while I'd advise against containers that ship with EVERYTHING, I'd also advise against using docker-compose to spin up an ungodly number of containers for every service.

You shouldn't be running multiple instances of PostgreSQL, or of anything for that matter, at home. Find a service that can be installed using your operating system's package manager and set everything to auto-update.

Whatever you're self-hosting, be it for yourself, family or a few friends, there's absolutely nothing wrong with SQLite, files on disk or using the passwd file as your authentication backend.

If you are self-hosting Kubernetes to learn Kubernetes, then by all means go ahead. For actual use, stay away from anything more complex than a Unix from around the year 2000.

replies(5): >>43545625 #>>43545652 #>>43546042 #>>43546178 #>>43549714 #
40. ◴[] No.43545610[source]
41. brulard ◴[] No.43545611{3}[source]
Anyone with experience to compare Coolify vs. Dokku? (and maybe something else?)
42. johnisgood ◴[] No.43545612{3}[source]
I am quite happy even without Docker, but I can see the appeal in some cases.
43. dailykoder ◴[] No.43545614{3}[source]
I'm sure you find some joy in it or just like to explore what's possible. But just a reminder: Pieter Levels is running his million dollar businesses on a single VPS (if that's still correct). But yeah, if you like it, why not.

"Premature clustering is the source of all evil" - or something like that.

44. lifestyleguru ◴[] No.43545617[source]
In my own kingdom I write bash on Linux. In my own kingdom I don't have to listen to caged monkeys on caffeine. Not even a single YAML in the codebase.
45. ◴[] No.43545625{3}[source]
46. Youden ◴[] No.43545629[source]
I've been through just about everything to get where I am and I've ended up with Hashicorp Nomad and Consul with Traefik, managed by OpenTofu (open-source Terraform).

Things that haven't worked for me:

- Standalone Docker: Doesn't work great on its own. Containers often need to be recreated to modify immutable properties, like the specific image the container is running. To recreate the container, you need to store some state about how it _should_ work elsewhere.

- Quadlet: Too hard to manage clusters of services. Podman has subtle differences to Docker that occasionally cause problems and really tempting features (e.g. rootless) that cause more problems if you try to use them.

- Kubernetes: Waaaay too heavy. Even the "lightweight" distributions like k3s, k0s etc. embed large components of the official distribution, which are still heavy. Part of the embedded metric server for example periodically enumerates every single open file handle in every container. This leads to huge CPU spikes for a feature I don't care about.

With my setup now, I can more or less copy-paste a template into a new file, tweak some strings and have a HTTPS-enabled service available at https://thing.mydomain.mine. This works pretty painlessly even for services that need several volumes to maintain state or need several containers that work together.

replies(2): >>43545658 #>>43545666 #
47. boxed ◴[] No.43545642{3}[source]
Yes, it's self hosting. You have access to the machine.
48. 0xEF ◴[] No.43545644[source]
I love the idea of self-hosting, especially since I keep a number of very tiny websites/projects going at any given time, so resources would not really be too much of an issue for me.

What stops me is security. I simply do not know enough about securing a self-hosted site on real hardware in my home and despite actively continuing to learn, it seems like the more I learn about it, the more questions I have. My identity is fairly public at this point, so if I say the wrong thing to the wrong person on HN or whatever, do I need to worry about someone much smarter than me setting up camp on my home network and ruining my life? That may sound really stupid to many of you, but this is the type of anxiety that stops the under-informed from trying stuff like this and turning to services like Akamai/Linode or DO that make things fairly painless in terms of setup, monitoring and protection.

That said, I'm 110% open to reading/watching any resources people have that help teach newbies how to protect their assets when self-hosting.

replies(13): >>43545681 #>>43545687 #>>43545733 #>>43545739 #>>43546101 #>>43546191 #>>43546239 #>>43546265 #>>43546590 #>>43552531 #>>43555038 #>>43555405 #>>43556435 #
49. MortyWaves ◴[] No.43545648[source]
There is a fairly well-known/popular blogger whose blog I was following because of their self-hosting/homelab/Nix adventures.

Then they decided to port everything to K8s because of overblown internet drama and I lost all interest. A total shame that a great resource for Nix became yet another K8s fest.

50. dailykoder ◴[] No.43545652{3}[source]
I shared this sentiment. But since I just host some personal fun projects and I got really lazy when it comes to self-hosting, I found great pleasure in just creating the simplest possible docker containers. It just keeps the system super clean and easy to wipe and setup again. My databases are usually just mounted volumes which do reside on the host system
51. seba_dos1 ◴[] No.43545656[source]
"apt-get install" tends to be enough once you stop chasing latest-and-greatest and start to appreciate things just running with low maintenance more.
replies(3): >>43546157 #>>43546335 #>>43548554 #
52. JojoFatsani ◴[] No.43545658[source]
Docker Compose is very suitable for the homelab scenario. I use it on my pi.
53. raphinou ◴[] No.43545661[source]
Exactly, my first reaction was "I should write a blog post about why I still use Docker Swarm". I deploy to single node swarms, and it's a zero boiler plate solution. I had to migrate services to another server recently, and it was really painless. Why oh why doesn't Docker Swarm get more love (from its owners/maintainers and users)?....

Edit: anyone actually interested in such a post?

replies(9): >>43545769 #>>43545842 #>>43546429 #>>43546455 #>>43546541 #>>43547576 #>>43549257 #>>43549367 #>>43549757 #
54. MortyWaves ◴[] No.43545662[source]
The thing that puts me off is its seemingly heavy focus on “web apps”. I have a bunch of services I either use or wrote myself and only a handful have anything to do with the web.
55. quickslowdown ◴[] No.43545666[source]
Do you run a Nomad cluster? Or just on a single host? This is my desired state, I've set up Nomad a number of times but always get stuck in one place or another. I've gotten much further with Nomad than Kubernetes, but I've kind of always gone back to ol' faithful, writing a docker compose file and running everything that way.
replies(1): >>43555894 #
56. MortyWaves ◴[] No.43545671[source]
2006 is impressive. I can’t even imagine that. I reinstall Windows every 18 months or so.
replies(2): >>43545954 #>>43545967 #
57. kuon ◴[] No.43545673[source]
If you self host, do not use containers and all those things.

Just use a static site generator like zola or hugo and rsync to a small VPS running caddy or nginx. If you need dynamic things, there are many frameworks with few dependencies that you can also just rsync. Or use PHP, it's not that bad. If you use something like WordPress, just restrict all locations except the public ones to your IP in the nginx config and you should be fine.

If you have any critical stuff, create a zfs dataset and use that to backup to another VPS using zfs send, there are tools to make it easy, much easier than DB replication.
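
The whole deploy step for the static case is a couple of commands (site path, user and host are placeholders):

```shell
hugo --minify                 # or: zola build
rsync -avz --delete public/ deploy@vps.example.com:/var/www/example.com/
```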

replies(2): >>43545888 #>>43549393 #
58. MortyWaves ◴[] No.43545677[source]
But mono fonts let the reader know they are reading Serious Technical Content ™
replies(1): >>43547601 #
59. 0xCE0 ◴[] No.43545680[source]
This.
60. doublerabbit ◴[] No.43545681[source]
A VPS with a software firewall is more than enough.

Block port 22, secure SSH with certificates only. Allow port 443 and configure your web server as a reverse proxy with a private backend.

You don't need an IDS, you don't need a WAF and you don't need Cloudflare.

It's only when you become the next Facebook that you need to start worrying seriously about security.
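
A minimal sketch of that hardening (not the commenter's exact config; the allowed source IP is a documentation placeholder):

```shell
# /etc/ssh/sshd_config (fragment): key-based auth only
#   PasswordAuthentication no
#   PermitRootLogin prohibit-password

# Software firewall: default-deny, allow HTTPS, restrict SSH to your own IP
ufw default deny incoming
ufw allow 443/tcp
ufw allow from 203.0.113.10 to any port 22 proto tcp
ufw enable
```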

replies(3): >>43545715 #>>43545717 #>>43545744 #
61. simoncion ◴[] No.43545682[source]
> It's started (cleans up things when they're no longer in your config)...

Would you please call this something else?

"It automatically reconciles", perhaps? I know that a multi-word phrase isn't nearly as snappy, but not only are "it's started" and "started" overloaded with a bunch of meanings, approximately zero of them mean what you want them to mean in this new context.

replies(1): >>43545795 #
62. Aachen ◴[] No.43545684{3}[source]
> For some Raspberry Pi (ideally with 8-16GB RAM) + SSD can be enough

Wut? For many, a Raspberry Pi with 1 GB RAM and the regular sdcard can be enough, you really don't need to go fancy if you don't want to run anything particularly heavy. Or if it's cpu-intensive then you might need the newest Pi or something even beefier but still only the lowest RAM and smallest/slowest storage options (like for WordPress). As you say, it depends on needs

I always recommend using an old laptop to start out with because you've already got it anyway and it's already low power yet very powerful: if it can run graphical software from 2020 then it'll be fine as server until 2030 for anything standard like a web server (with half a dozen websites and databases, such as a link shortener, some data explorers, and my personal site), torrent box, VPN server, mail server, git server, IRC bouncer, Windows VM for some special software, chat bot, etc. all at once. At least, that's what I currently run on my 2012 laptop and the thing is idle nearly the whole time. Other advantages of a laptop include a built-in KVM console and UPS, at least while you still trust the old battery (one should detach and recycle that component after some years)

replies(1): >>43550393 #
63. palata ◴[] No.43545687[source]
I agree with this: I personally don't need tutorials for hosting stuff, rather tutorials about securing it properly.
replies(1): >>43545821 #
64. seba_dos1 ◴[] No.43545699{3}[source]
All my "smart home" stuff needs is mosquitto on an OpenWrt router and a bunch of cgi-bin scripts that can run anywhere. I already went through a phase of setting up tons of services, which ended up being turned off when something changed in my life (moving, replacing equipment, etc.) and never resurrected, as I couldn't be bothered to redo it without the novelty effect. So I learned from that.
65. XorNot ◴[] No.43545715{3}[source]
I'm less worried about SSH access than I am about a vulnerability in some front-end web service, though.

I've contented myself using TLS client certs on my family's Android phones (which do not work at all on iOS for something like Home Assistant).

66. palata ◴[] No.43545717{3}[source]
> A VPS with a software firewall is more than enough.

So you don't self-host at home, right?

I have been considering setting up a physical DMZ at home, with two routers (each with its own firewall), such that my LAN stays unmodified and my server can run between both routers. Then it feels like it would be similar to having a VPS in terms of security, maybe?

replies(1): >>43545723 #
67. doublerabbit ◴[] No.43545723{4}[source]
I colocate four servers in two DCs, all running FreeBSD with PF. My main host runs a jail that hosts a bhyve VM.

With four jails, each running its own bhyve VM, each VM runs another FreeBSD OS, which lets me host jails for different services: email, web and game servers.

I'm not a fan of DMZs, as they get messy: you then have to ensure your host is protected correctly. So I use bridges; I have two, an outer and an inner.

Services requiring outbound internet access are tapped to the outer bridge, which is throttled and can load balance between itself and the inner bridge if required. The inner bridge is under a deny-all, allow-some policy, restricted to my own set of home IPs.

The outer bridge cannot contact services on the inner bridge, but the inner can contact the outer, while only hosting internally.

This is all done with PF within each jail, as each jail provides its own vnet adapter, which can be attached to a bridge.

If you wish to learn more, that is what you work up to. But for a personal user who wants to self-host and have an internet presence, a firewall is just fine.

replies(1): >>43546716 #
68. segu ◴[] No.43545733[source]
You should encrypt and back up your assets regularly. We recently published a tutorial on how to do so using B2 and Infisical so that your private key doesn't live on the server: https://infisical.com/blog/self-hosting-infisical-homelab
69. fm2606 ◴[] No.43545739[source]
I'm right there with you, except at times I have thrown caution to the wind and made my sites available.

My current setup is to rent a cheap $5/month VPS running nginx. I then reverse SSH from my home to the VPS, with each app on a different port. It works great until my power goes out; when it comes back on, the apps become unavailable. I haven't gotten the restart script to work 100% of the time.

But, I'd love to hear thoughts on security of reverse SSH from those that know.
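
One way to make such tunnels survive outages is to let systemd supervise them instead of a restart script. A sketch (host, ports and user are placeholders):

```ini
# /etc/systemd/system/reverse-tunnel.service
[Unit]
Description=Reverse SSH tunnel to VPS
After=network-online.target
Wants=network-online.target

[Service]
# -N: no remote command; -R: expose local port 3000 as port 8080 on the VPS
ExecStart=/usr/bin/ssh -NT \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -o ExitOnForwardFailure=yes \
    -R 8080:localhost:3000 tunnel@vps.example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

`Restart=always` plus the keepalive options means the tunnel is re-established after a power cut or a dropped connection without manual intervention.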

replies(3): >>43545787 #>>43546006 #>>43546477 #
70. orthoxerox ◴[] No.43545742{3}[source]
1s/0s
71. mhitza ◴[] No.43545744{3}[source]
> A VPS with a software firewall is more than enough.

You want VPS-provider firewall. Docker's going to punch holes through your software firewall.

72. aborsy ◴[] No.43545751[source]
I can self host many applications, but their security must be outsourced to a company. I don’t have time to keep on top of vulnerabilities.

Cloudflare Tunnels is a step in the right direction, but it’s not end to end encrypted.

The question is then, how to secure self hosted apps with minimal configuration, in a way that is almost bulletproof?

replies(2): >>43545854 #>>43546213 #
73. FloatArtifact ◴[] No.43545753[source]
Self-Hosting like it's 2025...uhhgg...

Don't get me wrong, I love some of the software suggested. However, this is yet another post that does not take backups as seriously as the rest of the self-hosting stack.

Backups are stuck in 2013. We need plug-and-play backups for containers! No more rolling your own with zfs datasets and filesystem-level backups (using sanoid/syncoid or alternatives to manage snapshots).

replies(3): >>43546151 #>>43547094 #>>43547135 #
74. zephyreon ◴[] No.43545756[source]
I host tons of web-facing apps behind a reverse proxy. I used to have it all running in a swarm cluster with several underlying vms but decided to move everything to Unraid a while back for simplicity.

Would highly recommend.

https://unraid.net

replies(1): >>43545799 #
75. mfashby ◴[] No.43545769{3}[source]
I moved us off docker swarm to GKE some years back. The multi node swarm was quite unstable, and none of the big cloud providers offered managed swarm in the same way they offer managed k8s.

It's a shame I agree because it was nicely integrated with dockers own tooling. Plus I wouldn't have had to learn about k8s :)

76. Aachen ◴[] No.43545777[source]
Why would tuning be necessary for a regular setup? Does it come with such bad defaults? Why not upstream those tweaks so it works out of the box?

I remember spending time on this as a teenager but I haven't touched my MariaDB config in a decade now probably. Ah no, one time a few years ago I turned off fsyncing temporarily to do a huge batch of insertions (helped a lot with qps, especially on the HDD I used at the time), but that's not something to leave permanently enabled so not really tuning it for production use

replies(1): >>43547250 #
77. _mitterpach ◴[] No.43545787{3}[source]
Maybe try running your services in docker, I don't know how difficult that would be to implement for you, but if you run it in containers you can get it to start up after an outage pretty reliably.
replies(1): >>43545862 #
78. infecto ◴[] No.43545788[source]
I suspect you would have trouble hosting long term in China. I don’t recall the specifics now but IIRC every website hosted in China needs a special government ID which requires getting approval. My memory is hazy but it does feel like one of the poorer choices to host unless you live in mainland. There are many better options in the world that both do not restrict information as well as not requiring paperwork.
79. candiddevmike ◴[] No.43545795{3}[source]
Sorry, autocorrect changed stateful to started.
replies(1): >>43549955 #
80. raphinou ◴[] No.43545799[source]
As mentioned in another comment, I'm currently still happy with (single node) Docker Swarms (with the reverse proxy as described on https://dockerswarm.rocks/ ). I like that I can basically use the docker compose files published by a lot of project to deploy. How does Unraid compare in your experience?

And I like that I can deploy images which basically don't have any requirement to be deployable to Docker Swarm. Is that also the case with Unraid?

81. martin_a ◴[] No.43545812[source]
Still "rsync"ing the result of "hugo build" to a subfolder on a shared webhost. Works like a charm, hope it will do so forever. :-D
replies(1): >>43546517 #
82. candiddevmike ◴[] No.43545814{3}[source]
Filesystem backups may not be consistent and may lose transactions that haven't made it to the WAL. You should always try to use database backup tools like pg_dump.
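
A sketch of a nightly logical backup via /etc/cron.d (paths and database name are assumptions):

```cron
# Nightly at 03:30: consistent logical dump in compressed custom format
30 3 * * *  postgres  pg_dump --format=custom --file=/var/backups/pg/app_$(date +\%F).dump app
```

The custom format restores selectively with `pg_restore --dbname=app /var/backups/pg/app_2025-01-01.dump`.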
replies(4): >>43546121 #>>43546462 #>>43547389 #>>43551842 #
83. mfashby ◴[] No.43545820[source]
I've got an openbsd server, postgres installed from the package manager, and a couple of apps running with that as the database. My backup process just stops all the services, backs up the filesystem, then starts them again. Downtime is acceptable when you don't have many users!
84. Aachen ◴[] No.43545821{3}[source]
Could you give an example of a guide that helped you self host a system or service by telling you how to do the security? One that shows what information would be missing from a regular setup tutorial?

I'm a security consultant so this is not a problem I have. To me it seems very straightforward and like most things are secure by default (with the exceptions being notorious enough that I'd know of it), so I'm interested in the other perspective

replies(2): >>43546857 #>>43547817 #
85. notpushkin ◴[] No.43545836[source]
I feel like I’ve been plugging it way too many times... but if you’re looking for a more humane alternative to Portainer, check out my project, Lunni: https://lunni.dev/

(Docker Swarm only for now, though I’m thinking about adding k8s later this year)

replies(1): >>43546524 #
86. galbar ◴[] No.43545842{3}[source]
I just want to add that I also have a Docker Swarm running, with four small nodes for my personal stuff plus a couple of friends' companies.

No issues whatsoever and it is so easy to manage. It just works!

87. cullumsmith ◴[] No.43545851[source]
Still running everything from my basement using FreeBSD jails and shell scripts.

Sacrificing some convenience? Probably. But POSIX shell and coreutils are the last truly stable interface. After ~12 years of doing this I got sick of tool churn.

replies(2): >>43550576 #>>43550759 #
88. Aachen ◴[] No.43545854[source]
> security must be outsourced to a company. I don’t have time to keep on top of vulnerabilities.

If the software you host constantly has vulnerabilities and something like apt install unattended-upgrades doesn't resolve them, maybe the software simply isn't fit for hosting no matter what team you put on it. That hired team might as well just spend some time making it secure rather than "keeping on top of vulnerabilities"
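For reference, on Debian/Ubuntu that setup is roughly:

```shell
# install and switch on automatic (security) upgrades
apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades  # writes /etc/apt/apt.conf.d/20auto-upgrades
```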

replies(2): >>43546306 #>>43547318 #
89. fm2606 ◴[] No.43545862{4}[source]
Yeah, that is a good idea and as I have been doing a little bit of studying Kubernetes I thought about that too (overkill for sure).
replies(1): >>43546338 #
90. thenthenthen ◴[] No.43545864{3}[source]
Also, your home modem/router is often tied to your ID, and then there is ofc the firewall. IIRC you can get VPS hosting and an ICP code through Ali Cloud somewhat automagically. Agree it would be nice to give it a try some time.
91. Aachen ◴[] No.43545888[source]
What I'm reading is not to use containers for a web server, which makes sense because web servers have had vhosts since forever and you can host any number of sites on there independently already

But what about other services, like if you want a database server as well, a mail server, etc.?

I started using containers when I last upgraded hardware and while it's not as beneficial as I had hoped, it's still an improvement to be able to clone one, do a test upgrade, and only then upgrade the original one, as well as being able to upgrade services one by one rather than committing to a huge project where you upgrade the host OS and everything has to come with to the new major version

replies(3): >>43546244 #>>43546621 #>>43547555 #
92. ltr_ ◴[] No.43545889[source]
Just want to say that self-hosting feels like the 2000s, and that's a refreshingly good thing: you feel in control, away from all the enshittification we are experiencing right now. I'm having a blast with a group of friends with netbird/tailscale, creating little projects in our forgejo instance with full CI/CD for the infra, and day by day we are adding more and more stuff, our own little internet. We learn things and talk about technical stuff while setting up and implementing them; it's my favorite social network now. Also, f big tech.
93. Aachen ◴[] No.43545954{3}[source]
I don't think I've reinstalled since around 2014, when I switched to a different Linux distribution. It's just a pain and while it cleans up some unused packages, what's the point really? Why do you reinstall Windows every 18 months?

One change I made that may help with this, is to not install crap on the host that I don't plan to use for a long time. Trying out a new database server or want to set up an Android IDE for a temporary project? Use a VM, don't clutter up random files all over the host. Is this what is happening on your Windows perhaps?

replies(2): >>43546016 #>>43546083 #
94. Gracana ◴[] No.43545956{3}[source]
You can get Mini-ITX server motherboards with IPMI and ECC memory if you want something robust and remotely-manageable while staying small and low power. I have a SuperMicro A2SDi-4C-HLN4F, which has a 16W intel atom C3558 and is quite old at this point, but it's fine for my little home network tasks.
95. megous ◴[] No.43545967{3}[source]
Yeah, it's basically an Arch Linux install. I just rsync it to a new workstation every 5-7 years (or move the disk) and go on. :)
96. crabmusket ◴[] No.43545968[source]
I really like Sandstorm - it's a glimpse into an alternate world of how things could be.
replies(1): >>43556858 #
97. the_snooze ◴[] No.43546006{3}[source]
I do something similar with my home server, but with a WireGuard split tunnel. Much easier to set up and keep active all the time (i.e., on my phone).

Nginx handles proxying and TLSing all HTTP traffic. It also enforces access rules: my services can only be reached from my home subnet or VPN subnet. Everywhere else gets a 403.
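The access-rule part is only a few lines of nginx config (the subnets here are examples):

```nginx
# in the server/location block: allow LAN and WireGuard, 403 everyone else
allow 192.168.1.0/24;  # home subnet
allow 10.8.0.0/24;     # WireGuard subnet
deny  all;             # denied requests get 403 by default
```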

replies(1): >>43552562 #
98. vbezhenar ◴[] No.43546015{3}[source]
I didn't have any problems creating a WeChat account. Maybe I was lucky, I don't know; I just typed my phone number and it went pretty smoothly, like WhatsApp. I was also able to connect my Visa card. I did it in Kazakhstan and then was able to pay in China, no problems. Maybe they got an exception for Kazakhstan specifically; we recently got visa-free travel there.
99. MortyWaves ◴[] No.43546016{4}[source]
Windows seems intent on crappifying itself. Even if you have a small set of installed applications, many of them behave poorly. Lots and lots of small paper cuts add up: a few dozen registry entries here, a few files there, seven or eight Explorer right-click menu extensions (e.g. “Edit with X” instead of just using the normal “Open With”) set up by installers.

The end result is that right-clicking has a noticeable delay in some directories. Boot takes a few seconds longer than it did the first time it was installed. Some applications, even Task Manager, inexplicably hang with a white window for a few seconds.

The shell (part of Explorer) no longer displays anything when searching in the start menu (because it tries to connect to the internet to search Bing, and when that stops working it renders half the start menu useless).

The “modern” settings app hanging with its blue window and icon for anywhere from seconds to indefinitely.

It’s just a lot of this bullshit that adds up. Reinstalling Windows makes it like it was day one.

100. vbezhenar ◴[] No.43546042{3}[source]
I'd love to use Kubernetes for my self hosting. The only problem is it's too expensive.
replies(1): >>43546185 #
101. vbezhenar ◴[] No.43546061{3}[source]
Since I migrated our company to Kubernetes, I've almost stopped worrying about anything. It just works. I had much more trouble running a spaghetti of Docker containers and host-installed software on multiple servers; that setup broke like every week or every month. With Kubernetes I just press "update cluster" on some Saturday evening once or twice a year, and that's about it. Pretty smooth sailing.
102. TobTobXX ◴[] No.43546066{3}[source]
Did you try? It's a few years ago when I had to create one, but it was just as simple as WhatsApp (just a few more CAPTCHAs). And no VPNs or whatever, straight from a Swiss IP.
103. megous ◴[] No.43546083{4}[source]
Windows is just less organized, with files appearing all over the place and without user having knowledge what's really needed.

On Arch Linux, all system files are listed, along with their content hashes and expected permissions/ownership, in the installed package database. So it's possible to just list changed files in /etc or unexpected files in the system, or files with unexpected permissions, and do a manual cleanup/checkup if needed. No idea how I'd even approach that on Windows.
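Concretely, on Arch that check is something like (using pacman's query mode):

```shell
# packages whose installed files fail the thorough check (size/hash/perms)
pacman -Qkk | grep -v ' 0 altered files'
# who owns a given file? (errors out for files no package owns)
pacman -Qo /etc/nginx/nginx.conf
```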

I guess the only time I'd need to re-install would be if I messed the system so bad that manual fixup would be too laborious over fresh setup and reconfiguration. (And I'd have to lose system backups too)

104. diggan ◴[] No.43546084{3}[source]
> How could anything be enforced on user-controlled servers?

New laws come to mind. If a government decides to try to outlaw encryption again, cloud/hosting companies located there wouldn't have a choice but to comply, or give up on the business. The laws could also be written in such a way that individuals are responsible for avoiding it, even self-hosters, and people using it anyway could be held legally responsible for the potential harms.

105. nijave ◴[] No.43546101[source]
Don't expose anything to the Internet. Use a tunneling tool (Tailscale et al) or VPN
replies(2): >>43546370 #>>43546600 #
106. q2dg ◴[] No.43546113[source]
Cockpit-podman can be an interesting Podman-centered alternative to Portainer et al.
107. ◴[] No.43546121{4}[source]
108. marceldegraaf ◴[] No.43546151[source]
Best decision of last year for my homelab: run everything in Proxmox VMs/containers and back up to a separate Proxmox Backup Server instance.

Fully automated, incremental, verified backups, and restoring is one click of a button.

replies(1): >>43546756 #
109. skydhash ◴[] No.43546157{3}[source]
I totally agree! Containers are nice when your installation is ephemeral, deploying and updating several time in a short period. But using the package manager is as easy as you get.
110. TimPC ◴[] No.43546161[source]
I think the main reason not to self-host is the difficulty of implementing any kind of dynamic scaling. For instance, when clicking on this article I find that their site is down. Instead of benefiting from the traffic their engagement got them, they are crashing.
111. mr_mitm ◴[] No.43546163[source]
This site redirects to localhost:1313. Is this some sort of April fool's joke that I'm not getting?

    $ curl https://kiranet.org/self-hosting-like-its-2025/
    <!DOCTYPE html>
    <html lang="en">
      <head>
        <title>//localhost:1313/posts/self-hosting-like-its-2025/</title>
        <link rel="canonical" href="//localhost:1313/posts/self-hosting-like-its-2025/">
        <meta name="robots" content="noindex">
        <meta charset="utf-8">
        <meta http-equiv="refresh" content="0; url=//localhost:1313/posts/self-hosting-like-its-2025/">
      </head>
    </html>
replies(2): >>43546202 #>>43546251 #
112. echoperkins ◴[] No.43546166[source]
Yes, I have been using it and really enjoying it for deploying web apps. So far I have deployed web apps using:

* FastAPI (Python)
* Django Ninja (Python)
* Ghost CMS (Node)

I have been writing up my thoughts (and an example): https://andrewperkins.com.au/kamal/

The ability to deploy to both cloud servers and on-premises is a big win as I often work on projects that have a mix of both.

As the sibling comment says, it’s focussed on web servers. In my use case that is fine!

replies(1): >>43546936 #
113. jagermo ◴[] No.43546173{3}[source]
That sounds doable. I have a Raspberry Pi with an SSD for my Pi-hole (a bit of overkill, but I had stuff lying around), but for things like Immich it doesn't seem beefy enough.
114. drKarl ◴[] No.43546177[source]
Lol that's hilarious, the url resolves to https://localhost:1313/posts/self-hosting-like-its-2025/, how ironic!!
replies(1): >>43546222 #
115. ohgr ◴[] No.43546178{3}[source]
If it wasn't for Kubernetes we'd need 1/3rd of our operations team. We're keeping unemployment down!
replies(1): >>43556005 #
116. captaincrunch ◴[] No.43546180[source]
It sure is self hosted! https://localhost:1313/posts/self-hosting-like-its-2025/
replies(1): >>43546193 #
117. k8sToGo ◴[] No.43546185{4}[source]
How is it too expensive? If you want to use the ecosystem, you can still use something like k3s.
118. Brian_K_White ◴[] No.43546191[source]
A few days after a remark on HN, while the thread was still active, I received a mysterious package I didn't order from a weird drop-shipping service where the original sender is unknown and undiscoverable to you, the recipient. It didn't contain anything bad, just a single surgical mask (during covid, a common, basically valueless item). The message was that they could find my home address. It was a stupid message since I obviously do not hide my identity on HN. But it means you're not wrong to be careful, both in general and on HN in particular.
replies(2): >>43546286 #>>43547666 #
119. captaincrunch ◴[] No.43546193[source]
April fools I guess!!
120. spencerflem ◴[] No.43546199[source]
Sandstorm.org has picked back up the pace of development recently, and is an excellent platform wrt sharing securely between multiple users
121. k8sToGo ◴[] No.43546202[source]
Now it's suddenly returning 404
replies(1): >>43546282 #
122. interloxia ◴[] No.43546213[source]
I don't need public access to my stuff so my strategy is to use zerotier taking care that services are only able to use the virtual network.

It's easy to manage and reason about.

123. finnlab ◴[] No.43546222[source]
tried to push an update in production like a true self-hoster...works now
124. SuperSandro2000 ◴[] No.43546227[source]
Some notes:

- Uptime Kuma scales very badly, even with just 20 entries
- Docker/Kubernetes is a trap; use something more suitable for a home lab, like NixOS
125. spencerflem ◴[] No.43546239[source]
Take a look at sandstorm.org - its set of apps is fairly limited compared to the docker based options but it goes incredibly far wrt security. It was designed by the now head of Cloudflare Workers and pitched as a selfhosting platform for medical and other highly regulated industries. There's still nothing else quite like it
replies(1): >>43546627 #
126. drKarl ◴[] No.43546243[source]
I found this which is kind of hosted self-hosting: https://www.pikapods.com/
127. skydhash ◴[] No.43546244{3}[source]
That’s when you favor stability and use an LTS OS. You can also isolate workload by using VMs. Containers is nice for the installation part, but the immutability can be a pain.
128. montroser ◴[] No.43546251[source]
Yeah... Is this performance art? Intentional or otherwise? It's a jungle out there on the Internet. We should take self-hosting as an opportunity to _simplify_. It probably doesn't need to scale way up, and it doesn't need to scale across many teams of people, so it's a good time to shed a few of those layers of abstraction and get back to basics.
129. dismalpedigree ◴[] No.43546253[source]
Proxmox on a NUC. Separate RPi running HAProxy to route requests. Public 443 forwards to HAProxy. All on a separate VLAN from the home network. The router allows ssh across VLANs for specific IPs, and ssh is only available from those IPs. Some of the VMs on Proxmox run the Nebula protocol (like Tailscale but self-hosted), and there is a lighthouse on a $2 VPS. This allows me to access specific resources over the mesh network when away from home.
130. UK-Al05 ◴[] No.43546265[source]
Isn't 95% of it just blocking every port except the service you want to expose, and then making sure everything is up to date and the service is built in a secure way.

WAF's etc just hide the fact the code in your service is full of holes.
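One way to express the "block every port except the service" part, assuming ufw:

```shell
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp    # ssh; consider restricting to your own IP
ufw allow 443/tcp   # the one service you actually expose
ufw enable
```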

replies(1): >>43547852 #
131. swizzler ◴[] No.43546275[source]
I was using straight filesystem backups for a while, but I knew they could be inconsistent. Since then, I've setup https://github.com/prodrigestivill/docker-postgres-backup-lo..., which regularly dumps a snapshot to the filesystem, which regular filesystem backups can consume. The README has restore examples, too
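For anyone wiring this up by hand instead, a rough Compose sketch of the same dump-sidecar idea (image, credentials, and schedule are placeholders; `$$` escapes `$` for Compose's variable interpolation):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder
    volumes:
      - dbdata:/var/lib/postgresql/data
  db-backup:
    image: postgres:16             # reused just for its client tools
    environment:
      PGPASSWORD: example
    entrypoint: >
      sh -c 'while true; do
      pg_dump -h db -U postgres -Fc -f /dumps/app-$$(date +%F).dump postgres;
      sleep 86400; done'
    volumes:
      - ./dumps:/dumps
volumes:
  dbdata:
```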

I haven't needed to tune selfhosted databases. They do fine for low load on cheap hardware from 10 years ago.

replies(1): >>43547472 #
132. finnlab ◴[] No.43546282{3}[source]
my bad, works now. Don't test in production...
133. raphman ◴[] No.43546286{3}[source]
Hmm, my first guess would have been that you have been a target of "brushing" [1]. In a Reddit thread from 2020 [2], multiple people mention that they received surgical masks they did not order.

[1] https://www.bbb.org/article/news-releases/20509-amazon-brush... [2] https://www.reddit.com/r/tulsa/comments/hpe8s1/just_got_a_su...

replies(1): >>43546499 #
134. aborsy ◴[] No.43546306{3}[source]
The concern is zero days. There are probably lots of easy zero days, patched across a host of software, once discovered in one.

The solution is secure software in front. It could be WireGuard, but sometimes you don't know your users or they don't want to install anything.

135. skydhash ◴[] No.43546323{3}[source]
I use an old Mac mini. The only two times the fan has come on were when I was building ffmpeg and when transcoding my music library. I use it as a file server, music server, and Jellyfin, and for trying stuff.
136. alabastervlog ◴[] No.43546335{3}[source]
I only host 3rd party daemons (nothing custom) and only on my local network (plus Tailscale), so Docker’s great for handling package management and init, since I get up-to-date versions of a far broader set of services than Debian's or Ubuntu's repos, clean isolation for easy management, and init/restarts are all free. Plus it naturally documents what I need to back up (any “mounted” directories).

Docker lets my OS be boring (and lets me basically never touch it) while having up-to-date user-facing software. No “well, this new version fixes a bug that’s annoying me, but it’s not in Debian stable… do I risk a 3rd party backport repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?”

I just use shell scripts to launch the services, one script per service. Run the script once, forget about it until I want to upgrade it. Modify the version in the script, take the container down and destroy it (easily automated as part of the scripts, but I haven’t bothered), run the script. Done, forget about it again until next time.

Almost all the commands I run on my server are basic file management, docker stuff, or zfs commands. I could switch distros entirely and hardly even notice. Truly a boring OS.
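In case it's useful, one of those per-service scripts might look like this (image version and paths are examples):

```shell
#!/bin/sh
# jellyfin.sh: pin a version, mount data, let Docker handle restarts.
set -eu
VERSION=10.10.5   # bump this line to upgrade
docker rm -f jellyfin 2>/dev/null || true
docker run -d --name jellyfin \
  --restart unless-stopped \
  -p 8096:8096 \
  -v /tank/jellyfin/config:/config \
  -v /tank/media:/media:ro \
  jellyfin/jellyfin:$VERSION
```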

replies(1): >>43551876 #
137. qskousen ◴[] No.43546338{5}[source]
If you need a middle ground between docker and k8s, you might have a look at nomad. Definitely a learning curve, and I find the docs lacking, but easier to set up and maintain than k8s.
138. smjburton ◴[] No.43546343[source]
This is a great introduction to self-hosting, good job OP. As some of the other comments mentioned, discussion about self-hosted security and the importance of back-ups would be good to include. Also, you link to some great resources for discovering self-hosted applications, but it would be interesting to hear some of the software you enjoy self-hosting outside of core infrastructure. As I'm sure you're aware, self-hosters are always looking for new ideas. :)
139. crtasm ◴[] No.43546370{3}[source]
I think they want to host public websites.
140. ◴[] No.43546377[source]
141. oulipo ◴[] No.43546401[source]
Dokploy is really nice
142. gmm1990 ◴[] No.43546429{3}[source]
I'd be interested. Might be a strange question but I'll throw it out there, I seem to have a hard time finding a good way to define my self hosted infrastructure nodes and which containers can run on them, have you run into/have a solution for this? Like I want my database to run on my two beefier machines but some of the other services could run on the mini pcs.
replies(1): >>43547536 #
143. resiros ◴[] No.43546455{3}[source]
yes, please.
144. tpetry ◴[] No.43546462{4}[source]
Transactions that haven't been written to the WAL yet are also lost when the server crashes or when you run pg_dump. Stuff not in the WAL is not safe by any means; it's still a transaction in progress.
145. cenamus ◴[] No.43546477{3}[source]
I suppose you also have no public IP on your home connection?

My new provider only does CG-NAT, so I've been using a cheap server, but actually having the server at home would be nice.

replies(1): >>43549426 #
146. Brian_K_White ◴[] No.43546499{4}[source]
Interesting! I never heard of that.

The package came from a US company in Texas, not China. Not directly; the mask could have been made anywhere, but the package did not contain any other mail labels like when you get something from China. And it never happened before, never happened again, and it was literally only a single mask.

Still, seems to fit anyway because the brushing descriptions do vary in the details a little. My example still fits.

Or maybe it still was the hn guy and this just the method they used because they knew about it.

Anyway thank you.

147. finnlab ◴[] No.43546517[source]
I'll eventually set this up to automatically deploy from git, don't worry haha
replies(1): >>43548609 #
148. finnlab ◴[] No.43546524[source]
this looks really cool, I love to see some competition in this space
replies(1): >>43546999 #
149. quectophoton ◴[] No.43546541{3}[source]
> I deploy to single node swarms, and it's a zero boiler plate solution.

Yup, it's basically like a "Docker Compose Manager" that lets you group containers more easily, since the manifest file format is basically Docker Compose's with just 1-2 tiny differences.

If there's one thing I would like Docker Swarm to have, it's to not have to worry about which node creates a volume; I just want the service to always be deployed with the same volume without having to think about it.

That's the one weakness I see for multi-node stacks, the thing that prevents it from being "Docker Compose but distributed". So that's probably the point where I'd recommend maybe taking a look at Kubernetes.

150. nosebear ◴[] No.43546590[source]
I agree - I always wonder should I go overkill and put everything in its own VM for separation? Is it ok to just use containers?

If using Podman, should I use rootless containers (which IMO suck because you can't do macvlan, so the container won't easily get its own IP on my home network)? Is it ok to just use rootful Podman with an idmapped user running without root privileges inside the container and drop all unnecessary capabilities? Should I set up another POSIX user before, such that breaking out of the container would in the end just yield access to an otherwise unused UID on the host?
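For the rootful-Podman-with-dropped-privileges option, a sketch (image and paths are examples; a real app may need a capability or two added back):

```shell
# rootful podman, but the container gets its own remapped UID range
# and starts with no capabilities
podman run -d --name paperless \
  --userns=auto \
  --cap-drop=ALL \
  -v /srv/paperless:/data \
  ghcr.io/paperless-ngx/paperless-ngx:latest
```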

If using systemd-nspawn, do all the above concerns about rootful / rootless hold? Is it a problem that one needs to run systemd-nspawn itself with root? The manpage itself mentions "Like all other systemd-nspawn features, this is not a security feature and provides protection against accidental destructive operations only.", so should I trust nspawn in general?

Or am I just being paranoid and everything should just be running YOLO-style with UID 1000 without any separation?

All of this makes me quite wary about running my paperless-ngx instance with all my important data next to my Vaultwarden with all of my passwords next to any Torrent clients or anything else with unencrypted open ports on the internet. Also keeping everything updated seems to be a full time job by itself.

151. diggan ◴[] No.43546600{3}[source]
You'll have a hard time hosting websites/projects meant for the public to view, if you don't allow public internet traffic :)
replies(2): >>43550354 #>>43555064 #
152. kuon ◴[] No.43546621{3}[source]
I manage about 500 servers. Critical services like DNS, mail, tftp, monitoring, routing, firewall... all run OpenBSD in an N+1 configuration, and in 15 years we have had zero issues with that.

Now most servers are app servers, and they all run archlinux. We prepare images and we run them with PXE.

Both those are out of scope for self host.

But we also have about a dozen staging, dev, and playground servers. Those are just regular installs of Arch. We run postgres, redis, apps in many languages... For all that we use system packages and the AUR. DB upgrade? ZFS snapshot, then I follow the Arch wiki postgres upgrade; it takes a few minutes, there is downtime, but it is fine. You mess anything up? ZFS rollback. You miss a single file? cd .zfs/snapshot and grab it. I get about 30 minutes of cumulative downtime per year on those machines. That's more than enough for any self-hosted setup.
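The snapshot dance, spelled out (dataset name is an example):

```shell
zfs snapshot tank/pgdata@pre-upgrade        # before touching anything
# ... do the upgrade; if it goes wrong:
zfs rollback tank/pgdata@pre-upgrade
# or fish out a single file without rolling back:
cp /tank/pgdata/.zfs/snapshot/pre-upgrade/postgresql.conf /tmp/
```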

We use Arch because we try the latest "toys" on those. If you self-host, take an LTS distribution and you'll be fine.

153. diggan ◴[] No.43546627{3}[source]
> pitched as a selfhosting platform for medical and other highly regulated industries

From first hearing about Sandstorm since the first open beta 10 years ago (https://news.ycombinator.com/item?id=10147774) and reading about it on/off since then, this is first time I hear anyone pitching it for "medical and other highly regulated industries". Where exactly does this come from?

> There's still nothing else quite like it

Plenty of other similar self-hosted platforms exist; YunoHost is probably the closest, most mature, and most feature-packed alternative to Sandstorm, at least as far as I know.

replies(1): >>43546784 #
154. espdev ◴[] No.43546704[source]
Funny how the author doesn't give a single link in the post. The reader has to go searching, spend time to find the things the author writes about. Well, a simple example: Awesome-Selfhosted. Is it that hard to give a link? Is it some kind of phobia or religion that doesn't allow direct links on the internet? Really? Come on, it's hypertext! Where are the hyperlinks?
replies(1): >>43546840 #
155. palata ◴[] No.43546716{5}[source]
This is very interesting! Have you considered writing a blog post explaining that kind of setup? I would love that! In the meantime, thanks a lot for the insights, that's a good starting point!

> I'm not a fan of DMZ as they get messy as you then have to ensure your host is protected correctly.

Could you elaborate on that? Specifically in my case I would have a perimeter router to which I would connect both my server and the inner router. My LAN would stay behind the inner router, so my understanding is that it still strictly has the same security as when my inner router was connected to the ISP; I just add a layer with the perimeter router.

Then the perimeter router opens the server (probably just chosen ports) to the public Internet, so that the server is reachable.

Wouldn't that mean that my host is protected correctly?

replies(1): >>43551315 #
156. FloatArtifact ◴[] No.43546756{3}[source]
Yes, I'm considering that if I can't find a solution that is plug-and-play for containers, independent of the OS and filesystem. Although I don't mind something abstracting on top of ZFS, ZFS's mental overhead through the snapshot paradigm can lead to its own complexities. A traditional backup-and-restore front end would be great.

I find it strange that Docker, which already knows your volumes, app data, and config, can't automatically back up and restore databases and configs. Jeez, they could have built it right into Docker.

157. spencerflem ◴[] No.43546784{4}[source]
There's nothing else like its security model - YunoHost has a similar user-facing experience. (Better IMO).

I might have overstated the medical field- but they did pitch it as a product for enterprises with security requirements: "Sandstorm’s users included (and may still include – there’s no way for us to tell) companies, newspapers, educational institutions, research laboratories, and even government agencies. " (https://sandstorm.io/news/2024-01-14-move-to-sandstorm-org)

158. TobTobXX ◴[] No.43546840[source]
The headers are hyperlinks.
replies(1): >>43547012 #
159. palata ◴[] No.43546857{4}[source]
I haven't seen such a guide, unfortunately :-).

I consider hosting a system or service trivial ("just run the service and open its port to the public Internet"). Then the first question is: what if the service gets compromised (that seems like the most likely attack vector, right?)? Probably it should be sandboxed. Maybe in a container (not running as root inside the container, because I understand it makes it a lot easier to escape), better if it is in a VM (using Xen maybe?). What about jails?

Now say the services are running in VMs, and the "VM manager" (I don't know what to call it; I mean e.g. dom0 for Xen) is only accessible from my own IP (ideally over a VPN if it's running on a VPS, or just through the LAN if running at home?), the next question is: what happens if one of the services gets compromised? I assume the attacker can then compromise the VM, so now what are the risks for me? I probably should never ssh in as a user and then log in as root from there, because if it's compromised the attacker can probably read my password? Say I only ever log in through ssh, either as root directly or as the user (but never promoting myself to root from the user); what could be vectors that would allow an attacker to compromise my host machine?

I listened to a lot of "Darknet Diaries" episodes, and the pentesters always say "I got in, and then moved laterally". So I'm super scared about that: if I run a service exposed to the Internet, I assume it may get compromised someday (though I'll do my best to protect it and keep it up-to-date). But then when it gets compromised, how can I prevent those "lateral moves"? I have no idea, as in "I don't know what I don't know".

All that to say, I would love to find a book or blog posts that explain those things. Tutorials I see usually teach how to run a service in docker and don't really talk about security.

160. rullopat ◴[] No.43546936{3}[source]
Great! Are you also deploying DB servers or any other kind of additional servers that are dependencies of those webapps?
replies(1): >>43553193 #
161. thebiglebrewski ◴[] No.43546941[source]
Aw yes Dokku is still great in 2025! Hear hear!
162. notpushkin ◴[] No.43546999{3}[source]
Thank you so much!
163. espdev ◴[] No.43547012{3}[source]
Yes, the links are now in the headers. The author has updated the post.
164. nunez ◴[] No.43547094[source]
rclone is great for this.

One could set up a Docker Compose service that uses rclone to gzip and back up your docker volumes to something durable to get this done. An even more advanced version of this would automate testing the backups by restoring them into a clean environment and running some tests with BATS or whatever testing framework you want.
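A sketch of that (the rclone remote name and bucket are assumptions; stop the container first if the data isn't crash-consistent):

```shell
#!/bin/sh
# tar+gzip a named docker volume, then push it with rclone
set -eu
VOL=myapp-data
OUT="$VOL-$(date +%F).tar.gz"
docker run --rm -v "$VOL":/data:ro -v "$PWD":/out alpine \
  tar czf "/out/$OUT" -C /data .
rclone copy "$OUT" b2:my-backups/volumes/
```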

replies(1): >>43547276 #
165. nijave ◴[] No.43547135[source]
Why not zfs snapshots? Besides using Hyper-V machine snapshots, that's been the easiest way, by far, for me. No need to worry about the 20 different proprietary tools that go with each piece of software.

Each VM or container gets a data mount on a zvol. Containers go to OS mount and each OS has its own volume (so most VMs end up with 2 volumes attached)

replies(1): >>43573855 #
166. zrail ◴[] No.43547250{3}[source]
PostgreSQL defaults (last I looked, it's been a few years) are/were set up for spinning storage and very little memory. They absolutely work for tiny things like what self-hosting usually implies, but for production workloads tuning the db parameters to match your hardware is essential.
replies(1): >>43547339 #
167. nijave ◴[] No.43547276{3}[source]
Rclone won't take a consistent snapshot so you either need to shutdown the thing or use some other tool to export the data first
replies(1): >>43549442 #
168. nijave ◴[] No.43547318{3}[source]
There's only a handful of web apps packaged in the OS repo. Even wildly popular software like WordPress and Drupal you need to use their built in facilities or manually apply outside the OS update manager
169. nijave ◴[] No.43547339{4}[source]
Correct, they're designed for maximum compatibility. Postgres doesn't even do basic adjustments out of the box and defaults are designed to work on tiny machines.

IIRC the default shared_buffers is 128MB, and common advice is to raise it to around 25% of system RAM (with effective_cache_size at 50-75%).
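For a small dedicated box that might translate to something like (values are illustrative starting points, not gospel):

```
# postgresql.conf, assuming ~8GB RAM and an SSD
shared_buffers = 2GB             # ~25% of RAM
effective_cache_size = 6GB       # ~50-75% of RAM
maintenance_work_mem = 256MB
random_page_cost = 1.1           # SSD, not spinning rust
```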

170. nijave ◴[] No.43547389{4}[source]
If a filesystem backup isn't consistent, the app isn't using sync correctly and needs a bug report. No amount of magic can work around an app that wants to corrupt data.

For most apps, the answer is usually "use a database" that correctly saves data.

171. nijave ◴[] No.43547434[source]
https://pgtune.leopard.in.ua/ is a pretty good start. There's a couple other web apps I've seen that do something similar.

Not sure on "easy" backups besides just running pg_dump on a cron, but it's not very space efficient (each backup is a full backup; there's no incremental).
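The cron version, for reference (note that `%` must be escaped as `\%` inside a crontab):

```shell
# crontab: nightly full dump at 03:00, keep two weeks
0 3 * * * pg_dump -Fc -f /backups/app-$(date +\%F).dump app && find /backups -name '*.dump' -mtime +14 -delete
```

For incremental/point-in-time backups you'd want WAL archiving instead (e.g. via a tool like wal-g or pgBackRest).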

172. nijave ◴[] No.43547472{3}[source]
Inconsistent how? Postgres can recover from a crash or loss of power, which is more-or-less the same as a filesystem snapshot.
replies(1): >>43551781 #
173. nijave ◴[] No.43547488{3}[source]
RTO: best effort. RPO: wait, you guys have backups and test them??
174. tvaughan ◴[] No.43547513[source]
For our startup, I decided we’d self-host as much as possible. We have one server in our office with Traefik for “firewall” and SSL termination, and our backend services are handled by Incus. Our only paid services are Tailscale and Postmark. We run things like forgejo and mattermost for our internal development work, as well as things like elasticsearch and kibana for observability into our products (100ish embedded devices distributed globally). So far so good.
175. nijave ◴[] No.43547521[source]
I'm glad I'm not configuring php-fpm and compiling PHP add-ons from source. Someone else can shove that in a tarball (Docker image) for me
replies(1): >>43562146 #
176. raphinou ◴[] No.43547536{4}[source]
I am running one-node swarms, so everything I deploy is running on the same node. But from my understanding you can apply labels to the nodes, and limit the placement of containers. See here for an example (I am not affiliated to this site): https://www.sweharris.org/post/2017-07-30-docker-placement/
177. nijave ◴[] No.43547555{3}[source]
Containers are fine. Run them on a Linux host to save yourself some headaches
178. StrLght ◴[] No.43547576{3}[source]
I am very interested. I tried to migrate to Swarm, got annoyed at incompatibility with tons of small Docker Compose things, and decided against that. I'd love to read about your setup.
179. sceptic123 ◴[] No.43547601{3}[source]
Why is the snark here downvoted, but not the original message?
replies(1): >>43549651 #
180. 72deluxe ◴[] No.43547603[source]
This seems overkill. You only need a Pi. I have a Pi4 fanless running:

1. lighttpd exposing a website, using letsencrypt and a cron job to run certbot and restart lighttpd.

2. mox (https://www.xmox.nl) to run a mail server, with PTR records set up by my ISP. I am not with a CG-NAT ISP else none of this would be possible. mox makes it easy enough to set up DMARC and SPF etc. with appropriate output given that you can add to your DNS records.

3. I grab the list of IPs from https://github.com/herrbischoff/country-ip-blocks and add them to an iptables list (using ipset) every week so that I can block certain countries that have no legitimate reason to be connecting, with iptables just dropping the connection. I think I also use https://github.com/jhassine/server-ip-addresses to drop certain ranges from cloud servers to make annoying script kiddies go away.

4. peer-calls (https://github.com/peer-calls/peer-calls/) to be able to video call with my family and friends (with a small STUN server running locally for NAT traversal as I recall).

5. linx (https://github.com/andreimarcu/linx-server) to share single links to files (you can get an Android app to upload from your phone)

6. filebrowser for sharing blocks of files for users (https://github.com/filebrowser/filebrowser).

7. pihole runs on it so blocks adverts.

8. Wireguard runs on the Pi and I open the VPN ports on my router. I use the VPN on my phone so adverts are blocked when I am out and about (traffic gets routed through the Pi).

9. navidrome runs on it and I use subtracks on Android to stream (or just download albums for when I have spotty connection).

10. mpd runs on the Pi and it plays music to some speakers in the house, so I can control it with M.A.L.P on Android.

11. I use goaccess (https://goaccess.io) to look at my server logs and see what is hitting me.

12. I use maxmind geoip data so I know which countries are hitting me.

13. minidlna runs on the Pi so I can stream films to my TV.

14. I run CUPS on it too so that my rubbish wireless Samsung printer can be printed to from Android and my wife's Apple devices without having to buy an AirPrint-compatible printer.

15. xrdp running so I can log into a visual desktop on the Pi if required.

My router doesn't expose SSH ports, just appropriate ports for these web services and the VPN. SSH keys are useful. SSH is not open to the world anyway and you have to VPN into the network first.

This all sits happily and quietly in a cupboard and uses a feeble amount of power.
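
Step 3's country blocking can be sketched roughly like this (the country code, set name, and checkout path are assumptions; needs root, and assumes the repo's `ipv4/<cc>.cidr` layout):

```sh
#!/bin/sh
# Build an ipset from a country's CIDR list and drop matching traffic.
# Assumes the country-ip-blocks repo is checked out at /opt/country-ip-blocks.
ipset create blocked-cc hash:net -exist
ipset flush blocked-cc
while read -r cidr; do
    ipset add blocked-cc "$cidr" -exist
done < /opt/country-ip-blocks/ipv4/cn.cidr
# Drop anything in the set; the -C check avoids duplicate rules on re-runs
iptables -C INPUT -m set --match-set blocked-cc src -j DROP 2>/dev/null \
    || iptables -I INPUT -m set --match-set blocked-cc src -j DROP
```

Adding entries one by one is slow for large lists; `ipset restore` from a prepared file is much faster if the weekly refresh starts taking too long.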

181. 0xEF ◴[] No.43547666{3}[source]
It's always scary, no matter how innocuous. I'm glad it did not escalate into something else for you!

Without getting too deep into it, there are some things I know how to do with computers that I probably shouldn't, so my thought is this: if I, a random idiot who just happened to learn a few things, can do X, then someone smarter than me who learned how to attack a target in an organized way probably has methods that I cannot even conceive of, can do it more easily, and possibly without me even knowing. It's this weird vacillation between paranoia and prudence.

For me, it's really about acknowledging what I know I don't know. I do some free courses, muck about with security puzzles, etc, even try my own experiments on my own machines, but the more I learn, the more I realize I don't know. I suppose that's the draw! The problem is when you learn these things in an unstructured way, it's hard to piece it all together and feel certain that you have covered all your vulnerable spots.

182. 0xEF ◴[] No.43547817{4}[source]
I'm not the person you asked, but if some security researcher such as yourself needs a million-dollar service to sell, I'll offer that I would pay decent money for a webapp or something where I can list all the things in my stack or project and it spits out a list of known and possible vulnerabilities that I should check default configs for, update, patch, etc.

My thinking is this; if I'm willing to fork over dollars to a VPS hosting service for peace-of-mind, then paying for a service that helps me understand what I'm doing when it comes to self-hosting should also be on the table as an alternative.

That said, I have no idea how viable of a business model that would be, or if it would even be able to be developed and upkept with reliable info. Or, maybe it already exists, but on an enterprise level that I cannot afford for some dumb little blogs.

replies(1): >>43551860 #
183. sceptic123 ◴[] No.43547852{3}[source]
What's the 5% that's not blocking ports for services you want to expose?

Ensuring your infra is built in a secure way is as important as ensuring your service is built in a secure way.

replies(1): >>43551171 #
184. ryandrake ◴[] No.43548554{3}[source]
Same here. I've had the same setup for decades: A "homelab" server on my LAN for internal hobby projects and a $5 VPS for anything that requires public access. On either of these, I just install the software I need through the OS's package manager. If I need a web server, I install it. If I need ssh, I install it. If I need nfs, I install it. I've never seen any reason to jump into containers or orchestration or any of that complex infrastructure. I know there are a lot of people very excited about adding all of that stuff into the mix, but I've never had a use case that prompted me to even consider it!
replies(1): >>43556121 #
185. martin_a ◴[] No.43548609{3}[source]
I wrote a deploy.sh to help me with the parameters, almost feels like automatic deployment... ;-)
186. jauntywundrkind ◴[] No.43548737[source]
K3s installs super fast.

Writing your first yaml or two is scary & seems intimidating at first.

But after that, everything is cut from the same cloth. It's an escape from the long dark age of every sysadmin forever cooking up whatever whimsy served them at the time, an escape from each service having very different management practices around it.

And there's no other community anywhere like Kubernetes. Unbelievably many very good quality very smart helm charts out there, such as https://github.com/bitnami/charts/tree/main/bitnami just ready to go. Really sweet home-ops setups like https://github.com/onedr0p/home-ops that show that once you have a platform under foot, adding more services is really easy, showing an amazing range of home-ops things you might be interested in.

> Last thing I need is Kubernetes at home

Last thing we need is this incredibly shitty attitude. Fuck around and find out is the hacker spirit. It's actually not hard if you try, and having a base platform where things follow common patterns & practices & where you can reuse existing skills & services is kind of great. Everything is amazing, but the snivelling whining without even making the tiniest case for your unkind low-effort hating will surely continue. Low-signal people will remain low signal; best avoided.

187. opsdisk ◴[] No.43549257{3}[source]
Would love a blog post on how you're using Docker Swarm.
188. kiney ◴[] No.43549367{3}[source]
Bugs in its infancy are what killed Swarm for users.
189. auxym ◴[] No.43549393[source]
It seems you're talking about about self-hosting a website or web-app that you are developing for the public to use.

My vision of self-hosting is basically the opposite. I only self-host existing apps and services for my and my family's use. I have a TrueNAS box with a few disks, run Jellyfin for music and shows, run a Nextcloud instance, a restic REST server for backing up our devices, etc. I feel like the OP is more targeted this type of "self hosting".

190. fm2606 ◴[] No.43549426{4}[source]
Correct, there is no public IP address exposed to my home.

Right now my "servers" are Dell micro i5s. I've used RPi 3s and 4s in the past. My initial foray into self-hosting was with actual servers. Too hot, too noisy, and too expensive to run continuously for my needs, but I did learn a lot. I still do, even with the micros and Pis.

replies(1): >>43582771 #
191. auxym ◴[] No.43549442{4}[source]
zfs/btrfs snapshot and then rclone that snapshot?
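
A sketch of that approach (dataset and remote names are assumptions): take a ZFS snapshot, then rclone from the snapshot's hidden `.zfs` directory so the upload sees a frozen, crash-consistent view:

```sh
#!/bin/sh
# Snapshot, sync the frozen view, then clean up.
SNAP="backup-$(date +%F)"
zfs snapshot "tank/appdata@${SNAP}"
# ZFS exposes read-only snapshots under .zfs/snapshot/<name>
rclone sync "/tank/appdata/.zfs/snapshot/${SNAP}" "remote:appdata-backup"
zfs destroy "tank/appdata@${SNAP}"
```

This fixes consistency, though rclone still mirrors a single current state on the remote rather than keeping incremental history.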
replies(1): >>43551993 #
192. MortyWaves ◴[] No.43549651{4}[source]
Doesn’t seem downvoted to me
replies(1): >>43567601 #
193. ndsipa_pomu ◴[] No.43549714{3}[source]
> You shouldn't be running multiple instances of Postgresql, or anything for that matter, at home.

It's not uncommon with self-hosting services using docker. It makes it easier to try out a new stack and you can mix and match versions of postgresql according to the needs of the software. It's also easier to remove if you decide you don't like the bit of software that you're trying out.

194. ndsipa_pomu ◴[] No.43549757{3}[source]
Yep. I run a small swarm at work and have a 5-node RPi-4 swarm at home. Interested in why you'd run a single-node swarm instead of stand-alone docker.
195. simoncion ◴[] No.43549955{4}[source]
Ye olde autodefect makes fools of us all, eventually.

(As does the ever-present HN urge to downvote things for no comprehensible reason. Seriously, why the fuck would someone downvote an honest explanation for a typo? Animals, I guess.)

196. gaeb69 ◴[] No.43550325[source]
Nice blog! following.
197. arevno ◴[] No.43550354{4}[source]
We've been running production traffic via Cloudflare Tunnels for over a year with no problems. Ngrok and tailscale both run similar services, too.
replies(1): >>43550735 #
198. brulard ◴[] No.43550393{4}[source]
I don't know. I ran into problems building some packages for Node and building other tools with Cargo (I think) with 4 GB RAM. And if you're going to experiment with containers, etc., I would definitely recommend starting at 8GB at least, for peace of mind. It doesn't cost a kidney nowadays.
199. bitsandboots ◴[] No.43550543[source]
I clicked expecting a list of cool things to self host. Instead I got a list of ways I would never want to host. Mankind invented BSD jails so that I do not have to tie myself in a knot of container tooling and abstraction.
replies(1): >>43550654 #
200. bitsandboots ◴[] No.43550576[source]
Not just stable - also easy to understand when, if ever, something goes wrong. There's very little magic, very few layers of complexity.
201. Gud ◴[] No.43550654{3}[source]
Indeed. I run a setup as you mentioned, with the various daemons in their own jail. Super simple set up, easy to maintain.

Lord knows why people overcomplicate things with docker/kubernetes/etc.

202. ghoshbishakh ◴[] No.43550735{5}[source]
There is also https://pinggy.io which is even simpler to use. Just paste one command like ssh -p 443 -R0:localhost:8000 qr@a.pinggy.io
203. Gud ◴[] No.43550759[source]
Same. Why add the complexity of docker/kubernetes/etc.?

FreeBSD and jails is so easy to maintain its unbelievable.

204. crivlaldo ◴[] No.43550786[source]
This article motivated me to upgrade my hosting approach.

I've been running a DigitalOcean VPS for years hosting my personal projects. These include a static website, n8n workflows, and Umami analytics. I used manual Docker container management, Nginx, and manual Let's Encrypt certificate renewals. I was too lazy even to set up certbot.

I've migrated to a Portainer + Caddy setup. Now I have a UI for container management and automatic SSL certificate handling. It took about two hours.

Thanks for bringing me to 2025!

205. johnmw ◴[] No.43550883[source]
I recently came across another new one that looks really nice - Canine [0].

I haven't tried it myself yet. Has anybody else given it a spin?

[0]: https://canine.sh/

206. majewsky ◴[] No.43551171{4}[source]
Part of it is that you may get (D)DoSed and then your ISP may be any amount of pissed at you for taking on significant ingress traffic on a residential network.
207. doublerabbit ◴[] No.43551315{6}[source]
That sounds pretty reasonable.

While home routers tend to default to allowing outbound and denying inbound, my DC just provides me with a network cable to the big pond of data.

How I secure that for my home network is by using my personal rig with multiple network ports.

One port acts as a public bridge, and the 3rd and 4th network ports are assigned to the private bridges.

The 2nd port then sits in a middle bridge, where it communicates with both the public and private bridges.

208. pedantsamaritan ◴[] No.43551781{4}[source]
Getting my backup infrastructure to behave the way I'd want with filesystem snapshots (e.g. zfs or btrfs snapshots) was not trivial. (I think the hurdle was my particularity about the path prefix that was getting backed up.) Write-once pg_dumps could still have race conditions, but considerably fewer.

So if you're using filesystem snapshots as the source of database backups, then I agree, you _should_ be good. The regular pg_dumps are a workaround for the other cases for me.

209. nz ◴[] No.43551842{4}[source]
Entire companies have been built around synchronizing the WAL with ZFS actions like snapshot and clone (i.e. Delphix and probably others). Would be cool to have `zpgdump` (single-purpose, ZFS aware equivalent).
210. Aachen ◴[] No.43551860{5}[source]
The CVE database is free. Or maybe NVD is the one publishing the mapping of CVEs to software packages and versions, but either way, a site like CVEDetails will give you this information. I'm less sure where you could subscribe to alerts for all the software thingies you run (maybe CVEDetails already has that).
211. seba_dos1 ◴[] No.43551876{4}[source]
> do I risk a 3rd party back port repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?

In these rare cases I usually just compile a newer deb package myself and let the package manager deal with it as usual. If there are too many dependencies to update or it's unusually complex, then it's container time indeed - but I didn't have to go there on my server so far.

Not diverging from the distro packages lets me not worry about security updates; Debian handles that for me.

212. nijave ◴[] No.43551993{5}[source]
I think that'd break deleting incremental snapshots unless you tried uploading a gigantic blob of the entire filesystem, wouldn't it?

Meaning you'd need to upload full snapshots on a fixed interval

213. shepherdjerred ◴[] No.43552457[source]
I'm very happy with Kubernetes at home. Everything just works at this point, though it did take a fair bit of fiddling at first.

I think it's a great way to learn Kubernetes if you're interested in that.

214. Karrot_Kream ◴[] No.43552531[source]
This is a good concern to have. I feel like the emotional currency around self-hosting on tech forums makes too many people excited to talk about self-hosting and forget about practical things like security. Remember: defense in layers.

Things I do:

* Make sure domain WHOIS does not point to me in any way, even if that means using some silly product like "WHOIS GUARD"

* Lock down any and all SSH access. Preferably only allow key-based authentication.

* Secure the communication substrate. For me this means running a Zerotier network which all dependent services listen on. I also try to use Unix sockets for any services colocated on the same operating system and restrict the service to only listen on sockets in a directory specifically accessible by the service.

* Try to control the permission surface of any service as much as possible. Containers can be a bit heavyweight for self-hosting but make this easy. There's alternatively like bubblewrap and firejail as well.

* Make use of services like fail2ban which can automate some of the hunting of bad actors for you.

* Consider hosting a listener for external traffic outside of your infra. For redundancy, load-shedding, and for security I have an external VPS that runs haproxy before routing over Zerotier to my home infrastructure. I enforce rate limits and fail2ban at the VPS so that bad actors get stopped upstream and use none of my home compute or bandwidth. (I also am setting up some redundant caches that live on the VPS so if my home network is down, one of my services can failover.)

* Segregate data into separate databases and make sure services only have access to databases that they need. With Postgres this is really simple with virtual databases being tied to different logins. I have some services that prune databases that run in a cron-like way (but using snooze instead) and they have no outbound net access.

If your network layer is secure and your services follow least-privilege, then you should be fairly in the clear.

replies(1): >>43552878 #
215. Karrot_Kream ◴[] No.43552562{4}[source]
Why not just have nginx listen on the Wireguard interface itself? That way you drop all traffic coming inbound from sources not on your Wireguard network and you don't even have to send packets in response nor let external actors know you have a listener on that port.
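
Concretely (addresses and names below are assumptions), that means binding nginx to the host's WireGuard address instead of 0.0.0.0:

```nginx
server {
    # 10.8.0.1 is this host's address on the wg0 interface;
    # traffic arriving on the public interface never reaches this listener.
    listen 10.8.0.1:443 ssl;
    server_name internal.example.home;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```
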
216. 3abiton ◴[] No.43552878{3}[source]
Beside fail2ban, I also recommend endlessh. Simple yet beautiful piece of software.
217. pentagrama ◴[] No.43552947[source]
As a non-developer, I find it difficult to step into the self-hosting world. What I recommend for people like me is the service PikaPods [1], which takes care of the hard part of self-hosting.

I have now switched from some SaaS products to self-hosted alternatives:

- Feed reader: Feedly to FreshRSS.

- Photo management/repository: Google Photos to Immich (I think).

- Bookmarks: Mozilla's Pocket to Hoarder.

And so far, my experience has been simple and awesome!

[1] https://www.pikapods.com

218. echoperkins ◴[] No.43553193{4}[source]
Yes, I am deploying mysql with ghost

```
accessories:
  db:
    image: mysql:8.0
    host: 170.64.156.161
    env:
      secret:
        - MYSQL_ROOT_PASSWORD
    options:
      restart: always
    directories:
      - data:/var/lib/mysql
```

For my other services I am just using sqlite combined with a volume for persistence (managed by kamal)

219. raxxorraxor ◴[] No.43555038[source]
I think a normal patched Debian/Ubuntu with ufw rules for ports 80/443 and 22, SSH with key-based auth only, and a simple nginx configuration is still very safe.

Of course there can be security issues on your webserver as well, but for a simple site this setup is learnable in an hour or two and you are ready to go.

You can hook that up on a Pi attached to your router, or pay a bit to have it hosted somewhere. A domain is perhaps 2-5$ and a TLS cert you can get from Let's Encrypt.

No idea how putting everything into a container would make sense here. I just run this quite often on small hosted machines elsewhere. I install everything manually because it takes 5 minutes if you have done it before.
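
The whole setup described above is a handful of commands (a sketch, run as root; the sshd_config.d drop-in assumes a reasonably modern OpenSSH, and the service name `ssh` matches Debian/Ubuntu):

```sh
# Firewall: default-deny inbound, allow SSH + web
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

# SSH: key-based auth only
printf 'PasswordAuthentication no\nPermitRootLogin prohibit-password\n' \
    > /etc/ssh/sshd_config.d/hardening.conf
systemctl reload ssh

# TLS via Let's Encrypt (certbot's nginx plugin edits the server block for you)
certbot --nginx -d example.com
```
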

220. raxxorraxor ◴[] No.43555064{4}[source]
But you don't have as many security issues as well :)
replies(1): >>43555590 #
221. ◴[] No.43555405[source]
222. sujaldev ◴[] No.43555511[source]
There's caprover too: https://caprover.com/
223. diggan ◴[] No.43555590{5}[source]
This is why I never leave my house too :)
224. Youden ◴[] No.43555894{3}[source]
Just a single host. The main thing that I couldn't figure out is how to turn off "bootstrap" mode, so I've just left it on.
225. brulard ◴[] No.43556005{4}[source]
Is this a joke? I don't know much about Kubernetes, but I've heard from devops people it's quite helpful for bigger scale infrastructures.
replies(1): >>43558799 #
226. brulard ◴[] No.43556121{4}[source]
I have a very similar setup, with a homelab and a separate cheap VPS. In a similar manner, I have all the services installed directly on the OS, but I'm starting to run into issues that make me consider Docker. I run nginx with multiple (~10) Node apps running through PM2. While this works OK-ish, I'm not happy that if, for example, one of my apps needs some system package, I have to install it for the whole server (ffmpeg, ImageMagick, etc.). Another problem is that I can easily run into compatibility issues if I upgrade Node.js, for example. And if there were a vulnerability in some node package any of the projects use, the whole server would be compromised. I think Docker can be quite an easy solution to most of these problems.
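
A sketch of what such a migration could look like, with each app pinned to its own Node version and system packages (all names, versions, and ports here are assumptions):

```yaml
# docker-compose.yml -- each app gets its own runtime and system deps
services:
  app-a:
    image: node:20-slim
    working_dir: /app
    volumes: ["./app-a:/app"]
    command: ["node", "server.js"]
    ports: ["127.0.0.1:3000:3000"]
  app-b:
    # this one needs ffmpeg, so it builds its own image
    # (Dockerfile: FROM node:18-slim + apt-get install ffmpeg);
    # nothing is installed on the host or shared with app-a
    build: ./app-b
    ports: ["127.0.0.1:3001:3000"]
```

nginx on the host keeps proxying to the published localhost ports, so a PM2 setup like this can be migrated one app at a time.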
227. loughnane ◴[] No.43556435[source]
Since my setup is for personal use I just use a VPN. My home router is running OPNsense and this setup wasn't too bad. I also pay my ISP for a static IP address.

https://docs.opnsense.org/manual/how-tos/wireguard-client.ht...

Then on my phone I just flick on the switch and can access all my home services. It's a smidge less convenient, but feels nice and secure.

228. brulard ◴[] No.43556858{3}[source]
I tried the demo, but I still don't get it. What is it? Is it supposed to be a collection of productivity apps, like Google Docs/Drive? The demo worked poorly: I waited a very long time for many interactions, and some didn't work at all (like changing the font in Etherpad).
replies(1): >>43566813 #
229. ohgr ◴[] No.43558799{5}[source]
Unfortunately no it's not a joke. It's really fine for big infrastructure companies, think Google etc. But a lot of people will design complicated shit due to architectural ignorance or to make their resume look good and this will result in Kubernetes looking like a good idea to run the resulting large amounts of complicated shit. Then you realise it's complicated and lots of people need to look after it which escalates the problem.

After a few years you work out that holy shit we now have 15 people looking after everything instead of the previous 4 people and pods are getting a few hits an hour. Every HTTP request ends up costing $100 and then you wonder why the fuck your company is totally screwed financially.

But all the people who designed it have left for consultancy jobs with Kubernetes on their resume and now you've got an army of people left to juggle the YAML while the CEO hammers his fist on the table saying CUT COSTS. Well you hired those feckin plums!

etc etc.

Lots of them are on here. People have no idea how to solve problems any more, just create new ones out of the old ones.

230. BrandoElFollito ◴[] No.43559147[source]
You will love Dockge then.

I run it alongside Portainer precisely because of the compose.yaml file, which I want to keep control over

231. megous ◴[] No.43562146{3}[source]
I'm glad I don't have to deal with Docker's bloat and networking to avoid writing an 8-line php-fpm config file. :)
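
For reference, a minimal pool config really is about that size (paths, pool name, and PHP version are assumptions):

```ini
; /etc/php/8.3/fpm/pool.d/mysite.conf
[mysite]
user = www-data
group = www-data
listen = /run/php/mysite.sock
listen.owner = www-data
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```
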
232. crabmusket ◴[] No.43566813{4}[source]
I agree the current experience is sonewhat unpolished. The docs explain the vision best: https://sandstorm.org/how-it-works
233. sceptic123 ◴[] No.43567601{5}[source]
Enough people enjoyed it to upvote it again, I guess. Weird that I got a downvote here too.
234. FloatArtifact ◴[] No.43573855{3}[source]
Well, one argument not to use ZFS is simply the resources it takes: it eats up a lot of RAM. Also, I'm under the impression that one should never live-snapshot a database without risking corruption.
235. cenamus ◴[] No.43582771{5}[source]
What do you use for your remote server? Because even a VPS seems kind of overkill if all it's doing is some redirecting. I guess you could do TLS termination there as well...