221 points finnlab | 40 comments
1. bauerd ◴[] No.43545430[source]
Last thing I need is Kubernetes at home
replies(11): >>43545448 #>>43545480 #>>43545584 #>>43545606 #>>43545610 #>>43545648 #>>43545656 #>>43545661 #>>43548737 #>>43550543 #>>43552457 #
2. nodesocket ◴[] No.43545448[source]
I run Kubernetes on my homelab on 4x Raspberry Pis, then use Portainer to manage it. It works quite well: I can use Helm charts when available; otherwise I deploy container apps manually through Portainer. It's really not that bad.
replies(1): >>43545614 #
3. import ◴[] No.43545480[source]
Exactly. I am hosting 30+ services using docker compose and very happy. I don’t want to troubleshoot k8s in the early morning because home assistant is down and light dimmers are not working for some random k8s reason.
replies(3): >>43545612 #>>43545699 #>>43546061 #
4. dailykoder ◴[] No.43545584[source]
I guess it can be comfortable for some people.

But I just wanted to comment something similar. It probably depends heavily on how many services you self-host, but I have 6 services on my VPS and they are just simple podman containers that I run, some of them automatically, some of them manually. On top of that sits a very simple nginx configuration (mostly just subdomains with reverse proxy), and that's it. I don't think I need an extra container for my nginx (or is there a very good security reason? I have "nothing to hide" and/or lose, but still). My lazy brain thinks that as long as I keep nginx up to date with my package manager and keep my certbot running, I'll be fine.
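
For reference, one of those subdomain reverse-proxy blocks can be as small as this sketch; the subdomain, upstream port, and paths are made up for illustration:

```nginx
# app.example.com is a placeholder subdomain; certificates are certbot-managed
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the podman container, published on localhost
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```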

5. mrweasel ◴[] No.43545606[source]
It is still my opinion that most businesses do not need Kubernetes, nor does anyone self-hosting a service at home.

I can see running something in a Docker container, and while I'd advise against containers that ship with EVERYTHING, I'd also advise against using Docker Compose to spin up an ungodly number of containers for every service.

You shouldn't be running multiple instances of PostgreSQL, or anything for that matter, at home. Find a service that can be installed using your operating system's package manager and set everything to auto-update.

Whatever you're self-hosting, be it for yourself, family or a few friends, there's absolutely nothing wrong with SQLite, files on disk or using the passwd file as your authentication backend.

If you are self hosting Kubernetes to learn Kubernetes, then by all means go ahead and do so. For actual use, stay away from anything more complex than a Unix around the year 2000.

replies(5): >>43545625 #>>43545652 #>>43546042 #>>43546178 #>>43549714 #
7. johnisgood ◴[] No.43545612[source]
I am quite happy even without Docker, but I can see the appeal in some cases.
8. dailykoder ◴[] No.43545614[source]
I'm sure you find some joy in it or just like to explore what's possible. But just a reminder: Pieter Levels is running his million dollar businesses on a single VPS (if that's still correct). But yeah, if you like it, why not.

"Premature clustering is the source of all evil" - or something like that.

10. MortyWaves ◴[] No.43545648[source]
There is a fairly well known/popular blogger whose blog I was following because of their self-hosting/homelab/Nix adventures.

Then they decided to port everything to K8 because of overblown internet drama and I lost all interest. Total shame that a great resource for Nix became yet another K8 fest.

11. dailykoder ◴[] No.43545652[source]
I shared this sentiment. But since I just host some personal fun projects and I've gotten really lazy when it comes to self-hosting, I found great pleasure in creating the simplest possible Docker containers. It keeps the system super clean and easy to wipe and set up again. My databases are usually just mounted volumes which reside on the host system.
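
One of those "simplest possible" containers might be run like this; the service name, image, and paths below are hypothetical:

```shell
# Hypothetical service: state lives in a host directory (the mounted
# volume), so the container itself stays disposable.
podman run -d --name myapp \
  -p 127.0.0.1:8080:8080 \
  -v /srv/myapp/data:/var/lib/myapp \
  --restart unless-stopped \
  docker.io/example/myapp:1.2.3
```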
12. seba_dos1 ◴[] No.43545656[source]
"apt-get install" tends to be enough once you stop chasing latest-and-greatest and start to appreciate things just running with low maintenance more.
replies(3): >>43546157 #>>43546335 #>>43548554 #
13. raphinou ◴[] No.43545661[source]
Exactly, my first reaction was "I should write a blog post about why I still use Docker Swarm". I deploy to single-node swarms, and it's a zero-boilerplate solution. I had to migrate services to another server recently, and it was really painless. Why oh why doesn't Docker Swarm get more love (from its owners/maintainers and users)?

Edit: anyone actually interested in such a post?

replies(9): >>43545769 #>>43545842 #>>43546429 #>>43546455 #>>43546541 #>>43547576 #>>43549257 #>>43549367 #>>43549757 #
14. seba_dos1 ◴[] No.43545699[source]
All my "smart home" stuff needs is mosquitto on an OpenWrt router and a bunch of cgi-bin scripts that can run anywhere. I already went through a phase of setting up tons of services, which ended up being turned off when something changed in my life (moving, replacing equipment, etc.) and never resurrected, as I couldn't be bothered to redo it all without the novelty effect. I learned from that.
15. mfashby ◴[] No.43545769[source]
I moved us off docker swarm to GKE some years back. The multi node swarm was quite unstable, and none of the big cloud providers offered managed swarm in the same way they offer managed k8s.

It's a shame I agree because it was nicely integrated with dockers own tooling. Plus I wouldn't have had to learn about k8s :)

16. galbar ◴[] No.43545842[source]
I just want to add that I also have a Docker Swarm running, with four small nodes for my personal stuff plus a couple of friends' companies.

No issues whatsoever and it is so easy to manage. It just works!

17. vbezhenar ◴[] No.43546042[source]
I'd love to use Kubernetes for my self hosting. The only problem is it's too expensive.
replies(1): >>43546185 #
18. vbezhenar ◴[] No.43546061[source]
Since I migrated our company to Kubernetes, I've almost stopped worrying about anything. It just works. I had much more trouble running a spaghetti of Docker containers and host-installed software on multiple servers; that setup broke something like every week or every month. With Kubernetes I just press "update cluster" on some Saturday evening once or twice a year, and that's about it. Pretty smooth sailing.
19. skydhash ◴[] No.43546157[source]
I totally agree! Containers are nice when your installation is ephemeral, deploying and updating several times in a short period. But using the package manager is as easy as it gets.
20. ohgr ◴[] No.43546178[source]
If it wasn't for Kubernetes we'd need 1/3rd of our operations team. We're keeping unemployment down!
replies(1): >>43556005 #
21. k8sToGo ◴[] No.43546185{3}[source]
How is it too expensive? If you want to use the ecosystem you can still use something like k3s.
22. alabastervlog ◴[] No.43546335[source]
I only host 3rd-party daemons (nothing custom) and only on my local network (plus Tailscale), so Docker's great for handling package management and init: I get up-to-date versions of a far broader set of services than Debian's or Ubuntu's repos, clean isolation for easy management, and init/restarts all come for free. Plus it naturally documents what I need to back up (any "mounted" directories).

Docker lets my OS be boring (and lets me basically never touch it) while having up-to-date user-facing software. No "well, this new version fixes a bug that's annoying me, but it's not in Debian stable… do I risk a 3rd-party backport repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?"

I just use shell scripts to launch the services, one script per service. Run the script once, forget about it until I want to upgrade it. Modify the version in the script, take the container down and destroy it (easily automated as part of the scripts, but I haven’t bothered), run the script. Done, forget about it again until next time.

Almost all the commands I run on my server are basic file management, docker stuff, or zfs commands. I could switch distros entirely and hardly even notice. Truly a boring OS.
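
The one-script-per-service pattern described above might look like this sketch; the service name, image, and paths are invented:

```shell
#!/bin/sh
# run-myservice.sh -- one small script per service; everything here is
# a placeholder, not a real image.
VERSION="1.4.2"   # bump this line to upgrade, then re-run the script

docker rm -f myservice 2>/dev/null || true   # destroy the old container
docker run -d \
  --name myservice \
  --restart unless-stopped \
  -p 8080:8080 \
  -v /tank/myservice:/data \
  example/myservice:"$VERSION"
```

The `-v` mounts double as the list of what needs backing up, as noted above.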

replies(1): >>43551876 #
23. gmm1990 ◴[] No.43546429[source]
I'd be interested. Might be a strange question, but I'll throw it out there: I seem to have a hard time finding a good way to define my self-hosted infrastructure nodes and which containers can run on them. Have you run into this, or do you have a solution? For example, I want my database to run on my two beefier machines, but some of the other services could run on the mini PCs.
replies(1): >>43547536 #
24. resiros ◴[] No.43546455[source]
yes, please.
25. quectophoton ◴[] No.43546541[source]
> I deploy to single node swarms, and it's a zero boiler plate solution.

Yup, it's basically like a "Docker Compose Manager" that lets you group containers more easily, since the manifest file format is basically Docker Compose's with just 1-2 tiny differences.

If there's one thing I would like Docker Swarm to have, it's to not have to worry about which node creates a volume; I just want the service to always be deployed with the same volume without having to think about it.

That's the one weakness I see for multi-node stacks, the thing that prevents it from being "Docker Compose but distributed". So that's probably the point where I'd recommend maybe taking a look at Kubernetes.
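
For anyone following along, the single-node flow is just this (the stack name is arbitrary):

```shell
docker swarm init                                 # once; makes this host a one-node swarm
docker stack deploy -c docker-compose.yml mystack # deploy/update from a Compose-style file
docker stack services mystack                     # check service status
```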

26. raphinou ◴[] No.43547536{3}[source]
I am running one-node swarms, so everything I deploy runs on the same node. But from my understanding you can apply labels to the nodes and limit the placement of containers. See here for an example (I am not affiliated with this site): https://www.sweharris.org/post/2017-07-30-docker-placement/
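
A sketch of that approach: label a beefy node once with `docker node update --label-add db=true node1`, then constrain the service in the stack file. The `db` label and service names here are made up:

```yaml
# stack file fragment; the "db" node label is hypothetical
services:
  postgres:
    image: postgres:16
    deploy:
      placement:
        constraints:
          - node.labels.db == true
```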
27. StrLght ◴[] No.43547576[source]
I am very interested. I tried to migrate to Swarm, got annoyed at incompatibility with tons of small Docker Compose things, and decided against that. I'd love to read about your setup.
28. ryandrake ◴[] No.43548554[source]
Same here. I've had the same setup for decades: A "homelab" server on my LAN for internal hobby projects and a $5 VPS for anything that requires public access. On either of these, I just install the software I need through the OS's package manager. If I need a web server, I install it. If I need ssh, I install it. If I need nfs, I install it. I've never seen any reason to jump into containers or orchestration or any of that complex infrastructure. I know there are a lot of people very excited about adding all of that stuff into the mix, but I've never had a use case that prompted me to even consider it!
replies(1): >>43556121 #
29. jauntywundrkind ◴[] No.43548737[source]
K3s installs super fast.

Writing your first yaml or two is scary and seems intimidating at first.

But after that, everything is cut from the same cloth. It's an escape from the long dark age of every sysadmin forever cooking up whatever whimsy served them at the time, an escape from each service having very different management practices around it.

And there's no other community anywhere like Kubernetes. Unbelievably many very good quality very smart helm charts out there, such as https://github.com/bitnami/charts/tree/main/bitnami just ready to go. Really sweet home-ops setups like https://github.com/onedr0p/home-ops that show that once you have a platform under foot, adding more services is really easy, showing an amazing range of home-ops things you might be interested in.
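
For the curious: k3s installs with its documented one-liner (`curl -sfL https://get.k3s.io | sh -`), and that "first yaml" is usually a small Deployment along these lines; the name and image are invented:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp              # hypothetical service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: example/myapp:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f myapp.yaml`.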

> Last thing I need is Kubernetes at home

Last thing we need is an incredibly shitty attitude. Fuck around and find out is the hacker spirit. It's actually not hard if you try, and actually having a base platform where things follow common patterns and practices, where you can reuse existing skills and services, is kind of great. Everything is amazing, but the snivelling shitty whining without even making the tiniest little case for your unkind low-effort hating will surely continue. Low-signal people will remain low signal; best avoid.

30. opsdisk ◴[] No.43549257[source]
Would love a blog post on how you're using Docker Swarm.
31. kiney ◴[] No.43549367[source]
Bugs in its infancy are what killed Swarm for users.
32. ndsipa_pomu ◴[] No.43549714[source]
> You shouldn't be running multiple instances of Postgresql, or anything for that matter, at home.

It's not uncommon when self-hosting services with Docker. It makes it easier to try out a new stack, and you can mix and match versions of PostgreSQL according to the needs of the software. It's also easier to remove if you decide you don't like the piece of software you're trying out.
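
Concretely, each stack's compose file just pins its own version; one app might ship a fragment like this while another pins `postgres:13`. The names and password are placeholders:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme   # placeholder
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:   # removing the stack's volume removes its database with it
```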

33. ndsipa_pomu ◴[] No.43549757[source]
Yep. I run a small swarm at work and have a 5-node RPi-4 swarm at home. Interested in why you'd run a single-node swarm instead of stand-alone docker.
34. bitsandboots ◴[] No.43550543[source]
I clicked expecting a list of cool things to self host. Instead I got a list of ways I would never want to host. Mankind invented BSD jails so that I do not have to tie myself in a knot of container tooling and abstraction.
replies(1): >>43550654 #
35. Gud ◴[] No.43550654[source]
Indeed. I run a setup like you mentioned, with the various daemons in their own jails. Super simple setup, easy to maintain.

Lord knows why people overcomplicate things with docker/kubernetes/etc.

36. seba_dos1 ◴[] No.43551876{3}[source]
> do I risk a 3rd party back port repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?

In these rare cases I usually just compile a newer deb package myself and let the package manager deal with it as usual. If there are too many dependencies to update or it's unusually complex, then it's container time indeed - but I didn't have to go there on my server so far.

Not diverging from the distro packages lets me not worry about security updates; Debian handles that for me.
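
That rebuild flow, roughly, using Debian's standard tooling and assuming a deb-src entry for unstable in sources.list; "somepkg" is a placeholder:

```shell
apt-get source somepkg/unstable      # fetch the newer source package
sudo apt-get build-dep somepkg       # install its build dependencies
cd somepkg-*/
dpkg-buildpackage -us -uc -b         # unsigned binary-only build
sudo apt install ../somepkg_*.deb    # package manager tracks it as usual
```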

37. shepherdjerred ◴[] No.43552457[source]
I'm very happy with Kubernetes at home. Everything just works at this point, though it did take a fair bit of fiddling at first.

I think it's a great way to learn Kubernetes if you're interested in that.

38. brulard ◴[] No.43556005{3}[source]
Is this a joke? I don't know much about Kubernetes, but I've heard from devops people it's quite helpful for bigger scale infrastructures.
replies(1): >>43558799 #
39. brulard ◴[] No.43556121{3}[source]
I have a very similar setup, with a homelab and a separate cheap VPS. Similarly, I have all the services installed directly on the OS, but I'm starting to run into issues that make me consider Docker. I run nginx with multiple (~10) Node apps managed by PM2. While this works OK-ish, I'm not happy that if one of my apps needs some packages installed (ffmpeg, ImageMagick, etc.), I have to install them for the whole server. Another problem is that I can easily run into compatibility issues if I upgrade Node.js, for example. And if there were a vulnerability in some npm package any of the projects use, the whole server would be compromised. I think Docker can be quite an easy solution to most of these problems.
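
A sketch of what that migration could look like per app, keeping ffmpeg and the Node version inside the image; the app name and files are hypothetical:

```dockerfile
# Hypothetical app: system deps and the Node version live in the
# image, not on the shared host.
FROM node:20-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```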
40. ohgr ◴[] No.43558799{4}[source]
Unfortunately no it's not a joke. It's really fine for big infrastructure companies, think Google etc. But a lot of people will design complicated shit due to architectural ignorance or to make their resume look good and this will result in Kubernetes looking like a good idea to run the resulting large amounts of complicated shit. Then you realise it's complicated and lots of people need to look after it which escalates the problem.

After a few years you work out that holy shit we now have 15 people looking after everything instead of the previous 4 people and pods are getting a few hits an hour. Every HTTP request ends up costing $100 and then you wonder why the fuck your company is totally screwed financially.

But all the people who designed it have left for consultancy jobs with Kubernetes on their resume and now you've got an army of people left to juggle the YAML while the CEO hammers his fist on the table saying CUT COSTS. Well you hired those feckin plums!

etc etc.

Lots of them are on here. People have no idea how to solve problems any more, just create new ones out of the old ones.