ttul:
Back in 2001/2002, I was charged with building a WiFi hotspot box. I was a fan of OpenBSD and wanted to slim down our deployment, which was running on Python, to avoid having to copy a ton of unnecessary files to the destination systems. I also wanted to avoid dependency-hell. Naturally, I turned to `chroot` and the jails concept.

My deployment code worked by running the software outside of the jail environment and monitoring the running processes using `ptrace` to see what files it was trying to open. The `ptrace` output generated a list of dependencies, which could then be copied to create a deployment package.
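
With today's tooling the same trick is a few lines of shell. A rough sketch, using strace (a frontend for ptrace) rather than raw ptrace, with made-up paths:

    # Trace every file the program opens (strace drives ptrace under
    # the hood), writing one log per process.
    strace -ff -e trace=open,openat,execve -o /tmp/app.trace ./app
    # Pull the opened paths out of the logs and de-duplicate them.
    grep -hoE '"[^"]+"' /tmp/app.trace* | tr -d '"' | sort -u > /tmp/deps
    # Copy each real file into the chroot, preserving the directory
    # layout (cp --parents is GNU coreutils).
    while read -r f; do
        [ -f "$f" ] && cp --parents "$f" /srv/jail/
    done < /tmp/deps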

This worked brilliantly and kept our deployments small, immutable, and somewhat immune to attack -- not that attacks were the concern in 2001 that they are today. When Docker came along, I couldn't help but recall that early work and wonder whether anyone has done a similar thing: monitor file usage within Docker containers and trim them down to size after observing actual use.
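
(Someone has: docker-slim, these days just "slim", minifies images by watching what a running container actually touches and rebuilding a minimal image from that. The observation half of the idea can be sketched from the host with strace; the image name here is illustrative:)

    # Start the container, find its PID on the host, and attach strace
    # to log which files the workload actually opens.
    cid=$(docker run -d myimage)
    pid=$(docker inspect -f '{{.State.Pid}}' "$cid")
    sudo strace -f -p "$pid" -e trace=openat -o /tmp/container.trace
    # The logged paths (relative to the container's root filesystem)
    # become the copy list for a FROM scratch rebuild.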

sroerick:
The best CI/CD pipeline I ever used was my first freelance deployment using Django. I didn't have a clue what I was doing and had to phone a friend.

We set up a git post-receive hook that built the static files and restarted httpd on every push. Deployment was just 'git push live master'.
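
A minimal sketch of that kind of hook, assuming a Django project with paths made up for illustration (and modern systemctl standing in for whatever the init system was back then):

    #!/bin/sh
    # hooks/post-receive in the bare repo on the server: check out the
    # pushed tree, rebuild static files, and bounce the web server.
    GIT_WORK_TREE=/var/www/site git checkout -f master
    cd /var/www/site || exit 1
    python manage.py collectstatic --noinput
    sudo systemctl restart httpd

With a remote added as 'live' (git remote add live user@server:/srv/site.git), deployment is exactly that one push command.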

While I've used Docker a lot since then, that remains the single easiest deployment I've ever had.

I genuinely don't understand what Docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up an HTTP server on vanilla Ubuntu (or, God forbid, OpenBSD) and not really have issues.

Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?

antihero:
> Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?

Yes. Everything on my box is ephemeral and can be deleted and recreated or put on another box with little-to-no thought. Infrastructure-as-code means my setup is immutable and self-documented.

It takes a little more time to set up initially, but now I know exactly what's running.

I don't really understand the 24/7 comment; now that it's set up, there's very little maintenance. Sometimes an upgrade goes askew, but that's rare.

Any change to it is recorded as a git commit, so I never have to worry about logging what I've done; it's done for me.

Changes are handled by a GitHub Action: all I have to do to change what's running is commit a file, and the infra updates itself.

I don't use docker-compose; I use a low-overhead, single-node microk8s cluster that I hardly think about at all. I just have changes pushed to it directly with Pulumi (in a production environment I'd use something like ArgoCD), and everything works nicely. Ingress to services goes through Cloudflare Tunnels, so I don't have to port-forward or think about NAT at all.
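
The bones of that setup are only a few commands; a sketch, with the tunnel and hostname names being illustrative:

    # Single-node Kubernetes with the ingress addon.
    sudo snap install microk8s --classic
    microk8s enable ingress
    # Cloudflare Tunnel: the connection is outbound-only, so no port
    # forwarding or NAT configuration on the home network.
    cloudflared tunnel create homelab
    cloudflared tunnel route dns homelab www.example.com
    cloudflared tunnel run homelab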

To update my personal site, I just commit and push; its CI/CD builds a container, then updates the Pulumi config in the other repo to point at the latest hash, which kicks off an action in my infra repo to do a Pulumi apply.
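
Spelled out as the shell steps those actions roughly perform (registry path, repo layout, and config key are all illustrative):

    # CI in the site repo: build and push an image tagged by commit.
    docker build -t ghcr.io/example/site:"$GITHUB_SHA" .
    docker push ghcr.io/example/site:"$GITHUB_SHA"
    # Point the infra repo's Pulumi stack at the new tag and commit;
    # that commit triggers the infra action.
    cd ../infra
    pulumi config set site:imageTag "$GITHUB_SHA"
    git commit -am "deploy site @ $GITHUB_SHA" && git push
    # The infra action then applies the change:
    pulumi up --yes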

Currently it runs on Ubuntu, but I'm thinking of switching to Talos (though it's still nice to be able to just SSH to the box and mess around with files).

I'm not sure why people struggle so much with this, or with seeing the benefits of this approach. It seems like a lot of complexity if you're inexperienced, but if you've been working with computers for a long time, it isn't particularly difficult; computers do far more complicated things.

I could throw the box (an old MacBook) in a lake and be up and running with every service on a new box in an hour or so. Or I could run it in the cloud, or on a VPS, or on metal, or whatever really; it's a completely portable setup.