1101 points by codesmash | 1 comment
ttul No.45142410
Back in 2001/2002, I was charged with building a WiFi hotspot box. I was a fan of OpenBSD and wanted to slim down our deployment, which ran on Python, to avoid copying a ton of unnecessary files to the destination systems. I also wanted to avoid dependency hell. Naturally, I turned to `chroot` and the jails concept.

My deployment code worked by running the software outside of the jail environment and monitoring the running processes with `ptrace` to see which files they tried to open. The `ptrace` output yielded a list of dependencies, which could then be copied over to create a deployment package.
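(On a modern Linux box you can sketch the same trick with `strace`, which is a frontend to `ptrace`; OpenBSD itself would use its native tracing tools instead. The program name, its flag, and the jail path below are placeholders:)

    # trace the app and log every file it opens (strace drives ptrace)
    strace -f -e trace=open,openat -o /tmp/opens.log ./myapp --smoke-test

    # keep the paths that actually exist and copy each into the jail,
    # preserving directory structure (GNU cp)
    grep -o '"/[^"]*"' /tmp/opens.log | tr -d '"' | sort -u |
      while read -r p; do [ -f "$p" ] && cp --parents "$p" /srv/jail/; done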

This worked brilliantly and kept our deployments small, immutable, and somewhat immune to attack -- not that being attacked was as big a concern in 2001 as it is today. When Docker came along, I couldn't help but recall that early work and wonder whether anyone has done something similar: monitoring file usage inside Docker containers and trimming the images down to size after observing actual use.

sroerick No.45142674
The best CI/CD pipeline I ever used was my first freelance deployment using Django. I didn't have a clue what I was doing and had to phone a friend.

We set up a git post-receive hook that built static files and restarted httpd on every push. Deployment was just `git push live master`.
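For anyone who hasn't seen the pattern: the entire pipeline is a short shell script on the server. A sketch, with illustrative paths and a Django/httpd setup like the one described here:

    #!/bin/sh
    # hooks/post-receive in a bare repo at /srv/app.git on the server
    git --work-tree=/srv/app --git-dir=/srv/app.git checkout -f master
    cd /srv/app
    python manage.py collectstatic --noinput   # build static files
    sudo apachectl restart                     # restart httpd

On the dev machine you add the remote once (`git remote add live ssh://server/srv/app.git`), and from then on every `git push live master` deploys.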

While I've used Docker a lot since then, that remains the single easiest deployment I've ever had.

I genuinely don't understand what Docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up httpd on vanilla Ubuntu (or, God forbid, OpenBSD) and not really have issues.

Is the reproducibility of Docker really worth the added overhead of managing containers and docker-compose files, and of running daemons on your devbox 24/7?

rcv No.45143733
> I genuinely don't understand what Docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up httpd on vanilla Ubuntu (or, God forbid, OpenBSD) and not really have issues.

Sounds great if you're only running a single web server or whatever. My team builds a fairly complex system comprising ~45 distinct services. Those services are managed by different teams with slightly different language/library/tooling needs and preferences. Before we containerized everything, it was a nightmare keeping everything in sync and making sure different teams didn't step on each other's dependencies. Some languages have good tooling to help here (e.g. Python virtual environments), but it's not so great when two services require different versions of Boost.

With Docker, each team is just responsible for making sure its own containers build and run; use whatever you need to get the job done. Our containers are built in CI, so there is basically zero chance I'll come in one morning and be unable to run the latest head of develop because someone else's dev machine is slightly different from mine. And if it runs on my machine, I have high confidence it will run in production.
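The Boost case is exactly where the isolation pays off: every image pins its own system libraries, so two services can link against different Boost versions without any coordination. A minimal sketch of one team's Dockerfile (base image, package, and binary names here are placeholders):

    # service-a pins Boost 1.74 by building on Debian bullseye
    FROM debian:bullseye-slim
    RUN apt-get update && \
        apt-get install -y --no-install-recommends libboost-program-options1.74.0 && \
        rm -rf /var/lib/apt/lists/*
    COPY build/service-a /usr/local/bin/service-a
    CMD ["service-a"]

A second service can base itself on a different distro release with a newer Boost, and the two images never interact.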

sroerick No.45145846
OK, this seems like an absolutely valid use case: big enterprise microservice architecture, I get it. If you have islands of dev teams and a dedicated CI/CD DevOps team, this makes more sense.

But this puts you in a league with some pretty advanced deployment tooling, like high-level Kubernetes work, Ansible, and cloud orchestration, and nobody thinks those tools are really appropriate for the majority of dev teams.

People are out here using Docker for, like... `make install`.

rglullis No.45148187
Imagine you have a team of devs, some using macOS, some Debian, some NixOS, some Windows + WSL. Go ahead and try to make sure everyone's development environment stays consistent by simply running `git pull` and `make dev`.
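(The usual answer, as the reply below describes, is to have `make dev` delegate everything platform-specific to Docker. A bare-bones sketch, assuming a docker-compose.yml sits next to the Makefile; recipe lines are tab-indented:)

    # Makefile at the repo root; the same entry point on macOS, Linux, and WSL
    .PHONY: dev stop
    dev:
    	docker compose up --build -d
    stop:
    	docker compose down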
Fanmade No.45148272
Ha, I've written a lot of these Makefiles, and the `make dev` command even became a personal standard that I added to each project. I don't know if I read about it somewhere or if it just evolved that way because it makes sense. In the last few years, these commands have very often started a Docker container, though. I tend to work on Windows with WSL, and most of my colleagues use macOS or Linux, so that's definitely one of the reasons why Docker is just easier there.
Ha, I've written a lot of these Makefiles and the "make dev" command even became a personal standard that I added to each project. I don't know if I read about that, or if it just developed into that because it just makes sense. In the last few years, these commands very often started a docker container, though. I do tend to work on Windows with WSL and I most of my colleagues use macOS or Linux, so that's definitely one of the reasons why docker is just easier there.