tomrod:
Most of my containers end up on k8s clusters as pods. What else would one use podman or docker for beyond local dev or maybe running a local containerized service?

jeffhuys:
For a while we used it for scalable preview environments: specify the branch, hit deploy, and have a QA-able environment with a full (anonymized) database ready to go in 15 minutes (the DB was the time bottleneck).

We ditched it for EC2s, which were faster, more reliable, and cheaper, but that's beside the point.

Locally I use OrbStack, by the way; it's much less intrusive than Docker Desktop.

spicyusername:
EC2 and containers are orthogonal technologies, though.

Containers are the packaging format; EC2 is the infrastructure. (Docker, CRI-O, Podman, Kata, etc. are runtimes.)

When deploying on EC2, you still need to deploy your software, and when using containers you still need somewhere to deploy to.
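
For instance, even a plain EC2 instance needs something like this in its user-data before a container can run on it (a rough sketch; the image name and package steps are placeholders):

    #!/bin/bash
    # EC2 is only the infrastructure; a runtime on the instance
    # still has to pull and run the container image.
    set -euo pipefail
    yum install -y docker            # Amazon Linux; apt on Debian-family
    systemctl enable --now docker    # start the container runtime
    docker run -d --restart unless-stopped -p 80:8080 myorg/myapp:latest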

jeffhuys:
True; I conflate the two often. The EC2s run on an AMI, same as production does, which was previously a Docker image.

spicyusername:
Arguably it would still be beneficial to use container images when building your AMIs (vs. installing via apt or copying your binaries), since container images still solve the "How do I get my software to the destination?" and "How do I run my software and give it the parameters it needs?" problems in a universal way.
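
A rough sketch of what that AMI bake step could look like (tool-agnostic; the image name and paths are placeholders):

    #!/bin/bash
    # Runs during the AMI build (e.g. from Packer): install a runtime
    # and pre-pull the image, so the AMI itself stays generic.
    set -euo pipefail
    yum install -y docker
    systemctl enable --now docker
    docker pull myorg/myapp:stable   # bake the image into the AMI
    # At instance boot, user-data (or a systemd unit) then only needs:
    #   docker run -d --env-file /etc/myapp.env -p 80:8080 myorg/myapp:stable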

jeffhuys:
In what way do you mean this? I’ve built two jobs for the preview envs: DeployEnvironment (runs the Terraform stuff that starts the EC2, makes S3 buckets, creates the API gateway, and a lot of other crap) and ProvisionEnvironment (zips the local copy of the branch, rsyncs it to the environment, and some other stuff). I build the .env file in ProvisionEnvironment, which accounts for the parameters. I’d love to get your point of view here!

spicyusername:
Using a container image as your "artifact" is often a good approach to distributing your software.

    zips the local copy of the branch and rsyncs it to the environment, and some other stuff
This would happen in your Dockerfile, and then actually "installing" your application is just docker run (or kubectl apply, etc.), an industry standard requiring no specialized knowledge about your application (since that is abstracted away in your Dockerfile).
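
As a sketch (the base image and build commands are guesses, since I don't know your stack), the Dockerfile replaces the zip-and-rsync step:

    # Dockerfile -- replaces "zip the branch and rsync it over"
    FROM node:20-slim          # placeholder base image
    WORKDIR /app
    COPY . .                   # the branch checkout, instead of rsync
    RUN npm ci --omit=dev      # placeholder dependency install
    # parameters arrive at run time (env vars), not as a baked-in .env
    CMD ["node", "server.js"]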

You're basically splitting the process of building and distributing your application into three steps: write the software, build the image, deploy the image.
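
In command form (registry, image name, and tag are placeholders), that's roughly:

    docker build -t registry.example.com/myapp:branch-foo .  # build the image
    docker push registry.example.com/myapp:branch-foo        # distribute it
    docker run -d registry.example.com/myapp:branch-foo      # deploy it anywhere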

Everyone who uses these tools, which is most people by this point, will understand these steps. Additionally, any framework or cloud provider that speaks container images (ECS, Kubernetes, Docker Desktop, etc.) can manage your deployments for you. And the API of your container image (e.g. the environment variables, entrypoint flags, and mounted volumes it expects) communicates to those deploying your application exactly what they are expected to provide during deployment.
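
That image "API" shows up directly in the run invocation; a hypothetical example:

    # env vars, mounts, ports, and entrypoint flags are the image's contract
    # (all names below are made up)
    docker run -d \
      -e DATABASE_URL="postgres://db.internal/app" \
      -v /srv/uploads:/app/uploads \
      -p 443:8443 \
      myorg/myapp:1.4.2 --log-level=debug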

Without all this, whoever or whatever is deploying your application has to know every little detail, and you're going to spend a lot of time writing custom workflows to hook into every different kind of infrastructure you want to deploy to.