We ditched it for EC2s which were faster and more reliable while being cheaper, but that's beside the point.
Locally I use OrbStack by the way, much less intrusive than Docker Desktop.
Containers are the packaging format, EC2 is the infrastructure. (Docker, CRI-O, Podman, Kata, etc. are the runtimes.)
When deploying on EC2, you still need to deploy your software, and when using containers you still need somewhere to deploy to.
zips the local copy of the branch and rsyncs it to the environment, and some other stuff
This would happen in your Dockerfile, and then the process of actually "installing" your application is just `docker run` (or `kubectl apply`, etc.), which is an industry standard requiring no specialized knowledge about your application, since that is abstracted away in your Dockerfile. You're basically splitting the process of building and distributing your application into three steps: write the software, build the image, deploy the image.
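As a sketch of what "that happens in your Dockerfile" means, here's a minimal Dockerfile for a hypothetical Node.js app (the base image, port, and entrypoint are placeholders, not anything from the thread):

```dockerfile
# Hypothetical app; swap in whatever your stack actually needs.
FROM node:20-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source.
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

After that, the build/deploy split is just `docker build -t myapp:1.0 .` followed by `docker run myapp:1.0` on whatever box you're deploying to.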
Everyone who uses these tools, which is most people by this point, will understand these steps. Additionally, anything that speaks container images (ECS, Kubernetes, Docker Desktop, etc.) can manage your deployments for you. And the API of your container image (the environment variables, entrypoint flags, and mounted volumes it expects) communicates to whoever is deploying your application exactly what you expect them to provide at deployment time.
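That "API of your container image" idea shows up concretely in the run command. Everything below (image name, env var, paths, flag) is made up for illustration, but it's the shape of the contract: the image documents what it expects, and the deployer supplies it:

```shell
# -e: environment variable the image documents that it reads
# -v: host directory mounted where the image expects persistent data
# -p: host port mapped to the port the image exposes
# trailing args: flags passed to the image's entrypoint
docker run \
  -e DATABASE_URL="postgres://db:5432/app" \
  -v /srv/app-data:/data \
  -p 8080:8080 \
  example/myapp:1.2.3 --log-level=info
```

The deployer never needs to know how the app is installed or started internally; they only need to satisfy this surface.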
Without all this, whoever or whatever is deploying your application has to know every little detail and you're going to spend a lot of time writing custom workflows to hook into every different kind of infrastructure you want to deploy to.