726 points psviderski | 3 comments

I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.

In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on every Docker-enabled host: Docker's own image storage.

So I built Unregistry [1], which exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH, transferring only the missing layers, which makes it fast and efficient.

  docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done.
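
Roughly, the manual equivalent looks like this. This is a hedged sketch, not the exact commands the tool runs: the container image name, the port, and how the registry reaches the host's image store are all illustrative.

  # start a throwaway registry container on the remote host
  # (image name is illustrative; the real container also needs access
  # to the host's containerd image store)
  ssh user@server 'docker run -d --name unregistry -p 127.0.0.1:5000:5000 unregistry'

  # tunnel the registry port home, then push only the missing layers
  ssh -fNL 5000:localhost:5000 user@server
  docker tag myapp:latest localhost:5000/myapp:latest
  docker push localhost:5000/myapp:latest

  # tear it down
  ssh user@server 'docker rm -f unregistry'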

I built it as a byproduct of working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.

Would love to hear your thoughts and use cases!

[1]: https://github.com/psviderski/unregistry

[2]: https://github.com/psviderski/uncloud

jdsleppy No.44317574
I've been very happy doing this:

  DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

It works with plain docker, too. Another user is getting at the same idea when they mention docker contexts, which is just a different way to set the variable.
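
For reference, the contexts version of the same thing (context name is arbitrary):

  docker context create remote --docker "host=ssh://user@remotehost"
  docker --context remote compose up -d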

Did you know about this approach? In the snippet above, the image is built on the remote machine and then run there. The build context (your files) is sent over the wire as needed, and subsequent runs reuse the remote machine's Docker cache. It's slightly different from your approach of building locally, but much simpler.

replies(2): >>44317858 #>>44318663 #
kosolam No.44317858
This approach is akin to the prod server pulling an image from a registry. The OP's method is push-based.
replies(1): >>44320179 #
jdsleppy No.44320179
No, in my example the docker-compose.yml lives alongside your application's source code, and you can use the `build` directive (https://docs.docker.com/reference/compose-file/services/#bui...) to instruct the remote host (a Hetzner VPS, or whatever else) to build the image; see the sketch below. That image never goes to an external registry; it is used internally on that remote host.

For third-party images like `postgres`, yes, it will pull those from Docker Hub or whatever registry you configure.

But in this method you push the source code, not a finished docker image, to the server.
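
A minimal sketch of that layout, assuming a service named `myapp` (names are illustrative):

  # docker-compose.yml, next to your source code
  services:
    myapp:
      build: .            # build context is streamed to the remote daemon
      image: myapp:latest

  # then, from the project directory:
  DOCKER_HOST="ssh://user@remotehost" docker compose up -d --build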

replies(1): >>44320457 #
1. quantadev No.44320457
Seems like it makes more sense to build on the build machine, and then just copy images out to PROD servers. Having source code on PROD servers is generally considered bad practice.
replies(1): >>44322525 #
2. jdsleppy No.44322525
The source code never lands on the prod server's filesystem. It is streamed to the Docker daemon while it builds the image, and after the build finishes, only the image remains on the prod server.
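
You can see the same behavior with plain docker: the build context streams over SSH to the remote daemon, and only the resulting image stays on the server.

  DOCKER_HOST="ssh://user@remotehost" docker build -t myapp:latest .
  ssh user@remotehost docker image ls myapp   # the image is there; the source tree is not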

I am now convinced this is a hidden Docker feature that too few people are aware of or understand.

replies(1): >>44323576 #
3. quantadev No.44323576
Yeah, I definitely didn't understand that! Thanks for explaining. I've bookmarked this thread, because there are several commands here that look more powerful and clean than what I'm currently doing, which is to `docker save` to a tar file, copy the tar up to prod, and then `docker load`.
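
For what it's worth, that save/copy/load dance also collapses into a single pipe, though unlike unregistry it re-sends every layer every time:

  docker save myapp:latest | gzip | ssh user@prod 'gunzip | docker load'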