726 points by psviderski | 6 comments

I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.

In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on every Docker-enabled host — Docker's own image storage.

So I built Unregistry [1], which exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.

  docker pussh myapp:latest user@server

Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done.
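
For comparison, here's roughly what that automates if you did it by hand with a stock registry (a sketch only — the image, port, and container name are placeholders, not unregistry's actual internals):

  # start a throwaway registry on the remote host, bound to loopback
  ssh user@server docker run -d --name tmp-registry -p 127.0.0.1:5000:5000 registry:2
  # forward the registry port back to the local machine
  ssh -N -f -L 5000:127.0.0.1:5000 user@server
  # retag and push; only layers the remote side is missing get transferred
  docker tag myapp:latest localhost:5000/myapp:latest
  docker push localhost:5000/myapp:latest
  # tear down the temporary registry
  ssh user@server docker rm -f tmp-registry

The catch with that approach is that a stock registry stores pushed images in its own volume rather than in the Docker daemon's image store, so you'd still have to pull on the remote side — unregistry instead serves the daemon's (containerd) storage directly, so a pushed image is immediately usable there.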

I built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.

Would love to hear your thoughts and use cases!

[1]: https://github.com/psviderski/unregistry

[2]: https://github.com/psviderski/uncloud

revicon:
Is this different from using a remote docker context?

My workflow in my homelab is to create a remote docker context like this...

(from my local development machine)

  docker context create mylinuxserver --docker "host=ssh://revicon@192.168.50.70"

Then I can do...

  docker context use mylinuxserver
  docker compose build
  docker compose up -d

And all the images in my docker-compose.yml file are built, deployed, and running on my remote Linux server.

No fuss, no registry, no extra applications needed.

Way simpler than using Docker Swarm, Kubernetes, or whatever. Maybe I'm missing something that @psviderski is doing that I don't get with my method.

1. matt_kantor:
Assuming I understand your workflow correctly, one difference is that unregistry works with already-built images: they aren't built on the remote host, just pushed there. This means you can be confident that the image on your server is exactly the one you tested locally, and pushing will typically also be much faster (assuming well-structured Dockerfiles with small layers, etc.).
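
Concretely, the flow looks something like this (image name, tag, and test command are hypothetical):

  # build and test locally
  docker build -t myapp:1.2.3 .
  docker run --rm myapp:1.2.3 ./run-tests
  # push the exact image you just tested — same layers, same digest
  docker pussh myapp:1.2.3 user@server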
2. pbh101:
This is probably an anti-feature in most contexts.
3. akovaski:
The ability to push a verified artifact is an anti-feature in most contexts? How so?
4. pbh101:
It is fine if you are just working by yourself on non-prod things and you’re happy with that.

But if you are working with others on things that matter, you'll find you want your images to be published from a central, documented location, where it's recorded which tests they passed, which version of the CI pipeline and environment built them, and which revision they were built from. The image will be tagged with this information, and you and your coworkers will know exactly where to look for it when needed.
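
For instance, a CI job might bake that provenance into the image and its tag (a sketch; the registry host and CI variables are placeholders, and the `org.opencontainers.*` label keys follow the OCI image-spec annotation conventions):

  # record revision and source in standard OCI labels, tag by commit
  docker build \
    --label org.opencontainers.image.revision="$GIT_COMMIT" \
    --label org.opencontainers.image.source="https://github.com/example/myapp" \
    --label ci.pipeline.id="$CI_PIPELINE_ID" \
    -t registry.example.com/myapp:"$GIT_COMMIT" .
  docker push registry.example.com/myapp:"$GIT_COMMIT"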

This is incompatible with pushing an image from your local dev environment.

5. matt_kantor:
With that sort of setup you'd run `docker pussh` from your build server, not your local machine (really, though, you'd probably want a non-ephemeral registry, so you wouldn't use unregistry at all).

Other than "it's convenient and my use case is low-stakes enough for me to not care", I can't think of any reason why one would want to build images on their production servers.

6. pbh101:
Agreed.