
726 points by psviderski | 10 comments

I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.

In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on every Docker-enabled host: Docker's own image storage.

So I built Unregistry [1], which exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.

  docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done.
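For the curious, the flow described above can be sketched as a manual shell sequence. The container/image names and port 5000 below are assumptions for illustration, not the tool's actual defaults, and each step is echoed as a dry run rather than executed:

```shell
# Rough manual equivalent of `docker pussh myapp:latest user@server`.
# Dry run: each step is printed; swap echo for execution to run it for real.
run() { echo "+ $*"; }

run ssh user@server "docker run -d --name unregistry -p 5000:5000 unregistry"  # temp registry backed by the remote daemon's image store
run ssh -f -N -L 5000:localhost:5000 user@server                               # tunnel the registry port over SSH
run docker tag myapp:latest localhost:5000/myapp:latest
run docker push localhost:5000/myapp:latest                                    # only layers the remote is missing get sent
run ssh user@server "docker rm -f unregistry"                                  # clean up
```

`docker pussh` wraps all of this in a single command and tears the tunnel and container down when the push finishes.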

I built it as a byproduct of working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.

Would love to hear your thoughts and use cases!

[1]: https://github.com/psviderski/unregistry

[2]: https://github.com/psviderski/uncloud

1. jlhawn No.44314256
A quick and dirty version:

    docker -H host1 image save IMAGE | docker -H host2 image load
note: this isn't efficient at all (no compression or layer caching)!
2. rgrau No.44314454
I use a variant with ssh and some compression:

    docker save "$image" | bzip2 | ssh "$host" 'bunzip2 | docker load'
3. selcuka No.44314605
If you are happy with bzip2-level compression, you could also use `ssh -C` to enable automatic gzip compression.
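That variant would look like the following sketch, with placeholder values and the command echoed rather than executed (it needs a real host). `-C` enables zlib compression on the whole SSH stream, so no explicit (de)compression stages are needed in the pipe:

```shell
image=myapp:latest   # placeholder image
host=user@server     # placeholder host

echo "docker save $image | ssh -C $host 'docker load'"
```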
4. selcuka No.44314642
That method is actually mentioned in their README:

> Save/Load - `docker save | ssh | docker load` transfers the entire image, even if 90% already exists on the server

6. alisonatwork No.44314973
In Podman this is built in as the native command podman-image-scp [0], which could perhaps be made more efficient with SSH compression.

[0] https://docs.podman.io/en/stable/markdown/podman-image-scp.1...
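Usage per the linked man page looks like the following ('user@server' is a placeholder; echoed as a dry run since it needs podman and SSH access). The trailing `::` addresses the remote host's image storage rather than a file path:

```shell
# Dry run: print the commands instead of executing them.
run() { echo "+ $*"; }

run podman image scp myapp:latest user@server::   # copy a local image into remote storage
run podman image scp user@server::myapp:latest    # copy a remote image back locally
```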

7. travisgriggs No.44315491
So with Podman this exists already, but for Docker it has to be created by the community.

I am a bystander to these technologies. I've built and debugged the rare image, and I use Docker Desktop on my Mac to isolate DB images.

When I see things like these, I'm always curious why docker, which seems so much more bureaucratic/convoluted, prevails over podman. I totally admit this is a naive impression.

8. psviderski No.44315631
Ah neat, I didn't know that podman has 'image scp'. Thank you for sharing. Do you think it was more straightforward to implement in podman because you can easily access its images and metadata as files on the filesystem, without having to coordinate with any daemon?

Docker and containerd also store their images using a specific filesystem layout plus a boltdb for metadata, but I was wary of accessing those directly. Docker/containerd remain the owners and coordinators of that state, so proper locking should go through them. As a result, we're limited to the API the docker/containerd daemons provide.

For example, the Docker daemon API doesn't provide a way to fetch or upload an individual image layer. That's why unregistry uses the containerd image store rather than the classic Docker image store.
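(Side note for anyone trying this: the containerd image store is opt-in on current Docker Engine releases. Per Docker's documentation, it's enabled with a feature flag in /etc/docker/daemon.json, followed by a daemon restart:

```json
{
  "features": {
    "containerd-snapshotter": true
  }
}
```

Existing images from the classic store won't be visible after switching until they're re-pulled or re-built.)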

9. password4321 No.44315648
> why docker, which seems so much more bureaucratic/convoluted, prevails over podman

First-mover advantage and ongoing VC-funded marketing/DevRel.

10. djfivyvusn No.44316828
Something that took me 20 years to learn: never underestimate the value of a slick GUI.