> Remove layers, keep things simple.
Given the first line, I'm not sure I'm reading the second line correctly, but I'll assume you're referring to OCI image layers. I feel your pain. Honestly, though, I don't think image layers are such a bad idea. It's just that best practices for those layers are not well defined, and some of the early tooling propagated sub-optimal uses of them.
I'll start with when you might find layers useful. Flatpak's sandboxing engine is bubblewrap (bwrap). It's also a container runtime that uses namespaces, cgroups and seccomp, like OCI runtimes do. The difference is that it has more secure seccomp defaults and it doesn't use layers (though mounts are available). I have a tool that uses bwrap to create isolated build and packaging environments from a single root filesystem image (no layers). There are two annoyances with a flat setup like that:
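To make that concrete, here's a minimal sketch of a bwrap invocation for such an isolated environment (the ./rootfs path is illustrative, not my tool's actual layout):

```sh
# Run a shell inside an isolated environment built from a single
# root filesystem image unpacked at ./rootfs (path is an example).
bwrap \
  --bind ./rootfs / \
  --dev /dev \
  --proc /proc \
  --tmpfs /tmp \
  --unshare-all \
  --die-with-parent \
  /bin/sh
```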
1. If you have separate environments for multiple applications/packages, you may want to share the base OS filesystem among them. Instead, you end up replicating the same files redundantly.
2. If you want to collect the artifacts from each step (source download, extract and build, 'make install', etc.) into a separate directory/archive, you'll find yourself reaching for layers.
I have implemented this, and the solutions look almost identical to what OCI runtimes do with image layers: use either overlayfs or btrfs/zfs subvolume mounts.
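The overlayfs variant looks roughly like this (all paths are made up for illustration):

```sh
# Share one read-only base OS filesystem across environments and
# capture each environment's writes in its own upper dir.
mkdir -p /envs/app1/upper /envs/app1/work /envs/app1/merged
mount -t overlay overlay \
  -o lowerdir=/images/base-os,upperdir=/envs/app1/upper,workdir=/envs/app1/work \
  /envs/app1/merged

# Everything the build writes lands in /envs/app1/upper -- in effect
# a layer -- while /images/base-os stays shared and untouched.
```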
So if that's the case, then what's the problem with layers? Here are a few:
1. Some tools, like the image builders that use a Dockerfile/Containerfile, create a separate layer for every instruction. Some layers are empty (WORKDIR, CMD, etc.), but others contain the results of a single RUN command. This is mostly unnecessary, and the workarounds are inelegant: you need cache mounts to keep temporary artifacts out of the image, and you have to chain shell commands into a single RUN command (using semicolons or &&; see the sketch after this list).
2. You can't manage layers like files. The chain of layers is managed by manifests, and the entire thing needs a protocol, servers and clients to transfer images around. (There are ways to archive them, but it's so hackish.)
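To illustrate the single-RUN workaround (the base image and packages are just examples):

```sh
# Write a Containerfile that collapses install and cleanup into one
# RUN, so the temporary files are never committed into a layer.
cat > Containerfile <<'EOF'
FROM debian:stable-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential && \
    rm -rf /var/lib/apt/lists/*
EOF
```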
So, here are some solutions/mitigations:
1. There are other build tools, like buildah and Packer, that don't create additional layers unless you ask for them. Buildah, a sister project of Podman, is a very interesting tool: it builds images using regular (shell) commands that closely resemble Dockerfile instructions, making it easy to learn. You can thus write a shell script to build an image instead of a Dockerfile (see the sketch below), and it also has some nifty features not found in Dockerfiles.
Newer Dockerfile builders (BuildKit, I think) have options to avoid creating additional layers. Another option is to use dedicated tools that inspect layers and split/merge them on demand.
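Here's a sketch of such a buildah script (the base image, package and image name are illustrative):

```sh
#!/bin/sh
set -e

# Start a working container from a base image.
ctr=$(buildah from docker.io/library/alpine:latest)

# These commands mutate the working container without
# committing a layer for each step.
buildah run "$ctr" -- apk add --no-cache python3
buildah config --entrypoint '["python3"]' "$ctr"

# A single new layer is created only at commit time.
buildah commit "$ctr" my-python-image
buildah rm "$ctr"
```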
2. While a protocol and client/server setup is rather inconvenient for lugging images around, it has made itself useful in other ways. Container registries these days don't host just images; they can host any OCI artifact, and you can pack practically any sort of data into such an artifact. Registries are also used for hosting/transferring a lot of other things, like Helm charts, OPA policies, kubectl plugins, Argo templates, etc.
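For instance, with the oras CLI you can push and pull arbitrary files as OCI artifacts (the registry URL and file name below are placeholders):

```sh
# Push an arbitrary file to a registry as an OCI artifact,
# then fetch it back somewhere else.
oras push registry.example.com/configs/app:v1 ./config.yaml
oras pull registry.example.com/configs/app:v1
```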
> So any alternative tooling that forces Docker to get its act together is welcome
What else do you consider to be bad or sub-optimal design choices in Docker (including those already solved by Podman)?