I’ve been running varnish/tinykvm under podman by passing /dev/kvm into the container and adding myself to the kvm group. https://github.com/lrowe/deno_varnish?tab=readme-ov-file#run...
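For reference, the minimal invocation looks roughly like this (the image name is a placeholder, not from the linked repo):

```shell
# Expose the host's KVM device to the container.
# Requires the invoking user to have access to /dev/kvm
# (e.g. membership in the kvm group); image name is illustrative.
podman run --rm -it \
  --device /dev/kvm \
  myregistry/varnish-tinykvm:latest
```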
Maybe you would be better off with something like krun, which is built to run OCI containers in a full Linux KVM guest. https://josecastillolema.github.io/podman-wasm-libkrun/
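If I remember correctly, once the libkrun-enabled runtime is installed, switching to it is just a flag (the runtime name/path may differ by distro):

```shell
# Run an ordinary OCI image inside a lightweight KVM microVM via libkrun.
# 'krun' here is the crun build with libkrun support; adjust the name
# or pass a full path if your distro installs it elsewhere.
podman run --rm -it --runtime krun fedora echo "hello from a microVM"
```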
Tip: not entirely what you asked, but related: what about using more caching in your CI/CD pipeline? Customers see incredible time savings when using Varnish in that context (mostly with Enterprise w/MSE4, as you will need a massive cache, but it can be useful even with Varnish Cache, depending on your pipeline and workflow). If you are interested, read more here: https://www.varnish-software.com/solutions/data-ai-accelerat...
No worries, I know how it is! :)
> But good to see you got your answer in the end.
Well, almost. Even outside the above use case I'd still be interested in the capabilities TinyKVM needs and its overall security model & properties! There are far too many GitHub projects out there these days that claim to do sandboxing, and for an outsider it's very difficult to compare them security-wise.
> what about using more caching in your CI/CD pipeline?
The caching itself is not the issue. We already heavily cache image layers when building container images. The issue (one of them) is that on our platform AppArmor prevents containers from mounting anything, including overlayfs file systems. The latter, however, are needed for Docker/Podman to do proper image layering. The only non-mount alternative I'm aware of, Kaniko, avoids overlayfs but at the cost of severe I/O and performance impact. AFAIU this is because it manually detects changes in a given image layer by walking the directory tree. See also https://github.com/GoogleContainerTools/kaniko/issues/875
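To make that concrete, the kind of mount Docker/Podman rely on for layering looks like this, and it's exactly the operation a deny-mount AppArmor profile blocks (paths are illustrative):

```shell
# Overlayfs mount as used for image layering: read-only lower layers,
# a writable upper layer, and a scratch workdir. Under an AppArmor
# profile that denies mount, this fails with a permission error.
# All paths are placeholders.
mount -t overlay overlay \
  -o lowerdir=/layers/base:/layers/deps,upperdir=/layers/app,workdir=/layers/work \
  /merged
```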