We ditched it for EC2s which were faster and more reliable while being cheaper, but that's beside the point.
Locally I use OrbStack by the way, much less intrusive than Docker Desktop.
Containers are the packaging format, EC2 is the infrastructure. (docker, crio, podman, kata, etc are the runtime)
When deploying on EC2, you still need to deploy your software, and when using containers you still need somewhere to deploy to.
Remove layers, keep things simple.
That being said, it is here to stay. So any alternative tooling that forces Docker to get its act together is welcome.
Firstly, podman had much worse performance than docker on my small cloud VPS. Can't really go into details though.
Secondly, the development ecosystem isn't really fully there yet. Many tools that use Docker via its socket fail to work reliably with podman, either because the API differs or because of permission limitations. Sure, the tools could probably work around those limitations, but they haven't, and podman isn't a direct 1:1 drop-in replacement.
I often try to run something using podman, then find strange errors, then switch back to docker. Typically this is with some large container, like gitlab, which probably relies on the entirety of the history of docker and its quirks. When I build something myself, most of the time I can get it working under podman.
This situation where any random container does not work has forced me to spin up a VM under incus and run certain troublesome containers inside that. This isn't optimal, but keeps my sanity. I know incus now permits running docker containers and I wonder if you can swap in podman as a replacement. If I could run both at the same time, that would be magical and solve a lot of problems.
There definitely is no consistency regarding GPU access in the podman and docker commands and that is frustrating.
But, all in all, I would say I do prefer podman over docker and this article is worth reading. Rootless is a big deal.
Podman rocks for me!
I find docker hard to use and full of pitfalls and podman isn't any worse. On the plus side, any company I work for doesn't have to worry about licences. Win win!
I write programs that run on the target OS again. It's much easier, turnaround time is much quicker, it's faster. Even battery lasts longer on my laptop. What the hell have we done to ourselves with these numerous layers of abstraction?!?
Are you using rootless podman? Then network redirection is done using user-mode networking, which has two backends: slirp4netns, which is very slow, and pasta, which is the newer and much better one.
Docker is always set up from the privileged daemon; if you're running podman from the root user there should be no difference.
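For reference, switching the rootless backend is a one-line config change; a minimal sketch, assuming a Podman recent enough to ship pasta/passt:

```
# ~/.config/containers/containers.conf (or /etc/containers/containers.conf)
[network]
default_rootless_network_cmd = "pasta"
```

You can also force it per container with `--network=pasta` to compare against slirp4netns before changing the default.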
```
> kubectl port-forward svc/argocd-server -n argocd 8080:443
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
Handling connection for 8080
Handling connection for 8080
E0815 09:12:51.276801 27142 portforward.go:413] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod 87b32b48e6c729565b35ea0cefe9e25d8f0211cbefc0b63579e87a759d14c375, uid : failed to execute portforward in network namespace "/var/run/netns/cni-719d3bfa-0220-e841-bd35-fe159b48f11c": failed to connect to localhost:8080 inside namespace "87b32b48e6c729565b35ea0cefe9e25d8f0211cbefc0b63579e87a759d14c375", IPv4: dial tcp4 127.0.0.1:8080: connect: connection refused IPv6 dial tcp6 [::1]:8080: connect: connection refused
error: lost connection to pod
```
People had other issues also. It looks nice and I would love to use it, but it just currently isn't mature/stable enough.
But Docker Engine, the core component which works on Linux, Mac and Windows through WSL2, that is completely and 1000% free to use.
Recently I did a GitLab Runner migration for a company and switched to rootless docker. Works perfectly; the devs didn't even notice that all their runs now use rootless docker and buildkit for builds. All thanks to rootlesskit. No podman problems, more secure, and no workflow change needed.
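For anyone curious, the per-host setup is small; a rough sketch, assuming the upstream docker-ce packages are installed (paths/UIDs are illustrative):

```
# run once as the unprivileged runner user
dockerd-rootless-setuptool.sh install
systemctl --user enable --now docker
sudo loginctl enable-linger "$USER"            # keep the user daemon alive without a login session
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker info --format '{{.SecurityOptions}}'    # should list rootless
```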
Plus, I don’t see the point in babysitting a separate copy of a user space if systemd has `DynamicUser`.
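i.e. instead of babysitting a container userspace, something like this (a minimal sketch; the binary name and paths are placeholders):

```
[Unit]
Description=myapp without a container userspace

[Service]
DynamicUser=yes          # transient UID/GID allocated by systemd at start
StateDirectory=myapp     # writable /var/lib/myapp owned by that transient user
ProtectSystem=strict
NoNewPrivileges=yes
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
```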
I find that kubernetes yaml are a lot more complex than docker compose. And while I do, no, not everybody uses kubernetes.
Was this a deal breaker for any company?
I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.
If you have a dev team of 10 people and are extremely profitable to where you need licenses you'd end up paying $9 a year per developer for the license. So $90 / year for everyone, but if you have US developers your all-in payroll is probably going to be over $200,000 per developer or roughly $2 million dollars. In that context $90 is practically nothing. A single lunch for the dev team could cost almost double that.
To me that is a bargain, you're getting an officially supported tool that "just works" on all operating systems.
Which is probably one of the motivations for the blog post. Compatibility will only be there once a large enough share of users use podman that it becomes something that is checked before publish.
You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.
Too many problems with things that worked out of the box with docker.
I don't have time to waste on troubleshooting yet another issue that can be solved by simply using the thing that just works.
rootless is not an argument for me, since the hosts are dedicated docker hosts anyway.
>This section describes how to install Docker Engine on Linux, also known as Docker CE. Docker Engine is also available for Windows, macOS, and Linux, through Docker Desktop.
https://docs.docker.com/engine/install/
I'm not an expert but everything I read online says that Docker runs on Linux so with Mac you need a virtual environment like Docker Desktop, Colima, or Podman to run it.
Podman pods have been super useful, and the nature of my workload is such that we just run a full pod on every host, so it's actually removed the need for an orchestrator like Kubernetes. I manage everything via Ansible and it has been great.
Podman with pods is a better experience than docker-compose. It's easy to interactively create a pod and add containers to it. The containers' ports will behave as if they were on the same machine. Then `podman generate kube` and you have a yaml file that you can run with `podman kube play`.
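A small sketch of that loop (image names are just examples):

```
podman pod create --name app -p 8080:80
podman run -d --pod app --name db -e POSTGRES_PASSWORD=example docker.io/library/postgres:16
podman run -d --pod app --name web docker.io/library/nginx:alpine
podman generate kube app > app.yaml   # export the pod as Kubernetes YAML
podman kube play app.yaml             # recreate it from the YAML elsewhere
```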
Rootless networking is very slow unless you install `passt`. With Debian, you probably should install every optional package that podman recommends.
The documentation is lacking. Officially, it's mostly man pages, with a few blog posts announcing features, though the posts are often out of date.
Podman with its docker socket is often compatible with Docker. Even docker-compose can (usually) work with podman. I've had a few failures, though.
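The socket setup that has mostly worked for me (hedged; the socket path is the rootless default):

```
systemctl --user enable --now podman.socket
export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock
docker compose up -d     # docker CLI / compose talking to podman through the socket
```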
Gitlab-runner can use podman instead of docker, but in this case there are no network aliases, so it's useless if the runner needs to orchestrate several images (e.g. code and db).
By default, root in the container maps to the user running the podman container on the host. Over the years, applications have adopted patterns where containers run as non-root users, for example www-data aka UID 33 (Debian) or just 1000. Those no longer map to your own user on the host, but subordinate IDs. I wish there was an easy way to just say "ALL container UIDs map to single host user". The uidmap and userns options did not work for me (crun has failed executing those containers).
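For readers who haven't seen the options I mean, the attempted syntax looks roughly like this (hedged, and it's exactly the kind of invocation crun choked on for me; image and paths are placeholders):

```
# map your own host user onto UID/GID 33 (www-data) inside the container
podman run --userns=keep-id:uid=33,gid=33 \
  -v "$PWD/site:/var/www/html" \
  registry.example.com/www-image:latest
```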
I don’t see the use case for mapping to subordinate IDs. It means those files are orphaned on the host and do not belong to anyone, when used via volume mapping?
An IT department for a company of that size should have ironed out workflows and automated ways to keep tabs on who has what and who needs what. They may also be under various compliance requirements that expect due diligence to happen every quarter to make sure everything is legit from a licensing perspective.
Even if it's not automated, it's normal for a team to email IT / HR with new hire requirements. Having a list of tools that need licenses in that email is something I've seen at plenty of places.
I would say there's lots of other tools where onboarding is more complicated from a license perspective because it might depend on if a developer wants to use that tool and then keeping tabs on if they are still using it. At least with Docker Desktop it's safe to say if you're on macOS you're using it.
I guess I'm not on board with this being a major conflict point.
Of course if you have really large / complex compose files or just don't feel like learning something else / aren't using k8s, stick with docker.
https://github.com/containers/buildah/issues/4325#issuecomme...
/s
You might expect that setting User=foo via systemd would enable seamless rootless containers, but it turns out to be a hard problem without a seamless solution.
Instead, there's this discussion thread with 86 comments and counting to wade through to find some solutions that have worked for some people in some cases.
https://github.com/containers/podman/discussions/20573#discu...
If you use k8s for anything, podman might help you avoid remembering yet another iac format.
- https://www.redhat.com/en/blog/user-namespaces-selinux-rootl...
- https://www.redhat.com/en/blog/sudo-rootless-podman
I'd summarize these posts as "very carefully explaining how to solve insane problems."
[1]: https://github.com/microsoft/winget-pkgs/tree/master/manifes...
But Docker is simply a non-starter. It's based on a highly privileged daemon with an enormous, hyper-complicated attack surface. It's a fundamentally bad architecture, and as far as I've been able to tell, it also comes from a project that's always shown an "Aw, shucks" attitude toward security. Nobody should be installing that anywhere, not even if there weren't an alternative.
I don't quite get this argument. How is that different from any piece of software that an employee will want in any sort of enterprise setting? From an IT operations perspective it is true that Docker Desktop on Windows is a little more annoying than something like an Adobe product, because Docker Desktop users need their local user to be part of their local docker security group on their specific machine. Aside from that I would argue that Docker Desktop is by far one of the easiest developer tools (and do note that I said developer tools) to track licenses for.
In non-enterprise setups I can see why it would be annoying but I suspect that's why it's free for companies with fewer than 250 people and 10 million in revenue.
https://www.redhat.com/en/blog/generate-selinux-policies-con...
(base) kord@DESKTOP-QPLEI6S:/mnt/wsl/docker-desktop-bind-mounts/Ubuntu/37c7f28..blah..blah$ podman
Command 'podman' not found, but can be installed with:
sudo apt install podman
them: not so fast here's glib
me: great can use debian for stuff
them: not so fast, here's rpm
me: great can use docker for "abstracting" over Linux diversity
them: not so fast, here's podman
Taking this further (self-plug), you can automatically map your Compose config into a NixOS config that runs your Compose project on systemd!
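The target shape is roughly the stock `virtualisation.oci-containers` module; a hedged sketch of the kind of NixOS config such a mapping produces (the values are examples):

```
virtualisation.oci-containers = {
  backend = "podman";
  containers.web = {
    image = "docker.io/library/nginx:alpine";
    ports = [ "8080:80" ];
    volumes = [ "/srv/site:/usr/share/nginx/html:ro" ];
  };
};
```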
It takes forever, so long that I'll forget that I asked for something. Then later when they do get around to it, they'll take up more of my time than it's worth on documentation, meetings, and other bullshit (well to me it's bullshit, I'm sure they have their reasons). Then when they are finally convinced that yes a Webstorm license is acceptable, they'll spend another inordinate amount of time trying to negotiate some deal with Jetbrains. Meanwhile I gave up 6 months ago and have been paying the $5 a month myself.
Though as someone who's used a lot of Azure infrastructure as code with Bicep and also done the K8s YAML's I'm not sure which is more complicated at this point to be honest. I suspect that depends on your k8s setup of course.
As others have said, depending on an LLM for this is a disaster because you don’t engage your brain with the manifest, so you aren’t immediately or at least subconsciously aware of what is in that manifest, for good or for ill. This is how bad manifest configurations can drift into codebases and are persisted with cargo-cult coding.
But it is not cross-platform, so we settled on Podman instead, which came (distant) second in my tests. The UI is horrible, IMO but hey… compromises.
I use OrbStack for my personal stuff, though.
One pain point for me is rootless mode: my Podman containers tend to stop randomly for no obvious reason. I tried the recommended “enable user lingering” fix and it didn’t help. I’ve never run into this with Docker.
I get the theoretical advantages, daemonless architecture, better systemd integration, rootless by default, podman generate kube, etc. But if you’re just using containers for development or straightforward deployments, Docker feels smoother and more reliable. Maybe if you’re in a security-sensitive environment or need tighter system integration Podman shines, but for my use cases I’m still not convinced.
It is at the company I currently work for. We moved to Rancher Desktop or Podman (individual choice, both are Apache licensed) and blocked Docker Desktop on IT's device management software. Much easier than going through finance and trying to keep up with licenses.
When Docker Desktop changed licensing I tried to switch to Podman and it was a disaster, Podman was brand new and despite many blog posts that claimed it was the perfect replacement it did not work for me, and I have very simple requirements. So I ended up using Rancher Desktop instead, which was also very unstable but better.
Fast forward 1 year, Rancher was pretty good and Podman still did not work reliably on my mac.
Fast forward another year or so and I switched to colima.
I tried podman last time about one year ago and I still had issues on my old mac. So far colima has been good enough for my needs although at least two times a brew update broke colima and I had to reinstall from scratch.
[1] Tool for reference: https://github.com/data-catering/insta-infra
I use WSL for work because we have no linux client options. It's generally fine, but both forced windows update reboots as well as seemingly random wsl reboots (assuming because of some component update?) can really bite you if you're in the middle of something.
Most Mac users I see using it struggle to see the difference between "image" and "container". Complete lack of understanding.
All the same stuff can easily be done from cli.
Open source is different in exactly that, no procurement.
Finance makes procurement annoying so people are not motivated to go through it.
The usual way that procurement is handled, for the sake of everybody's sanity, is to sign a flat-rate / tiered contract, often with some kind of true-up window. That way the team that's trying to buy software licenses doesn't have their invoices swinging up/down every time headcount or usage patterns shifts, and they don't have to go back to the well every time they need more seats.
This is a reasonably well-oiled machine, but it does take fuel: setting up a new enterprise agreement like that takes humans and time, both of which are not free. So companies are incentivized to be selective in when they do it. If there's an option that requires negotiating a license deal, and an option that does not, there's decent inertia towards the latter.
All of which is a long way to say: many large enterprises are "good" at knowing how many of their endpoints are running what software, either by making getting software a paperwork process or by tracking with some kind of endpoint management (though it's noteworthy that there are also large enterprises that suck at endpoint management and have no clue what's running in their fleet). The "hard" part (where "hard" means "requires the business to expend energy they'd rather not) is getting a deal that doesn't involve the license seat counter / invoice details having to flex for each individual.
Rather, I _declaratively_ define configuration with nix. I deploy NixOS to machines (rpi4/5, x86, arm) and VMs (proxmox) and manage them remotely with nixos-anywhere.
One of these days, I’ll get around to doing a write up.
So, how are you supposed to run the proxy inside the container? Traefik for example? Genuinely curious.
It can be quite difficult to get this kind of money for such a nominal tool that has a lot of free competition. Docker was very critical a few years ago, but “why not use podman or containerd or…” makes it harder to stand up for.
You can run your own VM via any number of tools, or you can use WSL now on Windows, etc etc. But Docker Desktop was one of the first push-button ways to say "I have a Mac and I want to run Docker containers and I don't want to have to moonlight as a VM gardener to do it."
User=per-service-user
ExecStart=!podman-wrapper ...
where podman-wrapper passes `--user=1000:1000 --userns=auto:uidmapping=1000:$SERVICE_UID:1,gidmapping=1000:$SERVICE_GID:1` (with the UID/GID set based on the $USER environment variable). Each container runs as 1000:1000 inside the container, which is mapped to the correct user on the host.

It's much more than a GUI: it supports running k8s locally, managing custom VM instances, resource monitoring of containers, built-in local domain name support with SSL (mycontainer.orb), a debug shell that lets you install packages that aren't available in the image by default, much better and automated volume mounting, viewing every container in Finder, the ability to query logs, and an amazing UI. Plus it is much, much faster and more resource efficient.
I am normally with you that the terminal is usually enough, but the above features really do make it worth it, especially when using existing services that have complicated failure logs or are resource intensive (redis, postgres, livekit, etc.), or when you have a lot of ports running and want to call your service without having to remember port numbers or wrangle complicated docker network configuration.
It's not that they've maximised the utility of the core build/publish-container and acquire/run-container workflows; instead they're prioritising fluff around the edges of the core problem.
Podman, for all its various issues, is a whole lot more focused.
Setting this environment variable helped a lot: KUBECTL_PORT_FORWARD_WEBSOCKETS=true
Note: because Google's quality is falling you won't be able to find this variable using their search, but you can read about it by searching Bing or asking an LLM.
And of course, Easy =/= Simple, nor the other way around.
Big companies are made of teams of teams.
The little teams don't really get to make purchasing decisions.
If there's a free alternative, little teams just have to suck it up and try to make it work.
---
Also consider that many of these expenses are borne by the 'cost center' side of the house, that is, the people who don't make money for the company.
If you work in a cost center, the name of the game is saving money by cutting expenses.
If technology goes into the actual product, the cost for that is accounted for differently.
I have been using it for years. Tested it in Win11 and Linux Mint. I can even have a local kubernetes.
Check it out https://docs.orbstack.dev/
atlassian and google and okta and ghe and this and that (claude code?). that eventually starts to stack up.
https://docs.orbstack.dev/features/debug
Let alone the local resource monitor, increased performance, automated local domains (no more complicated docker network settings to get your app working with localhost), and more.
There's just not much money to be made there, especially considering that docker is a pretty thin wrapper built on top of a bunch of other free technology.
When somebody can make a meaningful (though limited) clone of your core product in 100 lines of bash, you can't make much of a business on top of it [1]
Docker suffers from being useful in the open source space but having no reasonable method to make revenue.
It costs about $100/year per seat for commercial use, IIRC. But it is significantly faster than Docker Desktop at literally everything, has a way better UI, and a bunch of QoL features that are nice. Plus Linux virtualization that is both better and (repeating on this theme) significantly more performant than Parallels or VMWare Fusion or UTM.
To draw a parallel: imagine a large open source project with a large userbase. The users interact with the project and a bunch of them have ideas for how to make it better! So they each cut feature requests against the project. The maintainers look at them. Some of the feature requests they'll work on, some of them they'll take well-formed pull requests. But some they'll say "look, we get that this is helpful for you, but we don't think this aligns with the direction we want the project to go".
A good procurement team realizes that every time the business inks a purchase agreement with a vendor, the company's portfolio has become incrementally more costly. For massive deals, most of that cost is paid in dollars. For cheaper software, the sticker price is low but there's still the cost of having one more plate to juggle for renewals / negotiations / tracking / etc.
So they're incentivized to be polite but firm and push back on whether there's a way to get the outcome in another way.
(this isn't to suggest that all or even most procurement teams are good, but there is a kernel of sanity in the concept even though it's often painful for the person who wants to buy something)
Comparing root docker with rootless podman performance is apples to oranges. However, even for rootless, pasta does have good performance.
Rootless works great, though there are some (many) images that will need to be tweaked out of the box.
Daemonless works great as well. You can still mount podman.sock like you can with Docker's docker.sock, but systemd handles dynamically generating the UNIX socket on connect() which is a much better solution than having the socket live persistently.
The only thing that I prefer Docker for is Compose. Podman has podman-compose, which works well and is much leaner than the incumbent, but it's kind of a reverse-engineered version of Docker Compose that doesn't support the full spec. (I had issues with service conditions, for example).
1. `ssh orb` (or machine name if you have multiple)
2. `sudo apk add docker docker-cli-compose` (install docker)
3. `sudo addgroup <username> docker` (add user to docker group)
4. `sudo rc-update add docker default` (set docker to start on startup)
Bonus, add lazydocker to manage your docker containers in a console
1. sudo apk add lazydocker
What you can do if you don't want to use Docker and don't want to maintain these images yourself is have two Podman machines running: one in rootful mode and another in rootless mode. You can, then, use the `--connection` global flag to specify the machine you want your container to run in. Podman can also create those VMs for you if you want it to (I use lima and spin them myself). I recommend using --capabilities to set limits on these containers namespaces out of caution.
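Roughly, the two-machine setup looks like this (hedged sketch; hosts, ports and socket paths are illustrative for lima-style VMs):

```
podman system connection add rootful  ssh://root@127.0.0.1:50022/run/podman/podman.sock
podman system connection add rootless ssh://user@127.0.0.1:50023/run/user/1000/podman/podman.sock
podman --connection rootful  run -d --name needs-low-ports -p 80:80 docker.io/library/nginx:alpine
podman --connection rootless run -d --name unprivileged docker.io/library/nginx:alpine
```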
Podman Desktop also installs a Docker compatibility layer to smooth over these incompatibilities.
Also, I don't want to have to troubleshoot why the docker daemon isn't running every time I need it
Either that or you have a massive process to acquire said licenses with multiple reporting requirements. So, your manager doesn’t need the headache and says just use the free stuff and move on.
I used to use docker. I use podman now. Are there teams in my enterprise who have docker licenses - maybe. But tracking them down and dealing with the process of adding myself to that “list” isn’t worth the trouble.
Colima would/should never be used in production for a number of reasons, but yeah it's great for local only development on a laptop.
And usually the need is coming from someone below C-level. So you have to: convince your manager and his manager, convince the procurement team it has to be in the budget (and it's usually much easier to convince them to pay for a dinner), then you have the procurement team itself, then you need to go through the vendor review process (or at least chase its execution).
This is reality in all big companies that this rule applies to. It's at least a quarter project.
Once I tried to buy a $5k/yr software license. The Sidekiq founder told me (after two months of back and forth) that he was done and I'd have to pay by CC (which, as a miserable team lead, I didn't have).
(Most people use containers in a limited way, where they should do just one thing and shouldn't require systemd. OTOH I run them as isolated developer containers, and it's just so much easier to run systemd in the container as the OS expects.)
I guess that depends on how many you need to do
BTW, I'm talking about docker/compose files. kubectl doesn't have a conversion there. When converting from podman, it's super simple.
Docker would be wise to release their own similar tool.
compose syntax isn't that complex, nor would it take advantage of many k8s features out of the box, but it's a good start for a small team looking to start to transition platforms
(Have been running k8s clusters for 5+ years)
Podman has a number of caveats that make it not a drop in replacement out of the box, but they are mostly few and far between. Once you've learned to recognize a handful of common issues, it's quite simple to migrate.
This might sound like me trying to explain away some of the shortcomings of podman, but honestly, from my point of view, podman does it better, and the workarounds and fixes I make to our scripts and code for podman are backwards compatible and I view them as improvements.
That's good to know it works well for you, because I would prefer not to use docker.
It feels a little hypocritical for us to feed our families through our tech talent and then complain that someone else is doing the same.
The WSL backend is the pain point, which doesn't go away with Docker or Podman or anything else.
I'd guess that's because "the spec" is more a .jsonschema than a spec about what behaviors any random version should exhibit. And I say "version" because they say "was introduced in version $foo", but they also now go out of their way to say that the field declaring what version the file conforms to is obsolete and only triggers a warning.
and sharing files from the host, ide integration, etc.
Not that it can't be done. But doing it is not just, 'run it'. Now you manage a vm, change your workflow, etc.
A second challenge with the particular setup I’m trying is peer authentication with Postgres, running bare metal on the host. I mount the Unix socket into the container, and on the host Postgres sees the Podman user and permits access to the corresponding DB.
Works really well but only if the container user is root so maps natively. I ended up patching the container image which was the path of least resistance.
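Concretely, the setup is along these lines (hedged; the image and DB names are placeholders, and the in-container user is root so peer auth on the host sees the podman user):

```
podman run -d \
  -v /var/run/postgresql:/var/run/postgresql \
  -e DATABASE_URL="postgresql:///mydb?host=/var/run/postgresql" \
  registry.example.com/my-app:latest
```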
(the company I work for uses them, our licensing used to be a mess similar to what's described here)
Use Restrictions. Customer and its Users may not and may not allow any third party to: [...] 10. Access the Service for the purpose of developing or operating products or services intended to be offered to third parties in competition with the Services[...]
Emphasis mine on 'operating'.
So I cannot use Docker Desktop to operate, for example: ECR, GCR or Harbor?
On my dev machine I do `docker compose up -d --build` in the directory of the Dockerfile, and it builds, uploads, and restarts the service on the server. In the podman world you're supposed to use Quadlets, which can be rsynced to the server, but I haven't found something simple for the build-step that doesn't involve an external registry or manually transferring the image.
What's the end-to-end solution for this?
Correct, but every additional software package and each additional license adds more to track.
Every new software license requires legal to review it.
These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)
Then they start investigating how often people use software packages and realize most people aren't actually using most software they have seats for. This happens because when software feels 'free' people request it for one-time use for a thing or to try it out and then forget about it, so you have low utilization across the board.
So they start making it harder to add new software. They start auditing usage. They may want reports on why software is still needed and who uses it.
It all adds up. I understand you don't think it should be this way, but it is at big companies. You're right that the $24/user per month isn't much, but it's one of dozens of fees that get added, multiplied by every employee in the company, and now they need someone to maintain licenses, get them reviewed, interact with the rep every year, do the negotiation battles, and so on. It adds up fast.
Maybe a couple quirks with TCP port access, but a quick convo with gemini helped me
I could get over this. But, IMO, it lends itself to asking the "why" question. Why wouldn't Podman make installing it easier? And the only thing that makes sense to me is that RedHat doesn't want their dev effort supporting their competitor's products.
That's a perfectly reasonable stance, they owe me nothing. But, it does make me feel that anything not in the RH ecosystem is going to be treated as a second-class citizen. That concerns me more than having to build my own debs.
Costs and management grow in an O(n*m) manner where n is employees and m is numbers of licenses per employee. It seems like nothing when you're small and people only need a couple licenses, but a few years in the aggregate bills are eye-popping and you realize the majority of people don't use most of the licenses they've requested (it really happens).
Contrast this with what it takes for an engineer to use a common, free tool: They can just use it. No approval process. No extra management steps for anyone. Nothing to argue that you need to use it every year at license audit time. Just run with it.
> Unregistry is a lightweight container image registry that stores and serves images directly from your Docker daemon's storage.
>
> The included docker pussh command (extra 's' for SSH) lets you push images straight to remote Docker servers over SSH. It transfers only the missing layers, making it fast and efficient.
But, given that podman rootless doesn't have a daemon like Docker, I think using Podman in a push-to-remote scenario is just going to have more pieces for you to manually manage.
There are PaaS solutions out there, like Dokku, that would give you a better devx but will also bring additional setup and complexity.
Yes, I could use Flatpak on Ubuntu, however I feel like this is partly something Ubuntu/Debian should provide out of the box
The business world is full of things that "should" be a certain way, but aren't.
For the technology world, double the number.
We'd all like to live in some magical imaginary HN "should" world, but none of us do. We all work in companies that are flawed, and sometimes those flaws get in the way of our work.
If you've never run into this, buy a lottery ticket.
Or when your IT department is prohibited from purchasing anything that doesn't come from Microsoft or CDW.
It is for now, but I can't think of a player as large as Docker that hasn't pulled the rug out from under deals like this. And for good reason, that deal is probably a loss leader and if they want to continue they need to convert those free customers into paying.
But I have to feed my family.
If you use rootless Podman on a Redhat-derived distribution (which means Selinux), along with a non-root user in your container itself, you're in for a world of pain.
I came across just how slow recently:
- Container -> host: 0.398 Gbps vs. 42.2 Gbps
- host -> container: 20.6 Gbps vs 47.4 Gbps
Source: https://github.com/containerd/nerdctl/blob/main/docs/rootles...
Also, fuck them: https://github.com/hashicorp/vagrant/blob/v2.4.9/LICENSE who the fuck are they expecting to pay for Vagrant, or that "AWS gonna steal our ... vagrant?"
[1]. https://docs.podman.io/en/latest/markdown/podman-system-serv...
What else can they do then having a package for every distro?
https://podman.io/docs/installation#installing-on-linux
Including instructions to build from source (including for Debian and Ubuntu):
https://podman.io/docs/installation#building-from-source
I don't know about this specific case, but Debian and/or Ubuntu having outdated software is a common Debian/Ubuntu problem which is nearly always caused by Debian/Ubuntu itself (funnily, it being outdated in Ubuntu doesn't mean it's outdated in Debian, and the other way around ;=) ).
I'm conflicted about whether or not it's better to run a root daemon that can launch unprivileged non-root containers or run rootless containers launched by a non-root user.
Anyone have thoughts or more definitive resources they could point to that discuss the tradeoffs?
A huge pain was when I used "podman-compose" with a custom podman system storage location: twice it ended up corrupted when doing an "up" and I had to completely scratch my podman system storage.
I must have missed something though ...
They can do what Docker and many other software providers do that are committed to cross OS functionality. They could build packages for those OSes. Example:
https://docs.docker.com/engine/install/ubuntu/#install-using...
The install instructions you link to are relying on the OS providers to build/package Podman as part of their OS release process. But that is notoriously out-of-date.
You could argue, "Not Podman's Problem", and, in one sense, you'd be right. But, again, it leads to the question "Why wouldn't they make it their problem like so many other popular projects have?" and I believe I answered that previously.
With Ubuntu at least, some upstreams publish official PPAs so that you aren't stuck on the rapidly aging versions that Canonical picks when they cut an LTS release.
Debian I found out recently has something similar now via "extrepo".
In general we do actually try to provide full context for errors from dockerd. Some things can be cryptic because, frankly, they are cryptic and require digging into what really happened (typical of errors from runc), but we do tend to wrap things so at least you know where the call site was.
There's also tracing data you can hook into, which could definitely be improved (some legacy issues around context propagation that need to be solved).
I've definitely seen, in the past, my fair share of errors that simply say "invalid argument" (typically this is a kernel message) without any context but have worked to inject context everywhere or do better at handling errors that we can.
So definitely interested in anything you've seen that could be improved because no one likes to get an error message that you can't understand.
Either the machine is a single security domain, in which case running as root is no issue, or it's not and you need actual isolation in which case run VMs with Firecracker/Kata containers/etc.
Rootless is indeed a world of pain for dubious security promises.
- podman having a more consistent CLI API/more parameters (but I think docker did at least partially catch up)
- user-ns containers allow mounting the build context instead of copying it. This means that if you somehow end up with a huge build context, a user-ns build can be _way_ faster than a "classical" docker build (might also apply to rootless docker, idk). We ran into that when the one person on the team using Mac+Docker asked if we could do something about the unbearably slow docker build times (no one else on the team experienced them :/)
- docker implicitly always has Docker Hub configured as the source for resolving "unqualified" image names; this might not be true for your podman default setup, so some scripts which should work with both might fail. It's easy to fix: preferably always fully qualify your images, as there are increasingly many image hosts, or in the worst case add Docker Hub in the config file (see the sketch after this list).
- "podman compose" supports somewhat fewer features. This might seem like a huge issue, but compose doesn't seem like the best choice for deploying software, and looking at how it turned out in dev setups once things became larger/more complicated, I've come to the conclusion that docker/podman compose is one of those easy-to-start-with, then get trapped in a high-complexity/maintenance-cost "bad" corner technologies. But I'm still looking for alternatives.
- podman sometimes misses some resource management features, but docker also sometimes differs in how it effectively enforces them, not just between versions but also with the same version between OSes. This has led to issues before where rootless docker kills a container and docker on Mac doesn't, because on Mac it didn't notice spikes in resource usage (in that specific case).
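The config-file fix mentioned in the unqualified-image point above is a one-liner (hedged; the system-wide file is shown, a per-user override also works):

```
# /etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]
```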
I'm in IT consulting. If most companies could even get the basic best practices of the field implemented, I wouldn't have a job.
If I just want to run a random Docker container, I'm grateful I can get at least "some security" without paying as much in setup/debugging/performance.
Of course, ideally I wouldn't have to choose and the thing that runs the container would be able to run it perfectly securely without me having to know that. But I appreciate any movement in that direction, even if it's not perfect.
But Apple Container is another option with direct IP network support on macOS Tahoe, not possible with macOS Sequoia.
We had some similar issues and it was due to containers running out of resources (mainly RAM/memory, by a lot, but only for a short amount of time). And it happens that in rootless this was correctly detected and enforced, but on non-rootless docker (in that case on a Mac dev laptop) it didn't detect these resource spikes and hence "happened to work" even though it shouldn't have.
There are existing tools that fill this gap (Singularity/Apptainer). But, there is always friction when you have to use a specialized tool versus the default. For me, this is a core usecase for rootless containers.
For the reduced feature set we need from containers in bioinformatics, rootless is pretty straightforward. You could get largely the same benefits from chroots.
Where I think the issues start is when you start to use networking, subuids, or other features that require root-level access. At this level, rootless becomes a tedious exercise in configuration that probably isn't worth the effort. The problem is, the features I need will be different from the features you need. Satisfying all users in a secure way may not be worth it.
The benefit is that, Alpine has access to all your local and network drives so you can use them. You can sandbox them as well. It's not a big learning curve, just a good VM with access to all drives but isolated to local only.
- rootless docker, works fine, not fully sure why it's not the default by now (I do have issues from time to time but I had the same issues with root docker)
- rootfull podman
- running docker/podman daemon as a different and non root user (but have fun trying to mount anything !?)
I have also had no issues with networking, permissions or volumes while running as non-root user. Are you simply facing issues setting it up, or are you hitting some bugs or missing features?
[1] https://github.com/containers/podman-compose
[3] https://github.com/docker/compose
[4] https://github.com/docker/compose/tree/v1
[5] https://www.devopsroles.com/how-to-use-docker-compose-with-p...
It's not just that you need a licence now, it's that even if we took it to procurement, until it actually got done we'd be at risk of them turning up with a list of IP addresses and saying "are you going to pay for all of these installs, then?". It's just a stupid position to get into. The Docker of today might not have a record of doing that, but I wouldn't rule out them getting bought by someone like Oracle who absolutely, definitely would.
providing duplicate/additional non-official builds for other OSes:
- undermines the OSes' package curation
- is confusing for the user
- costs additional developer time, which for most OSS is fairly limited
- for non-vendorable system dependencies this additional dev-time cost can be way higher in all kinds of surprising ways
- obfuscates whether a Linux distro is incapable of properly maintaining its packages
- leads to a splitting of the target-OS-specific ecosystem of software using this as a dependency
etc.
it's a lose-lose-lose for pretty much everyone involved
so as long as you don't have a monetary reason that you must do it (like e.g. docker has), it's in my personal opinion a very dumb thing to do
I apologize for being a bit blunt but in the end why not use a Linux distribution which works with modern software development cycles?
Blaming others for problems with the OS you decided to use when there are alternatives seems not very productive.
Most likely gp is having issues with volumes and hasn’t figured out how to mix the :z and :Z attribute to bind mounts. Or the containers are trying to do something that security-wise is a big no-no.
In my experience SELinux defaults have been much wiser than me and every time i had issues i ended up learning a better way to do what i wanted to do.
Other than that… it essentially just works.
It runs qemu under the hood if you want to run x86 (or sparc or mips!) instead of arm on a newer mac.
As much as I like Podman (and I really do), Docker has supported rootless mode for a long time and it's not any harder to set up than Podman.
> Use podman-compose as a drop-in replacement
Oh, if only it were a drop-in replacement. There are so many ways in which it is not exactly compatible with docker-compose, especially when it comes to the network setup. I have wasted more hours on this than I can count.
i've been using an archlinux vm for everything development over the past year and a half and i couldn't be happier.
The one thing I don't necessarily agree:
"Privileged ports in rootless mode not working? Good! That's security working as intended. A reverse proxy setup is a better architecture anyway."
I usually use Nginx as a reverse proxy - why not have it set up in the exact same way as the rest of your apps? That's a simplicity advantage. So with Podman, I would just run this one exact container in root mode - that's still better than running all of them as root, but not quite ideal.
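For completeness, the other workaround people usually mention instead of a root-mode proxy container is lowering the privileged-port floor (hedged; the second line persists it via a sysctl drop-in):

```
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/50-rootless-ports.conf
```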
I am not a fan of docker-compose - a classic example of a tool trying to do too much for me, so the lack of something similar in Podman is not a drawback for me :)
Not sure about tooling around logs and monitoring though - there is plenty for Docker.
The best part? Whenever there's an "uh oh," you just SSH in to a box, patch it, and carry on about your business.
Have a look at https://kompose.io/ as well.
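Hedged usage example (flags per the kompose docs; file names are the usual defaults):

```
kompose convert -f docker-compose.yml -o k8s/   # emits Deployment/Service manifests
kubectl apply -f k8s/
```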
Because they just want their software package to run and they have been given some magic docker incantation that, if they are lucky, actually launches everything correctly.
The first time I used Docker I had so many damn issues getting anything to work I was put off of it for a long time. Heck even now I am having issues getting GPU pass through working, but only for certain containers, other containers it is working fine for. No idea what I am even supposed to do about that particular bit of joy in my life.
> All the same stuff can easily be done from cli.
If a piece of technology is being forced down a user's throat, users just want it to work and to stay out of their way so they can get back to doing their actual job.
One thing which just occurred to me, maybe it's possible to have a [container] and a [service].user in a quadlet?
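Untested sketch of what I'm wondering about (the [Container] keys are standard quadlet, the [Service] section gets passed through to systemd; whether User= actually behaves here is exactly the open question):

```
# myapp.container
[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Service]
User=per-service-user

[Install]
WantedBy=multi-user.target
```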
`services.podman.enable`
This also means that it's in the reproducible part of my setup which is a bonus.
> Remove layers, keep things simple.
Due to the first line above, I'm not sure if I'm reading the second line correctly. But I'm going to assume that you're referring to the OCI image layers. I feel your pain. But honestly, I don't think that image layers are such a bad idea. It's just that the best practices for those layers are not well defined and some of the early tooling propagated some sub-optimal uses of those layers.
I'll just start with when you might find layers useful. Flatpak's sandboxing engine is bubblewrap (bwrap). It's also a container runtime that uses namespaces, cgroups and seccomp like OCI runtimes do. The difference is that it has more secure seccomp defaults and it doesn't use layers (though mounts are available). I have a tool that uses bwrap to create isolated build and packaging environments. It has a single root fs image (no layers). There are two annoyances with a single layer like this:
1. If you have separate environments for multiple applications/packages, you may want to share the base OS filesystem. You instead end up replicating the same file system redundantly.
2. If you want to collect the artifacts from each step (like source download, extract and build, 'make install', etc) into a separate directory/archive, you'll find yourself reaching out for layers.
I have implemented this and the solutions look almost identical to what OCI runtimes do with OCI image layers - use either overlayfs or btrfs/zfs subvolume mounts.
So if that's the case, then what's the problem with layers? Here are a few:
1. Some tools like the image builders that use Dockerfile/Containerfile create a separate layer for every operation. Some layers are empty (WORKDIR, CMD, etc). But others may contain the results of a single RUN command. This is very unnecessary and the work-arounds are inelegant. You'll need to use caches to remove temporary artifacts, and chain shell commands into a single RUN command (using semicolons).
2. You can't manage layers like files. The chain of layers are managed by manifests and the entire thing needs a protocol, servers and clients to transfer images around. (There are ways to archive them. But it's so hackish.)
So, here are some solutions/mitigations:
1. There are other build tools like buildah and packer that don't create additional layers unless specified. Buildah, a sister project of Podman, is a very interesting tool. It uses regular (shell) commands to build the image, but those commands closely resemble the Dockerfile commands, making it easy to learn. Thus you can write a shell script to build an image instead of a Dockerfile (see the sketch after this list). It won't create additional layers unless you specify, and it also has some nifty features not found in Dockerfiles.
Newer Dockerfile builders (I think buildkit) have options to avoid creating additional layers. Another option is to use dedicated tools to inspect those layers and split/merge them on demand.
2. While a protocol and client/servers are rather inconvenient for lugging images around, they did make themselves useful in other ways too. Container registries these days don't host just images. They can host any OCI artifact. And you can practically pack any sort of data into such an artifact. They are also used for hosting/transferring a lot of other artifacts like helm charts, OPA policies, kubectl plugins, argo templates, etc.
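As promised in point 1 above, a minimal buildah sketch (assumes buildah is installed; the image, packages and paths are just examples):

```
ctr=$(buildah from docker.io/library/alpine:3.20)
buildah run "$ctr" -- apk add --no-cache python3
buildah copy "$ctr" ./app /app
buildah config --entrypoint '["python3","/app/main.py"]' "$ctr"
buildah commit "$ctr" localhost/myapp:latest   # one new layer on top of the base
buildah rm "$ctr"
```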
> So any alternative tooling that forces Docker to get its act together is welcome
What else do you consider as some bad/sub-optimal design choices of Docker? (including those already solved by podman)
Idk what the problem is, but it's ugly. I switched to orbstack because there was something like a memory leak happening with docker desktop, just using waaaaay too many resources all the time, sometimes it would just crash. I just started using docker desktop from the get-go because when I came on I had multiple people with more experience say 'oh, you're coming from linux? Don't even try to use the docker daemon, just download docker desktop'.
The container split is often introduced because you have product-a, product-b and infrastructure operations teams/individuals that all share responsibility for an OS user space (and therefore none are accountable for it). You instead structure things as: a host OS and container platform for which infra is responsible, and then product-a container(s) and product-b container(s) for which those teams are responsible.
These boundaries are placed (between networks, machines, hosts and guests, namespaces, users, processes, modules, etc. when needed due to trust or useful due to knowledge sharing and goal alignment.
When they are present in single-user or small highly-integrated team environments, it's because they've been cargo-culted there, yes, but I've seen an equal number of environments where effective and correct boundaries were missing as I've seen ones where they were superfluous.
Must not be a good sysadmin then. SELinux improves the security and software like podman can be relatively easily be made to work with it.
I use podman on my Fedora Workstation with selinux set to enforce without issues
Whether it's $100/year or $10k/year it's all the same headache. Yes, this is dumb, but it's how the process works at a lot of companies.
Whereas if it's a free tool that just magically goes away. Yes, this is also dumb.
It's not the money, it's the bureaucracy. You can't just buy software, you need a justification, a review board meeting, marketplace survey with explanations of why this particular vendor was chosen over others with similar products, sign off from the management chain, yearly re-reviews for the support contract, etc...
And then you need to work with the vendor to do whatever licensing hoops they need to do to make the software work in an offline environment that will never see the Internet, something that more often than not blows the minds of smaller vendors these days. Half the time they only think in the cloud and situations like this seem like they come from Mars.
The actual cost of the product is almost nothing compared to the cost of justifying its purchase. It can be cheaper to hire a full time engineer to maintain the open source solutions just to avoid these headaches. But then of course you get pushback from someone in management that goes "we want a support contract and a paid vendor because that's best practices". You just can't win sometimes.
Does the "podman generate kube" command just define pods, or does it support other K8s components such as services and ingresses?
(Should live at https://forgejo.org/docs/v12.0/admin/actions/docker-access/ once it is finished up, if anyone runs into the comment after the draft is gone.)
I prefer to use podman if it’s available on my system but it still hasn’t hit the critical mass needed for it to be targeted and that’s a shame.
Also is there something like a dockerfile for buildah? I’ve tried a few times to understand how to use buildah and just fall back on a dockerfile because I can’t seem to wrap my head around how to put it into something IAC like.
Oh god. I can’t imagine how I could build reliably software if this is what I was doing. How do you know what “patches” are needed to run your software?
A good reference answer: https://unix.stackexchange.com/questions/651198/podman-volum...
TL;DR: lowercase if a file from the host is shared with a container or a volume is shared between multiple containers. Uppercase in the same scenario if you want the container to take an exclusive lock on the volumes/files (very unlikely).
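Hedged examples of both flavours (images and paths are placeholders):

```
# lowercase z: shared label, e.g. a directory used by several containers
podman run -d -v "$PWD/shared-data:/data:z" docker.io/library/nginx:alpine
# uppercase Z: private label, only this container should touch it
podman run -d -v "$PWD/private-data:/data:Z" docker.io/library/nginx:alpine
```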
This is going to differ company to company but since we're narrowing it to large companies I disagree. Usually there's a TPM that tracks license distribution and usage. Most companies provide that kind of information as part of their licensing program (and Docker certainly does.)
> Every new software license requires legal to review it.
Yes, but this is like 90% of what legal does - contract review. It's also what managers do but more on the negotiation end. Most average software engineers probably don't realize it but a lot of cloud services, even within a managed cloud provider like AWS, require contract and pricing negotiation.
> These centralized departments add up all of the license and SaaS costs and it shows up as one big number, which executives start pushing to decrease. When you let everyone get a license for everything they might need, it gets out of control quickly (many startups relearn this lesson in their growth phase)
As I said earlier, I can't speak for other companies but at large companies I've worked at this just simply isn't true. There's metrics for when the software isn't being used because the corporation is financially incentivized to shrink those numbers or consolidate on software that achieves similar goals. They're certainly individually tracked fairly far up the chain even if they do appear as a big number somewhere.
I get why most people think they need containers, but it really seems only suited for hyper-complex (ironically, Google) deployments with thousands of developers pushing code simultaneously.
Mostly agree. But something like Podman w/ RedHat behind it is unlikely to be limited in the same way a lot of community OSS projects are.
Unfortunately, I disagree with just about every other point you made but don't think it's worth responding point-by-point. In short, I think a project having dedicated builds for popular OSes is a win-win for just about everyone, excepting that it does take (sometimes considerable) effort to support those cross-OS builds. Additionally, there are now options like Snap/Flatpak/AppImage that can be targets instead of the OS itself, although there is admittedly a tradeoff there as well.
For some projects, say something like ripgrep, just using what is in the OS repo is fine because having the latest and greatest features/bug-fixes is unlikely to matter to most people using the tool.
But, on something like Podman, where there are so many pieces, it's a relatively new technology, and the interaction between kernel, OS, and user space is so high, being stuck with a non-current OS-provided release for a couple of years is a non-starter.
> why not use a Linux distribution which works with modern software development cycles?
Because I like my OS to be stable, widely supported, and I also like some of my applications to be evergreen. I find Ubuntu is usually a really good mix that way and I'm going on 15+ years of use. There are other solutions for that that I could use, but I'm mostly happy where I am and don't want to spend the kind of time it would take to adopt a different OS and everything that would follow from that.
That leads _me_ to avoid Podman currently. I can appreciate that you have a different opinion, I just think you are overplaying your perspective a bit in the comment above.
On Linux, for development, podman and docker are pretty similar but I prefer the k8s yaml approach vs compose so tend to use podman.
I don't think Apple really cares about dev use cases anymore so I haven't used a Mac for development in a while (I would for iOS development of course if that ever came up).
Either you haven't worked on k8s at scale or you're seriously suggesting an overly complex solution to the elegant docker-compose. Docker compose exists because of its simplicity and stability. I have also started using swarm and it doesn't get the recognition it deserves for the most easy-to-manage orchestration. Podman doesn't have such a thing. And yes, podman-compose is absolute garbage.
It's only 9 bucks a year, it's only 5 bucks a month, it's less than a dollar a day.
Docker, IDE, ticketing system, GitHub, Jira, Salesforce, email, office suite, Figma.... all of a sudden you're spending 1000 bucks a month per staff member for a small 10-person office.
Meanwhile AWS is charging you .01xxxx for bandwidth, disk space, CPU time, S3 buckets, databases. All so Tencent-based AI clients from China hammer your hardware and run up your bill....
The rent seeking has gotten out of hand.
The new docs split that out into separate podman-container/volume/etc.unit(5) pages, with quadlet.7 being the index page. So they're still linking to the same documentation, just the organization happened to change underneath them.
If you must see what they linked to originally, the versions docs are still the original organization (i.e. all on one page): https://docs.podman.io/en/v5.6.0/markdown/podman-systemd.uni...
More here:
- https://vermaden.wordpress.com/2023/06/28/freebsd-jails-cont...
- https://vermaden.wordpress.com/2025/04/11/freebsd-jails-secu...
- https://vermaden.wordpress.com/2025/04/08/are-freebsd-jails-...
- https://vermaden.wordpress.com/2024/11/22/new-jless-freebsd-...
1. People hear about how great rootless is with Podman but then expect to be able to switch directly from rootful Docker to rootless Podman without changing anything. The only way that could work is if there was no difference between rootful and rootless to begin with, but people don't want to hear that. They combine these two selling points in their head and think they can get both a drop-in replacement for Docker and also rootless by default. The proper choice is to either switch from rootful Docker to rootful Podman *or* put in the work to make your container work in rootless, work you would also have had to do with rootless Docker.
2. Docker Compose started out as an external third-party add-on (v1) which was later rewritten as an internal facility (v2) but `podman compose` calls out to either `docker-compose` (i.e. v1) or to its own clone of the same mechanism, `podman-compose`. The upshot is a lot of impedance mismatch. Combine that with the fact that Podman wants you to use Quadlets anyway, resulting in less incentive to work on these corner cases.
3. Docker has always tried to pretend SELinux doesn't exist, either by hosting on Debian and friends or by banging things into place by using their privileged (rootful) position. Podman comes from Red Hat, and until recently they had Mr SELinux on the team. Thus Podman is SELinux-first, all of which combines to confuse transplants who think they can go on ignoring SELinux.
4. On macOS and Windows, both Podman and Docker need a background Linux VM to provide the kernel, without which they cannot do LXC-type things. These VMs are not set up precisely the same way, which produces migration issues when someone is depending on exact details of the underlying VM. One common case is that they differ in how they handle file sharing with the host.
You're running into the `/etc/sub[ug]id` defaults. The old default was to start your normal user at 100000 with 64k additional sub-IDs per user, but that changed recently when people at megacorps, with hundreds of thousands of employees defined in LDAP and similar, ran into ID conflicts. Sub-IDs now start at 2^19 on RHEL 10 for this reason.
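A rough sketch of checking and fixing those ranges (the exact numbers are only examples; `podman system migrate` makes podman pick up the new mapping):
```
# see which sub-ID ranges your user currently has
grep "$USER" /etc/subuid /etc/subgid

# assign a fresh 64k range starting at 2^19 (the newer default mentioned above)
sudo usermod --add-subuids 524288-589823 --add-subgids 524288-589823 "$USER"
podman system migrate
```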
Whatever you gain by running FreeBSD comes at a high cost. And that high cost is keeping FreeBSD jails from taking over.
On Windows, you can use the docker that's built in to the default WSL2 image (ubuntu), and Docker Desktop will use it if available, otherwise it uses its own backend (probably also Hyper-V based).
I use Orbstack myself, but that's also a paid product.
Sure, I agree that where it's easily doable (like e.g. ripgrep), having non-distro-specific builds is a must-have.
But sadly this doesn't fully work for podman AFAIK, as it involves a lot of subtle interactions with things that aren't consistently set up across Linux distros, with probably the worst offender being the Linux security module system (e.g. SELinux, AppArmor, etc.). Thinking about it, sooner or later you probably could have a mostly OS-independent podman setup (limited to newer OS versions). Or, to be more specific, three of them: one with SELinux, one with AppArmor, and one with neither, so I guess maybe not :/
Linux gets a new privilege escalation exploit like once a month. If something would break out of the Docker daemon, it will break out of your own user account just fine. Using a non-root app does not make you secure, regardless of whatever containerization feature claims to add security in your own user namespace. On top of all that, Docker has a rootless mode. https://docs.docker.com/engine/security/rootless/
The only things that will make your system secure are 1) hardening every component in the entire system, or 2) virtualization. No containers are secure. That's why cloud providers all use mini-VMs to run customer containers (e.g. AWS Fargate) or force the customer to manage their own VMs that run the containers.
I tried to use podman, but that was largely a waste of time and I reverted to Docker. I don't have time to go through docs to figure out why something that's supposed to work isn't working.
If you're building really arch-specific stuff, then I could see not wanting to go there, but Rosetta support is pretty much seamless. It's just slower.
I'm using dind too, but this requires privileged runners...
I don't think there's any stiffing going on, since the open source contributors knowingly contributed with a license that specifically says that payment isn't required. It is not reasonable for them to take the benefits of doing that but then expect payment anyway.
I'm not sure you realize that "open source" means anyone anywhere is free to use, modify, and redistribute the software in any way they see fit? Maybe you're thinking of freeware or shareware which often _do_ come with exceptions for commercial use?
But anyway, as an open source contributor, I have never felt I was being "stiffed" just because a company uses some software that I helped write or improve. I contribute back to projects because I find them useful and want to fix the problems that I run into so I don't have to maintain my own local patches, help others avoid the same problems, and because making the software better is how I give back to the open source community.
(It's titled "Don't Break Debian" but might also be called "Don't Break Ubuntu" as it applies there just as well.)
The only impactful difference I've noticed so far is that the company is moving to an artifact repository that requires authentication, and mounting secrets using --mount doesn't support the env= parameter -- that's really it.
I treat podman like I did docker all day long and it works great.
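For context, the env= difference looks roughly like this; the secret id and commands are made up, and env= needs a recent BuildKit frontend, so treat it as a sketch:
```
# syntax=docker/dockerfile:1
FROM docker.io/library/alpine:3.20

# newer BuildKit exposes the secret as an env var for this RUN only
RUN --mount=type=secret,id=api_token,env=API_TOKEN test -n "$API_TOKEN"

# a more portable form that reads the mounted file instead
RUN --mount=type=secret,id=api_token test -s /run/secrets/api_token
```
Built with something like `docker build --secret id=api_token,src=./token.txt .`.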
My deployment code worked by running the software outside of the jail environment and monitoring the running processes using `ptrace` to see what files it was trying to open. The `ptrace` output generated a list of dependencies, which could then be copied to create a deployment package.
This worked brilliantly and kept our deployments small and immutable and somewhat immune to attack -- not that being attacked was a huge concern in 2001 as it is today. When Docker came along, I couldn't help but recall that early work and wonder whether anyone has done a similar thing to monitor file usage within Docker containers and trim them down to size after observing actual use.
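A rough sketch of the same idea with strace today (binary name and workload are hypothetical):
```
# record every file the process (and its children) opens while exercising it
strace -f -e trace=openat -o opens.log ./myapp --run-test-workload

# pull the opened paths out of the log as a starting dependency list
grep -o '"[^"]*"' opens.log | tr -d '"' | sort -u > observed-deps.txt
```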
The majority of businesses in the world, (and the majority of jobs) are created and delivered by small business, not big.
And then there are the issues where one service goes down and takes everything else down with it.
This is only partially true. Google's runtime (gvisor) does not share a kernel with the host machine, but still runs inside of a container.
1. As parallel commenters have pointed out, no. Plenty of open source developers exist who aren't interested in getting paid for their open source projects. You can tell this because some open source projects sell support or have donation links or outright sell their open source software and some do not. This line of thinking seems to come out of some utopian theoretical world where open source developers shouldn't sell their software because that makes them sell-outs but users are expected to pay them anyways.
2. I do love the idea of large companies paying for open source software they use because it tends to set up all kinds of good incentives for the long term health of software projects. That said, paying open source projects tends to be comically difficult. Large companies are optimized for negotiating enterprise software agreements with a counterparty that is primed to engage in that process. They often don't have a smooth way to like, just feed money into a Donate form, or make a really big Github or Patreon Sponsorship, etc. So even people in large companies that really want to give money to open source devs struggle to do so.
And then there's the windowing system of macOS that feels like it's straight from the 90s. "System tray" icons that accumulate over time and are distracting, awful window management with clunky animations, the near-useless dock (clicking on VS Code shows all my 6 IDEs, why?). Windows and Linux are much more modern in that regard.
The Mac hardware is amazing, well worth its price, but the OS feels like it's from a decade ago.
I'm sure what you wrote here is true, but I can't fathom how. Maybe it's an RH-specific issue? (Like how Ubuntu breaks rootless bwrap by default.)
There are many benefits to be had for individuals and small companies as well. The peace of mind that comes with immutable architecture is incredible.
While it's true that you can often get quite far with the old cowboy ways, particularly for competent solo devs or small teams, there's a point where it starts to unravel, and you don't need to be a hyper-complex mega-corp to see it happen. Once you stray from the happy path or have common business requirements related to operations and security, the old ways become a liability.
There are reasons ops people will accept the extra layers and complexity to enable container-based architecture. They're not thrilled to add more infrastructure, but it's better than the alternative.
We set up a git post receive hook which built static files and restarted httpd on a git receive. Deployment was just 'git push live master'.
While I've used Docker a lot since then, that remains the single easiest deployment I've ever had.
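A post-receive hook like that is usually only a few lines; a sketch with made-up paths and branch:
```
#!/bin/sh
# post-receive: check out the pushed branch into the web root and rebuild
GIT_WORK_TREE=/var/www/site git checkout -f master
/var/www/site/scripts/build-static.sh
sudo systemctl restart httpd
```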
I genuinely don't understand what docker brings to the table. I mean, I get the value prop. But it's really not that hard to set up http on vanilla Ubuntu (or God forbid, OpenBSD) and not really have issues.
Is the reproducibility of docker really worth the added overhead of managing containers, docker compose, and running daemons on your devbox 24/7?
Or so I was told when I made the monumental mistake of trying to fight such a policy once.
So now we just have a don't ask don't tell kind of gig going on.
I don't really know what the solution is, but dev laptops are goldmines for haxxors, and locking them down stops them from really being dev machines. shrug
Or so it seems to me whenever I have to deal with them. We ended up with Microsoft defender on our corp Macs even.. :|
There is no bottom to the barrel, and incompetence and insensitivity can rise quite high in some cases.
I do understand that this mostly is because management wants staff to be replaceable and disposable having specialty tools suggests that a person can be unique.
OT because not docker
In the realm of artistic software (think Ableton Live and the Adobe suites), licensing hell is a real thing. In my recent experience it sorts the amateurs from the pros, in favour of the amateurs.
The time spent learning the closed system includes hours and dollars wrestling licenses. Pain++. Not just the unaffordable price, but time that could be spent creating
But for an aspiring professional it is the cost of entry. These tools must be mastered (if not paid for, ripping is common) as they have become a key part of the mandated tool chains, to the point of enshittification
The amateur is able to just get on with it, and produce what they want when they want with a dizzying array of possible tools
For me, as an ex-ops, the value proposition is to be able to package a complex stack made of one or more db, several services and tools (ours and external), + describe the interface of these services with the system in a standard way (env vars + mounts points).
It massively simplifies the onboarding experience, makes updating the stack trivial, and also allows devs, CI, and prod to run the same versions of all the libraries and services.
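A compose file is a reasonable way to picture that interface; a minimal sketch with made-up image names and values:
```
services:
  db:
    image: docker.io/library/postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
  api:
    image: registry.example.com/our-api:1.2.3
    environment:
      DATABASE_URL: postgres://postgres:example@db:5432/app
    ports:
      - "8080:8080"
volumes:
  db-data:
```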
It doesn't quite change your argument, but where have you seen $9/year/dev?
The only way I see a $9 figure is the $9/month for Docker Pro with a yearly sub, so it's 12*$9=$108/year/dev or $1080/year for your 10 devs team.
Also it should be noted that Docker Pro is intended for individual professionals, so you don't have collaboration features on private repos and you have to manage each licence individually, which, even for only 10 licences, implies a big overhead.
If you want to work as a team you need to take the Docker Team licence, at $15/month/dev on a yearly sub, so now you are at $1800/year for your 10 devs team.
Twenty times more than your initial figure of $90/year. Still, $1800 is not that much in the grand scheme of things, but then you still have to add a usual Atlassian sub, an Office365/GWorkspace sub, an AI sub... You can end-up paying +$200/month/dev just in software licences, without counting the overhead of managing them.
But verbosity - yeah, kubernetes is absolutely super-verbose. Some 100-line docker-compose could easily end up as 20 yamls of 50 lines each. kubectl really needs some sugar to convert yamls from simple form to verbose and back.
https://cloud.google.com/blog/products/serverless/cloud-run-...
zips the local copy of the branch and rsyncs it to the environment, and some other stuff
This would happen in your Dockerfile, and then the process of actually "installing" your application is just docker run (or kubectl apply, etc), which is an industry standard requiring no specialized knowledge about your application (since that is abstracted away in your Dockerfile). You're basically splitting the process of building and distributing your application into: write the software, build the image, deploy the image.
Everyone who uses these tools, which is most people by this point, will understand these steps. Additionally, any framework, cloud provider, etc that speaks container images, like ECS, Kubernetes, Docker Desktop, etc can manage your deployments for you, since they speak container images. Also, the API of your container image (e.g. the environment variables, entrypoint flags, and mounted volumes it expects) communicate to those deploying your application what things you expect for them to provide during deployment.
Without all this, whoever or whatever is deploying your application has to know every little detail and you're going to spend a lot of time writing custom workflows to hook into every different kind of infrastructure you want to deploy to.
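Concretely, the split looks something like this (image name and registry are hypothetical):
```
# build and publish the image once
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# whoever deploys it only needs the image's "API": env vars, ports, volumes
docker run -d -p 8080:8080 -e APP_ENV=prod registry.example.com/myapp:1.0
```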
If the software that you're running inside the container supports it, you can use socket activation [0] to get native performance.
[0]: https://github.com/containers/podman/blob/main/docs/tutorial...
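Roughly, the idea (as I understand the tutorial) is the sketch below; unit names are made up, and the app inside the container has to understand systemd-style socket activation (LISTEN_FDS):
```
# ~/.config/systemd/user/web.socket
[Socket]
ListenStream=127.0.0.1:8080

[Install]
WantedBy=sockets.target

# ~/.config/systemd/user/web.service
[Unit]
Requires=web.socket
After=web.socket

[Service]
ExecStart=/usr/bin/podman run --rm --name web --network none ghcr.io/example/socket-activated-app
```
systemd hands the listening socket to podman, which passes it into the container, so traffic skips the usual user-mode proxy.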
You have a valid point in that many HN commentators seem to live in a bubble where spending thousands of dollars on a developer for "convenience" is seen as a no-brainer. They often work in companies that don't make a profit, but are funded by huge VC investments. I don't blame them, as it is a valid choice given the circumstances. If you have the money, why not? But they may start thinking differently if the flow of VC money slows down.
It's similar to how some wealthy people buy a private jet. Their time is valuable, and the cost seems justified (at least if you don’t care about the environmental impact).
I believe that frugality is actually the default mode of business, but many companies in SV are protected from the consequences by the VCs.
[0]: https://github.com/containers/podman-compose
[1]: https://docs.podman.io/en/latest/markdown/podman-image-scp.1...
Sounds great if you're only running a single web server or whatever. My team builds a fairly complex system that's comprised of ~45 unique services. Those services are managed by different teams with slightly different language/library/etc needs and preferences. Before we containerized everything it was a nightmare keeping everything in sync and making sure different teams didn't step on each other's dependencies. Some languages have good tooling to help here (e.g. Python virtual environments) but it's not so great if two services require a different version of Boost.
With Docker, each team is just responsible for making sure their own containers build and run. Use whatever you need to get your job done. Our containers get built in CI, so there is basically a zero percent chance I'll come in in the morning and not be able to run the latest head of develop because someone else's dev machine is slightly different from mine. And if it runs on my machine, I have very good confidence it will run on production.
I find it easier to have the same interface for everything, where I can easily swap around ports.
The UX with orb is just much easier and the small gotchas between docker/podman started to add up. Especially with buildkit issues we had run into and things like using a remote buildkit instance (which we now use), was not supported well enough.
Running podman with SELinux enforcing (the default) and with "--security-opt=no-new-privileges" combined with running applications as non-root inside their containers should further reduce the security risk. You could also disable unprivileged user namespaces inside the containers if you want, which would mean that exploiting unprivileged user namespaces would first require arbitrary code execution on the host.
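As a sketch, those flags combine like this (image and UID are arbitrary):
```
podman run --rm \
  --security-opt=no-new-privileges \
  --user 1000:1000 \
  --cap-drop=ALL \
  docker.io/library/alpine:3.20 id
```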
Obviously, having a daemon running as root is a larger attack surface than a program running as the user.
Going to github.com/containers/podman/releases, the latest release actually addresses a security risk that involves overwriting files on the host.
# v5.6.1 (Latest)
## Security
- This release addresses CVE-2025-9566, where Kubernetes YAML run by podman play kube containing ConfigMap and Secret volumes can use crafted symlinks to overwrite content on the host.
As always, the most secure computer is the one that is unplugged and turned off.

Meanwhile, kompose.io exists, which does exactly that (but with Go templates as far as I can tell).
kompose: https://kubernetes.io/docs/tasks/configure-pod-container/tra...
Also, technically docker-compose was the first orchestration tool, compared to Kubernetes. Expecting the former to provide a translation layer for the latter is rather unorthodox. It is usually the latter tool that provides certain compatibility features for the former...
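For anyone who hasn't used it, kompose is basically a one-shot converter; a sketch:
```
# generate Kubernetes manifests from an existing compose file and apply them
kompose convert -f docker-compose.yml -o k8s/
kubectl apply -f k8s/
```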
On the contrary, docker documentation *is* stable. I had bookmarks from 10 years ago on the *latest* editions that still work today. The final link may have changed, but at least there is a redirect (or a note showing where it has moved) instead of a plain 404/not-found.
This is a crucial part of the quality that applications offer. There have probably been hundreds of podmans since Docker launched more than 10 years ago, but none came close to maintaining that quality of documentation and user interface (i.e. CLI commands and switches), especially in a backward-compatible way.
I'm sure I've spent more time writing and troubleshooting YAML files than I ever did just installing stuff on vm's.
[0]: https://github.com/containers/podman/blob/c8183c50/Makefile#...
> I genuinely don't understand what docker brings to the table.
I think you invalidated your own opinion here
1. You want to control spend - there are budgets.
2. You want to control accounting - minimize the number of vendors you work with. Each billing needs to come with an invoice, these need to be managed, when a developer leaves you need to cancel their seat, etc. It's a pain.
3. You want to control compliance - are these tools safe? Are they accessing sensitive data? Are they audited?
4. You want to control interoperability between teams. Can't have it become a zoo of bring-your-own stuff.
So free tools get around all of these, you can just wing it under the radar and if the tool becomes prominent enough then you go fight the war to have it adopted. Once there's spend, you need to get into line. And that line makes a lot of sense when you're into 30 developers, let alone hundreds.
Thanks, I hate it. It might look fine at first but once you need anything more advanced than running a hello world container, it falls apart. Fun fact, there is no set of flags that would correctly build a multi-platform image on both Docker and Podman -- I found out the hard way.
I guess Podman may work if you go all in on it, but pretending that it's a drop-in replacement for Docker will bring you only pain.
For some products that might be worth it. For others, not.
But whatever the outcome: you still got to track license compliance afterwards and renew licenses. (Which also works better when tracking internal usage as you know your need)
Also, at the latest with 20 employees or computers, someone in charge of IT (a sysadmin or IT department) would decide to use a software asset management tool (aka software inventory system) to automatically track, roll out, uninstall, and monitor vetted software. Anything else is just unprofessional.
Because it's not very useful by itself for running production infra, but it's great for helping to develop it.
Otherwise you're going to see more and more move to podman (and podman desktop) / OCI containers over time, as corps won't have to pay the docker tax and will get better integration with their existing k8s platform.
https://tangentsoft.com/podman/wiki?name=Not%20a%20Drop-In%2...
exactly. I've built podman for debian. It's not an esoteric target. It gets a little hairy with all of the capabilities stuff and selinux, but it's feasible. Give me, I don't know, $10k a quarter and I'd probably do it.
- Podman is usually used "rootless", but it doesn't have to be. It can also be used with rootful containers. It's still daemonless, though it can shim the Docker socket (very useful for e.g. Docker Compose).
- Docker can be used in a rootless fashion too. It will still run a daemon, but it can be a user service using user namespaces. Personally I think Podman does much better here.
Podman also has some other interesting advantages, like better systemd integration. Sometimes Kubernetes just isn't necessary; Podman + systemd works well in a lot of those cases. (Note though that I have yet to try Quadlets.) Though unfortunately I don't think even the newer Quadlets integration has some basic niceties that Kubernetes has (like simple ways to do zero-downtime deployments).
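For what it's worth, the Docker-socket shim mentioned above is usually just a couple of commands (paths vary by distro; this is the common rootless setup):
```
# expose Podman's Docker-compatible API socket for tools that expect Docker
systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker compose up -d   # compose now talks to Podman through the shim
```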
It's a different style of documentation organization: if you want to link to a specific version you should link to the specific version not latest. I won't argue it's necessarily a better way of doing things than Docker, but knowing it's the same thing as what's with the package is nice.
That said, I'm not a nix guy, but to me, intuitively NixOS wins for this use case. It seems like you could either
A. Use declarative OS installs across deployments.
B. Put your app into a container which sometimes deploys its own kernel and sometimes doesn't, push it to a third-party cloud registry (or set up your own registry), and then this container runs on a random Ubuntu box or cloud hosting site where you basically don't administer or do any ops; you just kind of use it as an empty vessel which exists to run your Docker container.
I get that in practice, these are basically the same, and I think that's a testament to the massive infrastructure work Docker, Inc has done. But it just doesn't make any sense to me
But this puts you in a league with some pretty advanced deployment tools, like high-level K8s, Ansible, and cloud orchestration work, and nobody thinks those tools are really that appropriate for the majority of dev teams.
People are out here using docker for like... make install.
I can't see how any kind of sensible security evaluation process would reach that conclusion. If you trust your users you don't need rootless, if you don't trust your users rootless containers aren't good enough. I suspect people do rootless because it seems easy and catches a few accidental mistakes rather than it being a legitimate security measure.
Also docker has the network effect. If there was a good light weight tool that was better enough people would absolutely use it.
But it doesn’t exist.
In an ideal world it wouldn’t exist, but we don’t live there.
Sadly "docker" is just a synonym for "container" for most people, so the main issue is that most projects only ship a compose file. Hopefully they'll ship quadlet files too, some day.
Alternatively, a public repository for sharing quadlets for popular open source software would be great.
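For reference, a quadlet for a typical web service is short; a minimal sketch (file path and image are examples):
```
# ~/.config/containers/systemd/web.container
[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Install]
WantedBy=default.target
```
After `systemctl --user daemon-reload`, this shows up as a generated `web.service` you can start and enable like any other unit.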
Docker and other containerization solved the “it works on my machine” issue
These are almost always multi-tennant with differing levels of trust and experience between users. The data processed here can often have data access agreements or laws that limit who can see what data. You can’t have a poorly configured container exposing data, for example. So, the number of people who have root access is very limited. Normal users running workflows would all be required to run code rootless.
This can be a good or a bad thing—good because it's better, but bad because the popularity of Docker sometimes means things aren't compatible and require some tweaking to get running.
Kustomize eliminates the vast majority of the duplication (i.e. a unique fact about the cluster being expressed in more than one place), it's just the boilerplate that's annoying.
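To illustrate, an overlay can be as small as this sketch (names are hypothetical); the unique facts live in the patch, the rest stays in the base:
```
# overlays/prod/kustomization.yaml
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: web
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
```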
or you can use the `build(Layered)Image` to declaratively build an oci image with whatever inside it. I think you can mix and match the approaches.
but yes I'm personally a big fan of Nix's solution to the "works on my machine" problem. all the reproducibility without the clunkiness of having to shell into a special dev container, particularly great for packaging custom tools or weird compilers or other finnicky things that you want to use, not serve.
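A minimal sketch of that declarative image build (using nixpkgs' dockerTools; names are examples):
```
{ pkgs ? import <nixpkgs> { } }:
pkgs.dockerTools.buildLayeredImage {
  name = "hello-oci";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```
`nix-build` produces a tarball you can feed to `podman load` or `docker load`.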
https://gist.github.com/bonzini/1abbbdec739e77503945a3605e0e...
Is this also applicable for single-host services? I have a lot of my toy projects packaged as a Docker Compose, and I just `docker compose up -d` in my EC2 host and it's ready to go. Last time I dabbled with K8s I remember it requiring separate etcd cluster, and a lot of configurations. I wonder if my existing projects could be converted to K8s manifest and it would be just as convenient as the `docker compose up -d`.
I even wrote an article about that: https://joshkaramuth.com/blog/docker-vs-podman/
At the moment it seems docker compose misbehaves with Podman when WSL2 gets involved.
I look forward to when I can replace Docker entirely.
If you're running rootless Podman containers then the Podman API is only running with user privileges. And, because Podman uses socket activation, it only runs when something is actively talking to it.
I’ve dealt with a fair bit of Swarm internals for https://lunni.dev/, and I’m ready to switch to k8s any moment. Don’t wanna lose Compose Spec support though, so I’ve started an IaC thingy that can map it to both k8s and Swarm resources. (Now I need to find some time for it!)
Suddenly you're in a team with 2-3 people and one of them likes to git push broken code and walk-off.
Okay, let's make this less about working with a jackass: same setup, but each 5 minutes of downtime costs you millions of dollars. One of your pushes works locally but doesn't work on the server.
The point of a more structured / complex CI/CD process is to eliminate failures. As the stakes become higher, and the stack becomes more complex, the need for the automation grows.
Docker is just a single part of that automation that makes other things possible / lowers a specific class of failures.
If you want proper security, go to Firecracker [^1]. Podman is the "RedHat/IBM docker-way" and I see very little benefit overall; nevertheless, if it works for you, great, go with it!
Almost, because most common commands work, but I have not checked all of them.
And almost, because for some docker-compose.yaml which you downloaded or had an LLM generate, you may need to prepend `docker.io/` to the image name.
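The other option is to tell Podman how to resolve short names, roughly like this (sketch of /etc/containers/registries.conf):
```
# make unqualified names like "nginx:alpine" resolve against Docker Hub,
# the same way Docker does by default
unqualified-search-registries = ["docker.io"]
```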
Having used Docker Desktop on a Mac myself, it seems... fine? It does the job well enough, and it’s part of the development rather than production flow so it doesn’t need to be perfect, just unobtrusive.
It's pretty stupid, because the same curl | bash that could have done that could just as well have posted the same contents directly to the internet without the container. The best chance you actually have is to do as much development as possible inside a sealed environment like ... a container, where at least you have some way to limit partially trusted code's visibility of your file system.
I still do that for all my personal projects! One of the advantages of docker is that you don't have to rebuild the thing on each deployment target.
Yes. Everything on my box is ephemeral and can be deleted and recreated or put on another box with little-to-no thought. Infrastructure-as-code means my setup is immutable and self-documented.
It's a little more time to set up initially, but now I know exactly what is running.
I don't really understand the 24/7 comment, now that it is set up there's very very little maintenance. Sometimes an upgrade might go askew but that is rare.
Any change to it is recorded as a git commit, I don't have to worry about logging what I've done ever because it's done for me.
Changes are handled by a GitHub action, all I have to do to change what is running is commit a file, and the infra will update itself.
I don't use docker-compose, I use a low-overhead microk8s single-node cluster that I don't think about at all really, I just have changes pushed to it directly with Pulumi (in a real environment I'd use something like ArgoCD) and everything just works nicely. Ingress to services is done through Cloudflare tunnels so I don't even have to port-forward or think about NAT or anything like this.
To update my personal site, I just do a git commit/push, its CI/CD builds a container and then updates the Pulumi config in the other repo to point to the latest hash, which then kicks off an action in my infra repo to do a Pulumi apply.
Currently it runs on Ubuntu but I'm thinking of using Talos (though it's still nice to be able to just SSH to the box and mess around with files).
I'm not sure why people struggle with this, or the benefits of this approach, so much? It seems like a lot of complexity if you're inexperienced, but if you've been working with computers for a long time, it isn't particularly difficult—there are far more complicated things that computers do.
I could throw the box (old macbook) in a lake and be up and running with every service on a new box in an hour or so. Or I could run it on the cloud. Or a VPS, or metal, or whatever really, it's a completely portable setup.
My point is: if figuring things out with podman is similar to my experience, I understand why people don't want to do that. Do they have a definitive page dedicated to setting up SELinux for podman, that is well maintained and guaranteed to solve all SELinux issues, and allows me to use bind mounts with read-only permission?
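In case it helps anyone, the bit that usually trips people up is the volume label flags; a sketch:
```
# :ro = read-only, :Z = relabel for this container only (:z = shared label)
podman run --rm -v "$PWD/config":/etc/app:ro,Z docker.io/library/alpine:3.20 ls /etc/app
```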
I basically use (OrbStack) docker containers as lightweight VMs, easily accessible through multiple shells, and they shut down when nothing is running anymore.
I use them for development isolation? Or when I need to run some tool. It mounts the current directory, so your container is chrooted to that project.
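The pattern is basically a one-liner (image is just an example):
```
# disposable "dev box" limited to the current project directory
docker run -it --rm -v "$PWD":/work -w /work docker.io/library/ubuntu:24.04 bash
```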
I've worked at companies that size and the "war" involved putting time in the calendar of the head of engineering, asking how his son was, demoing the product we wanted for about two minutes and explaining the pain point it solved, then promising to get our legal team and the one security person to review it after he put the credit card in and before we used it in prod. When I worked somewhere larger it was much more difficult.
Never really had any major problems with Docker Desktop on Windows. I run it and it allows me to run containers through WSL 2. Volume performance is near native Linux speeds and the software itself doesn't crash, even on my 10 year old machine.
I also use it on macOS on a work laptop for a lot of different projects and it works. There's more issues around volume mount performance here but it's not something that's unusably slow. Also given the volume performance is mostly due to OS level file system things I'm skeptical Podman would resolve that. I remember trying Colima for something and it made no difference there.
Nix is, as far as I know, not there and we would probably need weeks of training to get the same result.
Most of the time the value of a solution is not in its technical perfection but in how many people already know it, the documentation, and more importantly all the dumb tooling that's around it!
Rootless mode seems to support all the same features, but is obviously more secure than the "run everything as root" mode. In fact, most of the CVEs mentioned would allow an attacker to escalate to the privileges of the user running docker, instead of escalating to the root user.
Comparing the security of rootless-podman to rootful-docker is an absurd (and obviously unfair) comparison.
It seems you never had to deal with timezone-dependent tests.
Note that the docker daemon does not have to be running with root privileges. You can use this script to start docker rootless: https://github.com/docker-archive/engine/blob/master/contrib...
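The more current equivalent, per Docker's rootless docs, is roughly:
```
# set up a per-user daemon and point the CLI at its socket
dockerd-rootless-setuptool.sh install
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
docker info
```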
(I ended up taking another offer but I still think they're onto something.)
The base image building can be pretty easily automated, then individual projects using those base images can expect new base images on a regular basis, and test updating to the latest at their leisure without getting any surprise changes.
If you want to handle all your deployments the same way, you can basically only choose between Nix and containers. Unfortunately, containers are far more popular and have more tooling.
Why wouldn't it be, containers are super easy to manage, dockerd uses bugger all resources in dev (on Linux anyway) and docker compose files are the simplest setup scripts I've ever used
I like docker because it's easy and I'm lazy
In the past I think I wound up using https://github.com/mgoltzsche/podman-static because I could not get those podman static binaries to work
There is some dissonance in presenting Podman as a plug-in replacement for Docker, and making it so damn hard to install on (some category's) most popular contemporary LTS Linux distro.
What you say is absolutely correct. If Docker keeps creating compatibility layers for its competitors, it makes it easier for everyone to switch to a competitor. In this case, the competitor is Kubernetes, as it's running in production at much larger scale (enterprise workloads) compared to Podman et al.
Hence, it's the job of Podman, Kubernetes, et al. to write their own compatibility layer to provide a value-add for their customers.
The podman CLI is nearly a drop-in replacement for docker such that `alias docker=podman` works for many of the most common use cases.
If you don't care about the security implications of running containers as root via a client/server protocol, then by all means keep using Docker. I've switched to podman and I'm happy with my decision, but to each their own.
Just put this thread into whatever LLM. Overall I see two major themes here: compatibility and stability issues, all over the place. Not just with documentation, but with other tools. The Compose schema v2 does not match the current/latest one, there is missing functionality (although that one is acceptable at a certain level), etc.
Also, as soon as the docs were "posted", it became obsolete/useless/deprecated. I mean, what sort of quality are we talking about here?
The need to spin up its own WSL instance (which takes a lot of disk space) and the GPU workarounds are just not there yet.
Others have mentioned podman compose, but the old docker-compose does work, to be fair.
But second -- I use colima lots, on my home macs and my work macs, and it mostly just works. The profiles stuff is kinda annoying and I find myself accidentally running arm when I want x86, or other tedious config issues crop up. But it actually has been easier to live with than docker desktop where I'd run out of space and things would fall apart.
Docker on macOS is broadly going to work poorly relative to Docker on Linux, just from having to run the docker stuff in a Linux VM that's hiding somewhere behind the scenes.
If you find too much friction with any of these, probably it's easier to just run a linux vm on the mac and interact with docker in the 'native' environment. I've found UTM to be quite a bit easier to live with than virtualbox.
I'm aware that I, too, could be the someone but like I said it's hard to dedicate all the time and energy when the last time I used vagrant was years ago
I also just remembered that I haven't revisited the forks list to see if there's some meaningful activity https://github.com/hashicorp/vagrant/forks?include=active&pa...
But if you actually add up the time we spend using docker, I'm really not sure it saves that many cycles
Correction: Docker Desktop is $9/month (not $9/year).
With podman, RedHat made an effort to make SELinux work. With Docker, as third-party software, no proper SELinux config was ever written. With Docker, there is no hope at all that you'd get SELinux to work.
With podman, there is hope, as long as all your containers and use cases are simple, "well-behaved", and preferably also RedHat-based and SELinux-aware. In the easy cases, podman + SELinux will just work. But unfortunately, containers are the means to get crappy software running, where the developers were too lazy to do proper packaging/installation/configuration/integration. So most cases are not easy and will not work with SELinux, if you don't have infinite time to write your own config...
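When you do have the time, the usual (if blunt) workflow for a denial looks something like this sketch (the module name is arbitrary):
```
# find recent denials, draft a local policy module from them, review, install
sudo ausearch -m AVC -ts recent
sudo ausearch -m AVC -ts recent | audit2allow -M my-container
sudo semodule -i my-container.pp
```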
https://tangentsoft.com/podman/wiki?name=Not%20a%20Drop-In%2...
Thanks!
I would use containers too, in such cases.
Most software has issues, but Colima is noticeably worse than most software I've used. And the complete lack of documentation is definitely not normal.
As far as IT operations goes, it's usually easier to get approval for paid products since they come with support and are viewed as more "trustworthy". At least in my experience.
I've never worked in a 300+ organisation where you could "just use" things. I have worked in places where they gave some of us local admins (I've been a domainadmin in a few places too), but there is usually a large bureaucracy around software regardless of licenses. Where I work right now, licensing is a minor part of it for companies with good payment systems (like Docker) where it'll automatically go on the books and be EU tax deducted. Compare that to GitKraken where you need to create an IT owner account inside their system, and then distribute the annual licenses manually after you pay for them with a credit card that you will then need to manually submit for tax deduction.
Not that this should be an argument for docker. The idea that having someone to call makes a piece of software "safer" is as ridiculous at it sounds. Especially if you've ever tried "calling" a company you buy 20 licenses from, and when I say call what I really mean is talking with a chatbot and then waiting a month for them to get back to you via email. But IT's gonna IT.
You can also use this to create a VM for Podman that runs on Fedora, rootful by default: https://github.com/carlosonunez/bash-dotfiles/blob/main/lima...
If you go the Lima approach, use `podman system connection add` to add rootful and rootless VMs, then use the `--connection` flag to specify which you want to use. You can alias them to make that easier; for instance, use `alias podman=podman` for rootless stuff (assuming the rootless VM is your default) and `alias rpodman='podman --connection rootful'` for rootful stuff. I'll write a post describing how to set all of that up soon!
Ask yourself: how does an enterprise with 1000 engineers manage to push a feature out 1000x slower than two dudes in a garage? Well, processes, but also architecture.
Distributed systems slow down your development velocity by many orders of magnitude, because they create extremely fragile systems and maintenance becomes extremely high risk.
We're all just so used to the fragility and risk we might think it's normal. But no, it's really not, it's just bad. Don't do that.
Really less than 1% of systems need to be distributed. Are you Google? No? Then you probably don't need it.
The rest is just for fun. Or, well, pain. Usually pain.