I don’t doubt that people on all sides have made mis-steps, but from the outside it mostly just seems like Kent doesn’t want to play by the rules (despite having been given years of patience).
Kent seems very patient in explaining his position (and frustrations arising from other people introducing bugs into his code), and the kernel & Debian folks are running a smear campaign instead of replying to what I see as genuine problems in the process. As an example, the quotes referenced by user paravoid are, imho, taken out of context (judging by reading the provided links).
There probably is a lot more history to it, but judging from that thread it's not Kent who looks like the bad guy.
This is one of the problems: Kent is frequently unable to accept that things don't go his way. He will keep bringing it up again and again and he just grinds people down with it. If you see just one bit of it then it may seem somewhat reasonable, but it's really not, because this is the umpteenth time this exact discussion is happening and it's Groundhog Day once again.
This is a major reason why people burn out on Kent. You can't just have a disagreement/conflict and resolve it. Everything is a discussion with Kent. He can't just shrug and say "well, I think that's a bit silly, but okay, I can work with it, I guess". The options are 1) Kent gets his way, or 2) he will keep pushing it (not infrequently ignoring previous compromises, restarting the discussion from square one). Here too, the Debian people have this entire discussion (again) forced upon them by Kent's comments in a way that's just completely unnecessary and does nothing to resolve anything.
Even as an interested onlooker who is otherwise uninvolved and generally more willing to accept difficult behaviour than most people, I've rather soured on Kent over time.
And there's a real connection to the issue that sparked all this drama in the kernel and the Debian drama: critical system components (the kernel, the filesystem, and others) absolutely need to be able to get bugfixes in a timely manner. That's not optional.
With Debian, we had a package maintainer who decided that unbundling Rust dependencies was more important than getting out updates, and then we couldn't get a bugfix out for mount option handling. This was a non-issue for every other distro with working processes because the bug was fixed in a few days, but a lot of Debian users weren't able to mount in degraded mode and lost access to their filesystems.
In the kernel drama, Linus threw a fit over repair code meant to recover from a serious bug and make sure users didn't lose data, and he's repeatedly picked fights over bugfixes (and has even called pushing to get bugfixes out "whining" in the past).
There are a lot of issues that there can be give and take on, but getting fixes out in a timely manner is just part of the baseline set of expectations for any serious project.
But there are also reasons why things are the way they are, and that is also not unreasonable. And at the end of the day: Linus is the boss. It really does come down to that. He has dozens of other subsystem maintainers to deal with and this is the process that works for him.
Similar stuff applies to Debian. Personally, I deeply dislike Debian's inflexible and outmoded policy and lack of pragmatism. But you know, the policy is the policy, and at some point you just need to accept that and work with it the best you can.
It's okay to make all the arguments you've made. It's okay to make them forcefully (within some limits of reason). It's not okay to keep repeating them again and again until everyone gets tired of it, while seemingly failing to listen to what people are saying. This is where you are being unreasonable.
I mean, you *can* do that, I guess, but look at where things are now. No one is happy with this – certainly not you. And it's really not a surprise, I already said this in November last year: "I wouldn't be surprised to see bcachefs removed from the kernel at some point".[1] To be clear: I didn't want that to happen – I think you've done great work with bcachefs and I really want it to succeed every which way. But everyone could see this coming from miles away.
XFS has burned through multiple maintainers who have cited "upstream burnout" on their way out. It's not just bcachefs that things are broken for.
And it was burning me out, too. We need a functioning release process, and we haven't had that; instead I've been getting a ton of drama that's boiled over into the bcachefs community, oftentimes completely drowning out all the calmer, more technical conversations that we want.
It's not great. It would have been much better if this could have been worked out. But at this point, cutting ties with the kernel community and shipping as a DKMS module is really the only path forwards.
It's not the end of the world. Same with Debian; we haven't had those issues in any other distros, so eventually we'll get a better package maintainer who can work the process or they'll figure out that their Rust policy actually isn't as smart as they think it is as Rust adoption goes up.
I'm just going to push for doing things right, and if one route or option fails there's always others.
Yeah, that's in place. If nothing else, the decades of successful releases indicate that the process, at worst, functions. Whether that process fits your preferred way of working is irrelevant.
> You have to consider the bigger picture.
Right back at you. Buddy, you need to learn how to lose.
It is unreasonable if it leads to users losing data. At this point, the only reasonable options are to either completely remove support for bcachefs or to ship timely fixes for critical bugs; there's no middle position that won't willfully lead to users losing their data.
This used to be the default for distributions like Debian some time ago. You only supported foundational software if you were willing to also distribute critical fixes in a timely manner. If not, why bother?
For all other issues, I guess we can accept that things are the way they are.
Changing the kernel development process to allow adding new features willy-nilly late in the RC cycle will lead to much worse things than a few people using an experimental file system losing their data in the long term.
The process exists for a reason, and the kernel is a massive project that includes more than just one file system, no matter how special its developers and users believe it is.
And this is happening even though it's common for Debian to package the same C library multiple times, like libfuse2 and libfuse3. This could be done for Rust libraries if they wanted to.
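For what it's worth, Cargo itself already copes with this: two semver-incompatible versions of the same crate can coexist in one dependency graph, so the question is really just whether the distro is willing to package them under distinct names the way it does for libfuse2/libfuse3. A rough sketch of what that looks like on the Cargo side – the crate name "foo" and the version numbers are purely illustrative, not a real Debian layout:

    # Cargo.toml (sketch): pulling in two major versions of the same crate.
    # Cargo treats semver-incompatible majors as distinct packages, so both
    # can be built into one graph; "foo" and the versions here are made up.
    [dependencies]
    foo = "2"                                  # current major version
    foo1 = { package = "foo", version = "1" }  # old major, renamed locally

A distro could mirror that by shipping something like librust-foo-1-dev alongside librust-foo-2-dev; whether that's worth the maintenance effort is a separate question, but it isn't technically blocked.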
Anyway see the discussion and the relevant article here https://news.ycombinator.com/item?id=41407768 and https://jonathancarter.org/2024/08/29/orphaning-bcachefs-too...
And carrying multiple versions is problematic too as it causes increased burdens for the downstream maintainers.
I'd argue that libfuse is a bit of a special case, since the API changed substantially between 2 and 3 and not everything that uses it has moved to version 3 (or can move, since moving to v3 breaks on other platforms like BSD and macOS that still only support the v2 API).
Rust and especially Golang are both massive piles of instability because their developers don't seem to understand that long-term stable APIs are a benefit. You have to put in a bit of care and attention rather than always chasing the new thing and bundling everything.
This blowup was entirely unnecessary.