214 points by ksec | 10 comments

sevg No.45076556
Is it just me or does Kent seem self-destructively glued to his own idea of how kernel development should work?

I don’t doubt that people on all sides have made missteps, but from the outside it mostly just seems like Kent doesn’t want to play by the rules (despite having been given years of patience).

replies(5): >>45077241 #>>45077371 #>>45077492 #>>45077724 #>>45080172 #
bornfreddy No.45077492
Being an outsider to this whole scene, the whole thread reads very differently to me.

Kent seems very patient in explaining his position (and the frustrations arising from other people introducing bugs into his code), while the kernel & Debian folks are running a smear campaign instead of replying to what look to me like genuine problems in the process. As an example, the quotes referenced by user paravoid are, imho, taken out of context (judging by the provided links).

There probably is a lot more history to it, but judging from that thread it's not Kent who looks like a bad guy.

replies(4): >>45077710 #>>45077865 #>>45078165 #>>45086496 #
arp242 No.45077865
Kent brings up Debian himself, unprompted.

This is one of the problems: Kent is frequently unable to accept that things don't go his way. He will keep bringing it up again and again and he just grinds people down with it. If you see just one bit of it then it may seem somewhat reasonable, but it's really not because this is the umpteenth time this exact discussion is happening and it's groundhog day once again.

This is a major reason why people burn out on Kent. You can't just have a disagreement/conflict and resolve it. Everything is a discussion with Kent. He can't just shrug and say "well, I think that's a bit silly, but okay, I can work with it, I guess". The options are 1) Kent gets his way, or 2) he will keep pushing it (not infrequently ignoring previous compromises, restarting the discussion from square one). Here too, the Debian people have this entire discussion (again) forced upon them by Kent's comments in a way that's just completely unnecessary and does nothing to resolve anything.

Even as an interested onlooker who is otherwise uninvolved and generally more willing to accept difficult behaviour than most people, I've rather soured on Kent over time.

replies(1): >>45079169 #
koverstreet No.45079169
You do realize that data integrity issues are not "live and let live" type things, right?

And there's a real connection between the issue that sparked all this drama in the kernel and the Debian drama: critical system components (the kernel, the filesystem, and others) absolutely need to be able to get bugfixes in a timely manner. That's not optional.

With Debian, we had a package maintainer who decided that unbundling Rust dependencies was more important than getting out updates, and then we couldn't get a bugfix out for mount option handling. This was a non-issue for every other distro with working processes because the bug was fixed in a few days, but a lot of Debian users weren't able to mount in degraded mode and lost access to their filesystems.
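
To make that concrete: on a multi-device bcachefs filesystem with a dead or missing member, mounting requires explicitly passing the "degraded" option. Roughly, with hypothetical device paths:

    # mount a two-device bcachefs array that is missing a member;
    # without -o degraded the mount is refused
    mount -t bcachefs -o degraded /dev/sdb:/dev/sdc /mnt

That's the operation Debian users were locked out of while the fixed userspace tools weren't available.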

In the kernel drama, Linus threw a fit over repair code meant to recover from a serious bug and make sure users didn't lose data, and he's repeatedly picked fights over bugfixes (he's even called pushing to get bugfixes out "whining" in the past).

There are a lot of issues that there can be give and take on, but getting fixes out in a timely manner is just part of the baseline set of expectations for any serious project.

replies(1): >>45079488 #
1. arp242 No.45079488
Look, I get where you're coming from. It's not unreasonable. I've said this before.

But there are also reasons why things are the way they are, and that is also not unreasonable. And at the end of the day: Linus is the boss. It really does come down to that. He has dozens of other subsystem maintainers to deal with and this is the process that works for him.

Similar stuff applies to Debian. Personally, I deeply dislike Debian's inflexible and outmoded policy and lack of pragmatism. But you know, the policy is the policy, and at some point you just need to accept that and work with it the best you can.

It's okay to make all the arguments you've made. It's okay to make them forcefully (within some limits of reason). It's not okay to keep repeating them again and again until everyone gets tired of it, while seemingly failing to listen to what people are saying. This is where you are being unreasonable.

I mean, you *can* do that, I guess, but look at where things are now. No one is happy with this – certainly not you. And it's really not a surprise, I already said this in November last year: "I wouldn't be surprised to see bcachefs removed from the kernel at some point".[1] To be clear: I didn't want that to happen – I think you've done great work with bcachefs and I really want it to succeed every which way. But everyone could see this coming from miles.

[1]: https://news.ycombinator.com/item?id=42225345

replies(2): >>45079527 #>>45081126 #
2. koverstreet No.45079527
You have to consider the bigger picture.

XFS has burned through maintainers, who cited "upstream burnout" on the way out. It's not just bcachefs that things are broken for.

And it was burning me out, too. We need a functioning release process, and we haven't had that; instead I've been getting a ton of drama that's boiled over into the bcachefs community, oftentimes completely drowning out all the calmer, more technical conversations that we want.

It's not great. It would have been much better if this could have been worked out. But at this point, cutting ties with the kernel community and shipping as a DKMS module is really the only path forwards.
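
For reference, shipping via DKMS just means carrying a small config so the module gets rebuilt against each kernel the user installs. A minimal sketch (version and install path illustrative, not our actual packaging):

    # dkms.conf
    PACKAGE_NAME="bcachefs"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="bcachefs"
    DEST_MODULE_LOCATION[0]="/kernel/fs/bcachefs"
    AUTOINSTALL="yes"

With AUTOINSTALL set, dkms rebuilds the module automatically whenever a new kernel is installed; "dkms install bcachefs/1.0" does the initial build against the running kernel's headers.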

It's not the end of the world. Same with Debian; we haven't had those issues in any other distro, so eventually we'll either get a better package maintainer who can work the process, or, as Rust adoption goes up, they'll figure out that their Rust policy isn't as smart as they think it is.

I'm just going to push for doing things right, and if one route or option fails there's always others.

replies(1): >>45080437 #
3. simoncion No.45080437
> We need a functioning release process...

Yeah, that's in place. If nothing else, the decades of successful releases indicate that the process -at worst- functions. Whether that process fits your process is irrelevant.

> You have to consider the bigger picture.

Right back at you. Buddy, you need to learn how to lose.

4. nextaccountic No.45081126
> But there are also reasons why things are the way they are, and that is also not unreasonable.

It is unreasonable if it leads to users losing data. At this point, the only reasonable thing is to either completely remove support for bcachefs or ship timely fixes for critical bugs; there's no middle position that won't willfully lead to users losing their data.

This used to be the default for distributions like Debian some time ago. You only supported foundational software if you were willing to also distribute critical fixes in a timely manner. If not, why bother?

For all other issues, I guess we can accept that things are the way they are.

replies(2): >>45081839 #>>45081934 #
5. abenga No.45081839
> It is unreasonable if it leads to users losing data.

Changing the kernel development process to allow adding new features willy-nilly late in the RC cycle would, in the long term, lead to much worse things than a few users of an experimental file system losing their data.

The process exists for a reason, and the kernel is a massive project that includes more than just one file system, no matter how special its developers and users believe it is.

replies(1): >>45083357 #
6. rwmj No.45081934
Not too familiar with the kernel process for this, but for Linux distros there are ways to respond to critical issues, including data corruption and data loss. It's just that you have to follow their processes, such as producing a minimal patch that fixes the problem, which is then backported into the older code base (and there's a reason for that too: end users don't want churn on their installed systems, they want an install to be stable and predictable). Since distros are how you ultimately get your code into users' hands, it's really their way or the highway. Telling the distros they are wrong isn't going to go well.
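
The mechanics of that are mundane, something like (branch and commit names hypothetical):

    git checkout -b bugfix-backport distro/stable
    git cherry-pick <upstream-fix-sha>   # the minimal fix only, no refactoring
    git format-patch -1                  # goes through the distro's stable update process

i.e. take the smallest change that fixes the bug, not a rebase onto the latest upstream.
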
replies(1): >>45082218 #
7. nextaccountic No.45082218{3}
For the Debian thing, I'm not sure of the specifics for bcachefs-progs (I'm going by what the author is reporting and some blog posts), but I think the problem with Debian is that they willfully ignore it when upstream says "this is only compatible with this library version 2.1.x", and will downgrade or upgrade the library to unsupported versions to match the versions used by other programs already packaged. This kind of thing can introduce subtle, hard-to-debug bugs. It's a mess, and the resulting problems usually get reported to upstream; it's a recurrent issue for Rust programs packaged in Debian. Rust absolutely isn't this language where if it compiles, it works, no matter how much people think otherwise.
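
Concretely, the mismatch is of this flavor (crate name hypothetical):

    # upstream's Cargo.toml
    [dependencies]
    somecrate = "2.1"   # semver range >=2.1.0, <3.0.0: the versions upstream tested

    # after a distro patch to match whatever is already packaged, e.g.
    somecrate = "1.9"   # outside the range upstream supports

Cargo itself would refuse to resolve the original requirement against 1.9; the distro makes it "build" by patching the requirement, and whatever breaks then lands on upstream's bug tracker.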

And this is happening even though it's common for Debian to package multiple versions of the same C library, like libfuse2 and libfuse3. This could be done for Rust libraries if they wanted to.

Anyway see the discussion and the relevant article here https://news.ycombinator.com/item?id=41407768 and https://jonathancarter.org/2024/08/29/orphaning-bcachefs-too...

replies(1): >>45082245 #
8. rwmj No.45082245{4}
But that's exactly the point here. In the context of a whole distribution, you don't want to update some package to a new version (on a stable branch), because that would affect lots of other packages that depend on it. It may even be that other packages cannot work with the newly updated dependency. Even if they can, end users don't want versions to change greatly (again, along a stable branch). Upstreams should accept this reality and ensure they support the older libraries as far as possible. Or they can deny reality, and then we get into this situation.

And carrying multiple versions is problematic too as it causes increased burdens for the downstream maintainers.

I'd argue that libfuse is a bit of a special case, since the API changed substantially between 2 & 3 and not everything depending on it has moved to version 3 (or can move, since moving to the v3 API breaks platforms like BSD and macOS that still only support v2).

Rust and especially Golang are both a massive pile of instability, because their developers don't seem to understand that long-term stable APIs are a benefit. You have to put in a bit of care and attention rather than always chasing the new thing and bundling everything.

replies(1): >>45083697 #
9. koverstreet No.45083357{3}
There's no need for the kernel development process to change. New features go in during RCs all the time; it's always just a risk vs. reward calculation, and I'm more conservative about what I send outside the merge window than a lot of subsystems.

This blowup was entirely unnecessary.

10. rwmj No.45083697{5}
BTW here's where I ported nbdfuse from v2 to v3 so you can see the kinds of changes: https://gitlab.com/nbdkit/libnbd/-/commit/c74c7d7f01975e708b...
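
The headline change is of this flavor (a minimal sketch, not the actual nbdfuse diff): you select the API generation with FUSE_USE_VERSION, and several high-level callbacks grew a struct fuse_file_info * parameter.

    /* v2 API (FUSE_USE_VERSION 26):
     *     int my_getattr(const char *path, struct stat *st);
     * v3 API: */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>

    static int my_getattr(const char *path, struct stat *st,
                          struct fuse_file_info *fi)  /* new in v3 */
    {
        (void)fi;            /* may be NULL for getattr */
        /* fill in *st here */
        return 0;
    }

Multiply that sort of mechanical change across every callback and you get a sense of the size of the port.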