The patch that kicked off the current conflict was the 'journal_rewind' patch; we recently (6.15) had the worst bug in the filesystem's entire upstream history - it was taking out entire subvolumes.
The third report got me a metadata dump with everything I needed to debug the issue, thank god, and now we have a great deal of hardening to ensure a bug like this can never happen again. Subsequently, I wrote new repair code, which fully restored the filesystem of the 3rd user hit by the bug (first two had backups).
Linus then flipped out because it was listed as a 'feature' in the pull request; it was only listed that way to make sure that users would know about it if they were affected by the original bug and needed it. Failure to maintain your data is always a bug for a filesystem, and repair code is a bugfix.
In the private maintainer thread, and even in public, things went completely off the rails: Linus and Ted basically asserted that they knew better than I do which bcachefs patches are regression risks (seriously), Linus wrote a page-and-a-half rant on how he doesn't trust my judgement, and a whole lot more.
There have been repeated arguments like this over bugfixes.
The thing is, since then I've started perusing pull requests from other subsystems, and it looks like I've actually been more conservative about what I consider a critical bugfix (and send outside the merge window) than other subsystems. The _only_ thing out of the ordinary with bcachefs has been the volume of bugfixes - but that's exactly what you'd expect from a new filesystem that's stabilizing rapidly and closing out user bug reports. A high volume of pure bugfixing is exactly what you want to see.
So given that, I don't think having a go-between would solve anything.
1. Regardless of whether he's correct or not, it's Linus who decides what's a feature and what's not in Linux, like he has for the last however many decades. Repair code is a feature if Linus says it's a feature.
2. Being correct comes second to being agreeable in human-human interactions. For example, dunking on filesystem X does not work as a defense when the person opposite you is filesystem X's maintainer.
3. Rules are rules, and they generally don't have to be "correct" to be enforced in an organization.
I think your sense of "unfairness" might fade if you just thought of these things as constraints that can't be worked around, just like the fact that SSDs wear out over time.
Do you argue with your school teachers that your book report shouldn't be due on Friday because it's not perfect yet?
I read several of your response threads across different websites. The most interesting to me was the LWN one about the Debian tools, where an actual psychologist got involved.
All the discussions seem to show the same issue: You disagree with policies held by people higher up than you, and you struggle with respecting their decisions and moving on.
Instead you keep arguing about things you can't change, and that leads to people getting frustrated and walking away from you.
It really doesn't matter how "right" you may be... not your circus, not your monkeys.
Edit since you expanded your post:
>The most interesting to me was the LWN one about the Debian tools, where an actual psychologist got involved.
To me the comment was patronizing: it implied the problem was purely bad communication on Kent's end, and it shows how immature the people running these operating systems are, putting process ahead of the end user.
>respecting their decisions and moving on.
When this causes real pain for end users, it's validation that the decision was wrong.
> really doesn't matter how "right" you may be... not your circus
It does, because it causes reputational damage for bcachefs. And even beyond reputational damage, delivering a good product to end users should be a priority. In my opinion, projects as big as Debian causing harm to users should be called out rather than ignored; otherwise, things like replacing dependencies out from under programs can become standard practice.
This is the difference between being smart and being wise. If the point of all this grandstanding was that it's so incredibly, vitally important for these patches to get into the kernel, well, guess what: thanks to all this drama, this part of the kernel is now going to go unmaintained entirely. Is that good for the users? Did that help our stated goal in any way? No.
The adult thing is to do right by the users. Critical filesystem bugs are worth blocking the release of any serious operating system over, because the real-world user impact is serious.
>Is that good for the users?
I think it's complicated. It could allow a faster release schedule for bug fixes, which means filesystem issues get addressed sooner.
What serves users best in the long term is predictable processes. "RC = pure bug fixes" is a battle-tested, dependable rule, and its absence causes chaos.
> Critical file system bugs are worth blocking the release
"Experimental" label EXACTLY to prevent this stuff from blocking release. Do you not know that bcachefs is experimental? This is an example of another rule which helps predictability.
>"Experimental" label EXACTLY to prevent this stuff from blocking release
In practice, bcachefs is used in production with real users. If the experimental label prevents critical bug fixes from making it into the kernel, then it would be better to just remove that label.
I'm not sure exactly what you're talking about, and I'm not sure you do either. The discussion that preceded bcachefs being dropped from the Linux kernel mainline involved an attempt to sneak new features into an RC, sidestepping testing and QA work, which was followed up by yet more egregious behavior from the maintainer.
https://www.phoronix.com/news/Linux-616-Bcachefs-Late-Featur...
To solve a bug with the filesystem that people in the wild were hitting. As Linus has said in the past, there is a blurry line between security fixes and bug fixes; likewise, there is a blurry line between filesystem bugs and recovery features.
If you read the email, it's clear that the full feature still needs more work and that this was a basic implementation to address bugs people hit in the wild.
So you acknowledge that this last episode involved trying to push new features into an RC.
As was made abundantly clear, not only are RC branches meant to receive only small, tested bugfixes, but the feature work that was presented was untested and risked introducing major regressions.
All these red flags were raised repeatedly on the mailing list by multiple kernel maintainers. Somehow you're ignoring all the feedback, warnings, and complaints raised by Linux kernel maintainers, and have instead opted to try to gaslight the thread.
bcachefs has a ton of QA: both automated testing and a lot of testers who run my latest code and whom I work with on a daily basis. The patch was well tested; it was for codepaths that we have good regression tests for, it was algorithmically simple, and it worked perfectly to recover the filesystem from the original bug report - and it performed flawlessly again not long after.
I've explained my testing and QA on the lists multiple times.
You, like the other kernel maintainers in that thread, are making wild assertions despite having no involvement with the project.