But the way I see it, a bug report is a bug report; no matter how small or big the bug or the team, it should be addressed.
I don’t know, I’m not exactly a pillar of the FOSS community with weight behind my words.
As the article states, these are AI-generated bug reports. So it's a trillion-dollar company throwing AI slop over the wall and demanding a 90-day turnaround from unpaid volunteers.
There is a convergence of very annoying trends happening: more and more reports are garbage, found and written using AI, with an impact that is questionable at best; the way CVEs are published and classified is idiotic; and the platforms funding vulnerability research, like Google, are increasingly hostile to projects, leaving very little time to actually work on fixes before publication.
This is leading to more and more open source developers throwing in the towel.
What's the point of showering these projects with bug reports when the same tool (or a similar one) can apparently fix the problem too?
The 90-day period is a grace period for the dev, not a demand. If they don't want to fix it, then it goes public.
If this keeps up, there won't be anyone willing to maintain the software, due to burnout.
In today's situation, free software is keeping many companies honest. Losing that kind of leverage would be a loss to society overall.
And public disclosure is going to hurt users, who could include defense, banks, and other critical institutions.
Some of them are not even bugs in the traditional sense of the word, but expected behaviours that can lead to insecure side effects.
I'm not a Google fan, but if the maintainers are unable to understand that, I welcome a fork.
This was a bug, which caused an exploitable security vulnerability. The bug was reported to ffmpeg, over their preferred method for being notified about vulnerabilities in the software they maintain. Once ffmpeg fixed the bug, a CVE number was issued for the purpose of tracking (e.g. which versions are vulnerable, which were never vulnerable, which have a fix).
Having a CVE identifier is important because we can't just talk about "the ffmpeg vulnerability" when there have been a dozen this year, each with different attack surfaces. But it really is just an arbitrary number, while the bug is the actual problem.
Things which were once managed inside a project now have visibility outside of it. You can justify that however you want, such as by the need for an identifier; it doesn't fundamentally change how it impacts the dynamic.
Also, the discussion is not about a specific bug. It's a general discussion regarding how Google handles disclosure in the general case.
Imagine you're a humble volunteer OSS developer. If a security researcher finds a bug in your code they're going to make up a cute name for it, start a website with a logo, Google is going to give them a million dollar bounty, they're going to go to Defcon and get a prize and I assume go to some kind of secret security people orgy where everyone is dressed like they're in The Matrix.
Nobody is going to do any of this for you when you fix it.
Doesn't really fit with your narrative of security researchers as shameless glory hounds, does it?
Note that FFmpeg and cURL have already had maintainers quit due to burnout from too much attention from security researchers.
I suppose you'd prefer they abandon their projects entirely? Because that's the real alternative at this point.