To me it's okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I'm sure you know better than me that that's not how volunteering works.
In this world the user is left vulnerable, because attackers can use published vulnerabilities that the maintainers are too overwhelmed to fix.
Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.
Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.
If you want to fix up old codecs in ffmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong, or would you rather have to find a broken codec first?
What a strange sentence. Google can do a lot of things that nobody can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.
Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with worldwide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all-powerful either; they can't make Android updates really work even though it might be useful for them.
[1] https://security.googleblog.com/2014/01/ffmpeg-and-thousand-...
This is significant when they represent one of the few entities on the planet likely able to find bugs at that scale due to their wealth.
So funding a swarm of bug reports for software they benefit from, using a scale of resources not commonly available, while not contributing fixes and instead demanding disclosure timelines, seems a lot more like they'd just like to drive people out of open source.
It looks like they are now starting to flood OSS with issues because "our AI tools are great", but don't want to spend a dime helping to fix those issues.
xkcd 2347
This bit of ffmpeg is not a Chrome dependency, and likely isn’t used in internal Google tools either.
> Just publishing bug reports by themselves does not make open source projects secure!
It does, especially when you first privately report them to the maintainers and give them plenty of time to fix the bug.
An exploit is different. It can affect anyone and is quite pertinent.
Nobody is against Google reporting bugs, but they use automated AI tooling to spam them and then expect a prompt fix. If you can't expect the maintainers to fix the bug before disclosure, then it is a balancing act: Is the bug serious enough that users must be warned and avoid using the software? Will disclosing the bug now allow attackers to exploit it because no fix has been made?
In this case, this bug (imo) is not serious enough to warrant a short disclosure time, especially if you consider *other* security notices that may have a bigger impact. The chances of an attacker finding this on their own and exploiting it are low, but now everybody is aware and you have to rush to update.
What do you believe would be an appropriate timeline?
>especially if you consider other security notices that may have a bigger impact.
This is a bug in the default config that is likely to result in RCE, it doesn’t get that much worse than this.