To me it's okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe.. well, I'm sure you know better than me that that's not how volunteering works.
codependency is when someone accepts too much responsibility, in particular responsibility for someone else or other things out of their control.
the answer is to have a "healthy neutrality".
The X days is a concession to the developers that the public disclosure will be delayed to give them an opportunity to address the issue.
Let's say that FFmpeg has a severity-10 CVE where a very simple stream can cause RCE. So what?
We are talking about software commonly deployed by end users to encode their own media, which rarely comes in untrusted forms. For an exploit to happen, an attacker needs to get a malicious media file out into circulation that people then commonly transcode via FFmpeg. Not an easy task.
This sure does matter to the likes of Google, assuming they are using FFmpeg for their backend processing. It doesn't matter at all for just about anyone else.
You might as well tell me that `tar` has a CVE. That's great, but I don't generally go around tarring or untarring files I don't trust.
That is not an accurate description? Project Zero has been using a 90-day disclosure policy from the start, so for over a decade.
What changed[0] in 2025 is that they disclose earlier than 90 days that there is an issue, but not what the issue is. And actually, from [1] it does not look like that trial policy was applied to ffmpeg.
> To me it's okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe.. well, I'm sure you know better than me that that's not how volunteering works.
You clearly know that no actual demands or even requests for a fix were made, hence the scare quotes. But given you know it, why call it a "demand"?
[0] https://googleprojectzero.blogspot.com/2025/07/reporting-tra..., discussed at https://news.ycombinator.com/item?id=44724287
[1] https://googleprojectzero.blogspot.com/p/reporting-transpare...
I would be shocked if any company working with user-generated video, from the likes of Zoom or TikTok or YouTube down to small apps all over, did not have it in their pipeline somewhere.
In this world the user is left vulnerable because attackers can use published vulnerabilities that the maintainers are too overwhelmed to fix.
I think this is the heart of the issue and it boils off all of the unimportant details.
If it's a real, serious issue, you want to know about it and you want to fix it. Regardless of who reports it.
If it's a real, but unimportant issue, you probably at least want to track it, but aren't worried about disclosure. Regardless of who reports it.
If it's invalid, or AI slop, you probably just want to close/ignore it. Regardless of who reports it.
It seems entirely irrelevant who is reporting these issues. As a software project, ultimately you make the judgment call about what bugs you fix and what ones you don't.
If the obscure codec is not included by default, or cannot be triggered by any means other than being explicitly asked for, then it would be reasonable to tag it Won't Fix. If it can be triggered by other means, such as automatic file type detection on a renamed file, then it doesn't matter how obscure the feature is; the exploit would affect everyone.
What is the alternative to a time-limited embargo? I don't particularly like the idea of groups of people sitting on exploits they have known about for ages that haven't been publicly disclosed. That is the kind of information that finds its way into the wrong hands.
Of course companies should financially support the developers of the software they depend upon. Many do this for OSS in the form of having a paid employee that works on the project.
Specifically, FFmpeg seems to have a problem in that much of its resource limitation comes from alienating contributors. This isn't isolated to just this bug report.
Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.
I don't agree that the following framing is accurate, but I can mention it because you've already said the important part (about how this issue exists, and merely knowing about it doesn't create required work). But here, by announcing it and registering a CVE, Google is starting the clock. By some metrics it was already running, but the reputational risk clearly was not. This does change priorities, and requires an urgent context switch. Neither are free actions, especially not within FOSS.
To me, as someone who believes everyone, individuals and groups alike, has a responsibility to contribute fairly, I would frame it this way: Google's behavior gives the appearance of weaponizing their cost center externally, given that this is something Google could easily fix, but instead they shirked that responsibility onto unfunded volunteers.
Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
They have the option to pay someone to fix them.
They also have the option to not spend resources finding the bugs in the first place.
If they think these are so damn important to find that it's worth devoting those resources to, then they can damn well pay for fixing them too.
Or they can shut the hell up and let FFmpeg do its thing in the way that has kept it one of the https://xkcd.com/2347/ pieces of everyone's infrastructure for over 2 decades.
Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.
Yes? It's in the license:
>NO WARRANTY
>15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
If I really care, I can submit a patch or pay someone to. The ffmpeg devs don't owe me anything.
Are the bug reports accurate? If so, then they are contributing just as if I found them and sent a bug report, I'd be contributing. Of course a PR that fixes the bug is much better than just a report, but reports have value, too.
The alternative is to leave it unfound, which is not a better alternative in my opinion. It's still there and potentially exploitable even when unreported.
As to why they bother finding these bugs... it's because that's how Google does things. You don't wait for something to break or be exploited; you load your compiler up with sanitizers and go hunting for bugs.
Yeah, this one is kind of trivial, but if the bug-finding infrastructure is already set up, it would be even more stupid if Google just sat on it.
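For the curious, that kind of hunting is not exotic; a minimal sketch of what it looks like, assuming a recent FFmpeg checkout (the clang-asan toolchain preset has been in FFmpeg's configure for a while, but check ./configure --help on your version; the sample filename is illustrative):

    # Build FFmpeg with AddressSanitizer instrumentation, then feed it
    # mutated samples; memory errors show up as ASan reports instead of
    # silent corruption or flaky crashes.
    ./configure --toolchain=clang-asan --disable-stripping
    make -j"$(nproc)"
    ./ffmpeg -i mutated_sample.san -f null -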
Really, the burden is on those shipping products that depend on ffmpeg: they are the ones who have to fix the security issues for their customers. If Google is one of those companies, they should provide the fix in the given time.
If you want to fix up old codecs in FFmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong, or would you rather have to find a broken codec first?
[0] More or less. It seems the actual language is shied away from. Is there a meaningful difference?
Google should provide a fix but it's been standard to disclose a bug after a fixed time because the lack of disclosure doesn't remove the existence of the bug. This might have to be rethought in the context of OSS bugs but an MIT license shouldn't mean other people can't disclose bugs in my project.
Consumers of closed-source software have a pretty reasonable expectation that the creator will fix it in a timely manner. They paid money, and (generally) the creator shouldn't put the customer in a nasty place because of errors.
Consumers of open source software should have zero expectation that someone else will fix security issues. Individuals should understand this; it's part of the deal for us using software for free. Organizations that are making money off of the work of others should have the opposite of an expectation that any vulns are fixed. If they have or should have any concern about vulnerabilities in open source software, then they need to contribute to fixing the issue somehow. Could be submitting patches, paying a contractor or vendor to submit patches, paying a maintainer to submit patches, or contributing in some other way that betters the project. The contribution they pick needs to work well with the volunteers, because some of the ones I listed would absolutely be rejected by some projects -- but not by others.
The issue is that an org like Google, with its absolute mass of technical and financial resources, went looking for security vulnerabilities in open source software with the pretense of helping. But if Google (or whoever) doesn't finish the job, then they're being a piece of shit to volunteers. The rest of the job is reviewing the vulns by hand and figuring out patches that can be accepted with absolutely minimal friction.
To your point, the beginning of the expectation should be that vulns are disclosed, since otherwise we have known insecure software. The rest of the expectation is that you don't get to pretend to do a nice thing while _knowing_ that you're dumping more work on volunteers that you profit from.
In general, wasting the time of volunteers that you're benefiting from is rude.
Specifically, organizations profiting off of volunteer work and wasting their time makes them an extractive piece of shit.
Stop being a piece of shit, Google.
The problem isn't Google reporting vulnerabilities. It's Google using AI to find obscure bugs that affect 2 people on the planet, then making a CVE out of it, without putting any effort into fixing it themselves or funding the project. What are the ffmpeg maintainers supposed to do about this? It's a complete waste of everybody's time.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
It's similar to someone cooking a meal for you, and you go on and complain about every little thing that could have been better instead of at least saying "thank you"!
Here, Google is doing the responsible work of reporting vulnerabilities. But any company productizing ffmpeg usage (Google included) should sponsor a security team to resolve issues in high profile projects like these too.
Sure, the problem is that Google is a behemoth and their internal org structure does not cater to this scenario, but this is what the complaint is about: make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work. Who'd argue against halving their vulnerability finding budget and using the other half to fund a security team that fixes highest priority vulnerabilities instead?
The same question applies if they have time to fix it in six months, since that presumably still gives attackers a large window of time.
In this case the bug was so obscure it’s kind of silly.
This opens up transparency of ffmpeg’s security posture, giving others the chance to fix it themselves, isolate where it’s run or build on entirely new foundations.
All this assuming the reports are in fact pointing to true security issues. Not talking about AI-slop reports.
It's just not possible.
So Google is dedicating resources to finding these bugs
and feeding them to bad actors.
Bad actors who might, hypothetically have had the information before, but definitely do once Google publicizes them.
You are talking about an ideal situation; we are talking about a real situation that is happening in the real world right now, wherein the option of Google reports bug > FFmpeg fixes bug simply does not exist at the scale Google is doing it at.
(The argument also seems backwards to me: Google appears to use a lot of not-inexpensive human talent to produce high quality reports to projects, instead of dumping an ASan log and calling it a day. If all they cared about was shoveling labor onto OSS maintainers, they could make things a lot easier for themselves than they currently do!)
"obscurity isn't security" is true enough, as far as it goes, but is just not that far.
And "put the bugs that won't be fixed soon on a billboard" is worse.
The super naive approach is ignoring that and thinking that "fix the bugs" is a thing that exists.
So while this might be a high security risk because it possibly could allow RCE, the real-world risk is very low.
My understanding is that the bug in question was fixed about 100 times faster than Project Zero's standard disclosure timeline. I don't know what vulnerability report your scenario is referring to, but it certainly is not this one.
> and name-calling the maintainers
Except Google did not "name-call the maintainers" or anything even remotely resembling that. You just made it up, just like GP made up the "demands". It's pretty telling that all these supposed misdeeds are just total fabrications.
Every software I've ever used had a "NO WARRANTY" clause of some kind in the license. Whether an open-source license or a EULA. Every single one. Except, perhaps, for public-domain software that explicitly had no license, but even "licenses" like CC0 explicitly include "Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work ..."
How many people are actively looking for bugs? Google, and then the other guys who don't share their findings, but perhaps sell them to the highest bidder. Seems like Google is doing some good work by just picking big, popular open source projects and seeing if they have bugs, even if they don't intend to fix them. And I doubt Google was actually using the LucasArts video format their latest findings were about.
However, in my mind the discussion whether Google should be developing FFmpeg (beyond the codec support mentioned elsewhere in the thread) or other OSS projects is completely separate from whether they should be finding bugs in them. I believe most everyone would agree they should. They are helping OSS in other ways though, e.g. https://itsfoss.gitlab.io/post/google-sponsors-1-million-to-... .
So thank you for saying the important thing too! :)
What a strange sentence. Google can do a lot of things that nobody else can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.
I think the far more likely result of all the complaints is that Google simply disengages from ffmpeg completely and stops doing any security work on it. I think that would be quite bad for the security of the project. If Google can trivially find bugs at a speed that overwhelms the ffmpeg developers, I would imagine bad actors can also search for and find those same vulnerabilities. And if they know that those vulnerabilities very much exist, but that Google has stopped searching for them at the demand of the ffmpeg project, that would give them extremely high motivation to go looking in a place where they can be almost certain they'll find unreported/unknown vulnerabilities. The result would likely be a lot more 0-day attacks involving ffmpeg, which I do not think anyone regards as a good outcome. (I would consider "Google publishes a bunch of vulnerabilities ffmpeg hasn't fixed so that everyone knows about them" to be a much preferable outcome, personally.)
Now, you might consider that possibility fine - after all, the ffmpeg developers have no obligation to work on the project, and thus to e.g. fix any vulnerabilities in it. But if that's fine, then simply ignoring the reports Google currently makes is presumably also fine, no?
Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with world wide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all powerful either, they can't make Android updates really work even though it might be useful for them.
[1] https://security.googleblog.com/2014/01/ffmpeg-and-thousand-...
Sure, in maybe 1 special lucky case you might be empowered. And in 99 other cases you are subject to a bug without having the remotest control over it, since it's buried away within something you use; you don't even have the option not to use the surrounding service or app, let alone control its subcomponents.
One because they are a rust shop and gstreamer is slightly better supported in that realm (due to an official binding), the other because they do complex transformations with the source streams at a basal level vs high-level batch transformations/transcoding.
Expecting a reporter to fix your security vulnerabilities for you is entitlement.
If your reputation is harmed by your vulnerable software, then fix the bugs. They didn't create the hazard, they discovered it. You created it, and acting like you're entitled to the free labor of those who gave you the heads-up is insane, and trying to extort them for their labor is even worse.
Holding public disclosure over the heads of maintainers if they don't act fast enough is damaging not only to the project, but to end users themselves also. There was no pressing need to publicly disclose this 25 year old bug.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
This doesn't feel like a medium-severity bug, and I think "Perhaps reconsider the severity" is a polite reading. I get that it's a bug either way, but this leaves me with a vague feeling of the ffmpeg maintainer's time being abused.
But yes, things you get for free come with no guarantees, and there should be no expectations put on the gift giver beyond not being actively, intentionally malicious.
"Given enough eyeballs, every bug is shallow" right? Well, Google just contributed some eyeballs, and now a bug has been made shallow. So what's the actual problem here? If some retro game enthusiast had filed the same but report would that be "abusing" the maintainer's time? I would think not, but then we're saying that a bug report can be "abusive" simply by the virtue of who submits it. And I'm really not sure "don't assign employees to research bugs in your open source dependencies and if you do certainly don't submit bug reports on what you find because that's abusive" is the message we want to be sending to corporations that are using these projects.
In no reasonable reading of the situation can I see how anything Google has done here has made things worse:
1) Beforehand, the bug existed, but was either known by no one, or known only by people exploiting it. The maintainers weren't actively looking at or for this particular bug, and so it may have continued to go undiscovered for another 20 years.
2) Then Google was the only one that knew about it (modulo exploiters) and was the only party that could take any steps to protect itself. The maintainers still didn't know, so everyone else would remain unprotected until they discovered it independently.
3) Now everyone knows about the issue and is informed to take whatever actions they deem appropriate to protect themselves. The maintainers know and can choose (or not) to patch the issue, remove the codec, or take any number of other steps, including deciding it's too low priority in their list of todos and advising concerned people to disable it or compile it out if they are worried.
#3 is objectively the better situation for everyone except people who would exploit the issue. Would it be even better if Google made a patch and submitted that too? Sure it would. But that doesn't make what they have done worthless or harmful. And more than that, there's nothing that says they can't or won't do that. Submitting a bug report and submitting a fix don't need to happen at the same time.
It's hard enough convincing corporations to spend any resources at all on contributing to upstream. Dragging them through the mud for not submitting patches in addition to any bug reports they file is, in my estimation, less likely to get you more patches, and more likely to just get you fewer resources spent on looking for bugs in the first place.
Also, "depending on jurisdiction" is a good point as well. I'd forgotten how often I've seen things like "Offer not valid in the state of Delaware/California/wherever" or "If you live in Tennessee, this part of the contract is preempted by state law". (All states here are pulled out of a hat and used for examples only, I'm not thinking of any real laws).
So how long should all bug reporters wait before filing public bugs against open source projects? What about closed source projects? Anyone who works in software knows to ship software is to always have way more things to do than time to do it in. By this logic, we should never make bug reports public until the software maintainers (whether OSS, Apple or Microsoft) has a fix ready. Instead of "with enough eyeballs, all bugs are shallow" the new policy going forward I guess will be "with enough blindfolds, all bugs are low priority".
That is, you'd rather have a world where Google either knows about a vulnerability and refuses to tell anyone, or just doesn't look for them at all, over a world where Google looks for them and lets people know they exist, but doesn't submit its own fix for them.
Why do you want that world? Why do you want corporations to reduce the already meager amounts of work and resources they put into open source software even further?
(To put this in context: I assume that on average a published security vulnerability is known about to at least some malicious actors before it's published. If it's published, it's me finding out about it, not the bad actors suddenly getting a new tool)
This is called fuzzing and it has been standard practice for over a decade. Nobody has had any problem with it until FFmpeg decided they didn’t like that AI filed a report against them and applied the (again, mostly standard at this point) disclosure deadline. FWIW, nobody would have likely cared except they went on their Twitter to complain, so now everyone has an opinion on it.
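For reference, this is the OSS-Fuzz workflow, and anyone can reproduce something close to it. A rough sketch, assuming a checkout of google/oss-fuzz (the exact fuzz target name below is my guess at the per-codec naming pattern; list the built targets to confirm):

    # Build FFmpeg's fuzz targets in the OSS-Fuzz docker environment,
    # then run one of the per-codec decoder fuzzers.
    python3 infra/helper.py build_fuzzers ffmpeg
    python3 infra/helper.py run_fuzzer ffmpeg ffmpeg_AV_CODEC_ID_SANM_fuzzer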
My point was it would be hard to imagine eschewing ffmpeg completely, not that there is no value for other tools and ffmpeg is better at everything. It is so versatile and ubiquitous it is hard to not use it somewhere.
In my experience there are almost always scenarios in the stack where throwing in ffmpeg is simpler and easier, even if there's no proper language binding etc., for some non-core step or other.
From a security standpoint that wouldn't matter. As long as it touches data, security vulnerabilities would be a concern.
It would be surprising, not impossible, to forgo ffmpeg completely. It would be just like how this site is written in Lisp: not something you would typically expect, but not impossible.
What you do with the notice as a dev is up to you, but responsible ones would fix it without throwing a tantrum.
Devs need to stop thinking of themselves as the main character and things get a lot more reasonable.
Either way, users need to know about the vulnerabilities. That way, they can make an informed tradeoff between, for example, disabling the LucasArts Smush codec in their copy of ffmpeg, and being vulnerable to this hole (and probably many others like it).
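For anyone wanting to make that tradeoff, the opt-out happens at build time. A sketch, assuming the SMUSH decoder/demuxer component names (verify with ./configure --list-decoders and --list-demuxers on your checkout):

    # Build a copy of ffmpeg with LucasArts SMUSH support compiled out.
    ./configure --disable-decoder=sanm --disable-demuxer=smush
    make -j"$(nproc)" && make install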
There are many groups searching for security vulnerabilities in popular open source software who deliberately do not disclose them. They do this to save them for their own use or even to sell them to bad actors.
It’s starting to feel silly to demonize Google for doing security research at this point.
Typically disclosures happen after a fix exists.
A lot of these core pieces of infrastructure are maintained by one to three middle-aged engineers in their free time, for nothing. Meanwhile, billion dollar companies use the software everywhere, and often give nothing back except bug reports and occasional license violations.
I mean, I love "responsible disclosure." But the only result of billion dollar corporations drowning a couple of unpaid engineers in bug reports is that the engineers will walk away and leave the code 100% unmaintained.
And yeah, part of the problem here is that C-based data parsers and codecs are almost always horrendously insecure. We could rewrite it all in Rust (and I have in fact rewritten one obscure codec in Rust) or WUFFS. But again, who's going to pay for that?
I mean, yes, the ffmpeg maintainers are very likely to decide this on their own, abandoning the project entirely. This is already happening for quite a few core open source projects that are used by multiple billion-dollar companies and deployed to billions of users.
A lot of the projects probably should be retired and rewritten in safer system languages. But rewriting all of the widely-used projects suffering from these issues would likely cost hundreds of millions of dollars.
The alternative is that maybe some of the billion-dollar companies start making lists of all the software they ship to billions of users, and hire some paid maintainers through the Linux or Apache Foundations.
The OSS maintainer has the responsibility to either fix, or declare they won't fix - both are appropriate actions, and they are free to make this choice. The consumer of OSS should have the right to know what vulns/issues exist in the package, so that they make as informed a decision as they can (such as adding defense in depth for vulns that the maintainers chooses not to fix).
As regards whether it's a bad idea to publicly document security concerns you find regardless of whether you plan on fixing them, it often depends on whether you ask the product manager what they want for their product, or the security-concerned folks what they want for every product :).
It's a call not to stop reporting, but to equally invest in fixing these.
It's about accountability! Who actually gets to do it, once those who ship it to customers care, is on them to figure out (though note that maintainers will have some burden to review, integrate, and maintain the change anyway).
In my opinion, if the problem is money and they cannot raise enough, then somebody should help them with that, shouldn't they?
This is significant when they represent one of the few entities on the planet likely able to find bugs at that scale due to their wealth.
So funding a swarm of bug reports, for software they benefit from, using a scale of resources not commonly available, while not contributing fixes and instead demanding timelines for disclosure, seems a lot more like they'd just like to drive people out of open source.
In the end, Google does submit patches and code to ffmpeg, they also buy consulting from the ffmpeg maintainers. And here they did some security testing and filed a detailed and useful bug report. But because they didn't file a patch with the bug report, we're dragging them through the mud. And for what? When another corporation looks at what Google does do, and what the response this bug report has gotten them, which do you think is the most likely lesson learned?
1) "We should invest equally in reporting and patching bugs in our open source dependencies"
2) "We should shut the hell up and shouldn't tell anyone else about bugs and vulnerabilities we discover, because even if you regularly contribute patches and money to the project, that won't be good enough. Our name and reputation will get dragged for having the audacity to file a detailed bug report without also filing a patch."
Maintaining a reputation might be enough reward for you, but not everyone is happy to work for free for a billion-dollar corporation breathing down their neck. It's puzzling to me why people keep defending their free lunch.
It looks like they are now starting to flood OSS with issues because "our AI tools are great", but don't want to spend a dime helping to fix those issues.
xkcd 2347
Remember, we're not talking about keeping a bug secret; we're talking about using a power tool to generate a fire hose of bugs and only doing that, not fixing them.
And after all that, they just drop an issue, instead of spending a little extra time on producing a patch.
If not pumping out patches allows them to get more security issues fixed, that’s fine!
This bit of ffmpeg is not a Chrome dependency, and likely isn’t used in internal Google tools either.
> Just publishing bug reports by themselves does not make open source projects secure!
It does, especially when you first privately report them to the maintainers and give them plenty of time to fix the bug.
Silly nitpick, but you search for vulnerabilities not CVEs. CVE is something that may or may not be assigned to track a vulnerability after it has been discovered.
Most security issues probably get patched without a CVE ever being issued.
Google submitting a patch does not address this issue. The main work for maintainers here is making the decision whether or not they want to disable this codec, whether or not Google submits a patch to do that is completely immaterial.
An exploit is different. It can affect anyone and is quite pertinent.
These two terms are not interchangeable.
Most vulnerabilities never have CVEs issued.
There’s absolutely no reason to assume that it does not lead to RCE, and certainly no reason whatsoever to invest significant time to prove that one way or the other unless you make a living selling exploits.
Most vulnerabilities never get CVEs even when they’re patched.
Also in general Google does investigate software they don't make money off.
Google did nothing like this.
If people infer that a hypothetical project doesn't care about security because they didn't fix anything, then they're right. It's not google's fault they're factually bad at security. Making someone look bad is not always a bad action.
Drawing attention to that decision by publicly reporting a bug is not a demand for what the decision will be. I could imagine malicious attention-getting but a bug report isn't it.
Wrong. The original files only affect 2 people. A malicious file could be anywhere.
Do you remember when certain sequences of letters could crash iPhones? The solution was not "only two people are likely to ever type that, minimum priority," because people started spreading it on purpose.
The way many (perhaps most) people think of CVEs is badly broken. The CVE system is deeply unreliable, resulting in CVEs being issued for things that are neither bugs nor vulnerabilities while at the same time most things that probably should have CVEs assigned do not have them. Not to even mention the ridiculous mess that is CVSS.
I’m just ranting though. You know all this, almost certainly much better than me.
But they make money off the reputational increase they earn from having their name attached to the investigation. Unless the investigation and report are anonymous and their name not attached (which could be true for some researchers), I can say that they're not doing charity.
It's not some lone report of an important bug, it's AI spam that put forth security issues at a speed greater than they have resources to fix it.
Nobody is against Google reporting bugs, but they use automatic AI to spam them and then expect a prompt fix. If you can't expect the maintainers to fix the bug before disclosure, then it is a balancing act: Is the bug serious enough that users must be warned and avoid using the software? Will disclosing the bug now allow attackers to exploit it because no fix has been made?
In this case, this bug (imo) is not serious enough to warrant a short disclosure time, especially if you consider *other* security notices that may have a bigger impact. The chances of an attacker finding this on their own and exploiting it are low, but now everybody is aware and you have to rush to update.
P.S. I'm an open source maintainer myself, and I used to think, "oh, OSS developers should just stop whining and fix stuff." Fast forward a few years, and now I'm buried under false-positive "reports" and overwhelmed by non-coding work (deleting issue spam, triage, etc.)
P.P.S. What's worse, when your library is a security component the pressure’s even higher - one misplaced loc could break thousands of apps (we literally have a million downloads at nuget [1] )
What do you believe would be an appropriate timeline?
>especially if you consider other security notices that may have a bigger impact.
This is a bug in the default config that is likely to result in RCE, it doesn’t get that much worse than this.
That just means the script kiddies will have more trouble, while scarier actors like foreign intelligence agencies will have free rein.
Whether or not AI found it, clearly a human refined it and produced a very high quality bug report. There's no AI slop here. No spam.
All I am saying is that you should be as mindful to open source maintainers as you are to the people at companies.
To my original comment, the underlying problem here IMO is wanting to have it both ways: you can adhere to common notions of security for reputational reasons, or you can exercise your right as a maintainer to say “I don’t care,” but you can’t do both.
Making open source code more secure and at the same time less prevalent seems like a net loss for society. And if those researchers could spare some time to write patches for open source projects, that might benefit society more than dropping disclosure deadlines on volunteers.
At least, if this information is public, someone can act on it and sandbox ffmpeg for their use case, if they think it's worth it.
I personally prefer to have this information be accessible to all users.
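As a sketch of what that sandboxing can look like for a one-off transcode, using bubblewrap (bind paths vary by distro, so treat this as illustrative):

    # Run ffmpeg on an untrusted file with no network, no IPC, and only
    # the working directory writable.
    bwrap --ro-bind /usr /usr --ro-bind /lib /lib --ro-bind /lib64 /lib64 \
          --bind "$PWD" /work --chdir /work \
          --unshare-all --die-with-parent \
          ffmpeg -i untrusted.mkv -c:v libx264 out.mp4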
It's almost like bitching about the "free labor" open source projects are getting from their users, especially when that labor is of good quality and comes from a user that is actively contributing both code and money to the project, is a losing strategy for open source fans and maintainers.
> All I am saying is that you should be as mindful to open source maintainers as you are to the people at companies.
And all I'm saying is there is nothing "un-mindful" about reporting real bugs to an open source project, whether that report is public or not, and especially when that report is well crafted and actionable. If the report were for something that wasn't a bug, or if the report was a low-quality "foo is broke, plz to fix" report with no actionable information, or if the report actually came with demands for responses and commitment timelines, then it would be a different matter. But ffmpeg runs a public bug tracker. To say that making public bug reports is somehow disrespectful of the maintainers is ridiculous.
Google does contribute some patches for codecs they actually consume e.g. https://github.com/FFmpeg/FFmpeg/commit/b1febda061955c6f4bfb..., the bug in question was just an example of one the bug finding tool found that they didn't consume - which leads to this conversation.
Yes, because publicly disclosing the vulnerability means someone will have enough information to exploit it. Without public disclosure, the chance of that is much lower.
(But also, while this is great, it doesn’t make an expectation of a patch with a security report reasonable! Most security reports don’t come with patches.)
High quality bug reports like this are very good for open source projects.
Then point to the "PoC + Patch or GTFO" sign when reports come in. If you use a library with a "NO WARRANTY" license clause in an application where you're responsible for failures, it's on you to fix or mitigate the issues, not on the library authors.
This is true. Congratulations. Man we are all so smart for getting that right. How could anyone get something so obvious and simple wrong?
What you leave out is "in a vacuum" and "all else being equal".
We are not in a vacuum and all else is not equal, and there are more than those 2 factors alone that interact.