Most of them would just pirate in the old days, and most FOSS licences give them clear conscience to behave as always.
1) dedicating compute resources to continuously fuzzing the entire project
2) dedicating engineering resources to validating the results and creating accurate and well-informed bug reports (in this case, a seriously underestimated security issue)
3) additionally for codecs that Google likely does not even internally use or compile, purely for the greater good of FFMPEG's user base
Needless to say, while I agree Google has a penny to spare to fund FFMPEG, and should (although they already contribute), I do not agree with funding this maintainer.
Providing a real CVE is a contribution, not a burden. The ffmpeg folks can ignore it, since by all indications it's pretty minor.
If it's to fix vulnerabilities, it seems within reason to expect a patch. If the reason Google isn't sending a patch is because they truly think the maintainers can fix it better, then that seems fair. But if Google isn't sending a patch because fixing vulns "doesn't scale" then that's some pretty weak sauce.
Maybe part of the solution is creating a separate low priority queue for bug reports from groups that could fix it but chose not to.
I do think that contributing fuzzing and quality bug reports can be beneficial to a project, but it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry.
Rather than going off and digging up ten time bombs which all start counting down together, how about digging up one and defusing it? Or even just contributing a bit of funding towards the team of people working for free to defuse them?
If Google really wants to improve the software quality of the open source ecosystem, the best thing they could do is solve the funding problem. Not a lot of people set out to intentionally write insecure code. The only case that immediately comes to mind is the xz backdoor attempt, which again had a root cause of too few maintainers. I think figuring out a way to get constructive resources to these projects would be a much more impressive way to contribute.
This is a company that takes a lot of pride in being the absolute best of the best. Maybe what they're doing can be justified in some way, but I see why maintainers are feeling bullied. Is Google really being excellent here?
Google is not a monolith. If you asked the board or the shareholders of Google what they thought of open source software quality, they would say they don't give a rat's ass about it. Someone within Google who does care has been given very limited resources to deal with the problem, and is approaching it in the most efficient way they can.
>it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry
Bug reports are not criticism, they are in fact contributions, and the human thing to do when someone contributes to your project is to thank them.
>This is a company that takes a lot of pride in being the absolute best of the best.
There was an era when people actually believed that Google was the best of the best, rather than saying it as a rhetorical trick, and during that era they never would have dreamed of making such self-centered demands of Google. This Project Zero business comes across as the last vestige of a dying culture within Google. Why do people feel the need to be so antagonistic towards it?
>I can tell you with 100% certainty that there are undiscovered vulnerabilities in the Linux kernel right now. Does that mean they should stop shipping?
Hence why I qualified "deliberately".
There's an appeals process: https://www.cve.org/Resources/General/Policies/CVE-Record-Di...
And of course CVE is not the only numbering system; there are also OSV DB, GHSA, notcve.org, etc.
The Linux kernel went in the opposite direction: Every bugfix that looks like it could be relevant to security gets a CVE[1]. The number of CVEs has increased significantly since it became a CNA.
If someone goes on to use that code for serious purposes, that's on them. They were explicitly warned that this is not production commercial code. It's weekend hobby work. There's no ethical obligation to make your hobby code suitable for production use before you share it. People are allowed to write and share programs for fun.
Deliberate malware would be something like an inbuilt trojan that exfiltrates data (e.g. many commercial applications). Completely different.
If they wanted to market ffmpeg as a toy project only, not to be trusted, they could do that, but they are not doing that.
But they don't say they warrant their work. They have a notice reminding you that you are receiving something for free, and that thing comes with no support, and is not meant to be fit for any particular use you might be thinking of, and that if you want support/help fulfilling some purpose, you can pay someone (maybe even them if you'd like) for that service. Because the way the world works is that as a general principle, other people don't owe you something for nothing. This is not just some legal mumbo jumbo. This is how life works for everyone. It's clear that they're not being malicious (they're not distributing a virus or something), and that's the most you can expect from them.
Computer security is always contextual, but as a general rule, if you're going to be accepting random input from unknown parties, you should have an expert that knows how to do that safely. And as mentioned elsewhere in these comments, such an expert would already be compiling out codecs they don't need and running the processing in a sandboxed environment to mitigate any issues. These days even software written in-house is run in sandboxed environments with minimal permissions when competent professionals are making things. That's just standard practice.
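For what it's worth, here is a minimal sketch of what that looks like in practice, assuming a Linux host with bubblewrap (bwrap) installed and an ffmpeg binary built with only the components the pipeline actually needs. The configure flags, paths, and wrapper below are illustrative assumptions on my part, not anything the ffmpeg project recommends:

    # Sketch only: run a stripped-down ffmpeg on untrusted input inside a
    # bubblewrap sandbox with no network access and a read-only system.
    # Assumes ffmpeg was configured with only what's needed, e.g.:
    #   ./configure --disable-everything --enable-protocol=file \
    #               --enable-demuxer=mov --enable-muxer=mp4 \
    #               --enable-decoder=h264
    import subprocess

    def remux_sandboxed(src: str, out_dir: str) -> None:
        """Remux an untrusted .mp4/.mov inside a minimal bwrap sandbox."""
        cmd = [
            "bwrap",
            "--ro-bind", "/usr", "/usr",        # system dirs, read-only
            "--ro-bind", "/lib", "/lib",        # adjust to your distro layout
            "--ro-bind", "/lib64", "/lib64",
            "--proc", "/proc",
            "--dev", "/dev",
            "--unshare-all",                    # no network, fresh namespaces
            "--die-with-parent",
            "--ro-bind", src, "/in.mp4",        # untrusted input, read-only
            "--bind", out_dir, "/out",          # the only writable location
            "/usr/bin/ffmpeg", "-i", "/in.mp4", "-c", "copy", "/out/result.mp4",
        ]
        subprocess.run(cmd, check=True)

The point isn't these exact flags; it's that the person wiring untrusted input into ffmpeg is the one who decides what gets compiled in and what the process is allowed to touch.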
So they should be proud that they support obscure codecs, and by default the onus is on no one to ensure it's free from bugs. If an engineer needs to make a processing pipeline, the onus is always on them to do that correctly. If they want to use a free, unsupported hobby tool as part of their serious engineering project, it's on them to know how to manage any risks involved with that decision. Making good decisions here is literally their job.
All I'm asking for right here is consistency about whether the library is mostly secure. The ethical requirement is to follow through on your claims and implications, while making claims and implications is completely optional.
> Computer security is always contextual, but as a general rule, if you're going to be accepting random input from unknown parties, you should have an expert that knows how to do that safely. And as mentioned elsewhere in these comments, such an expert would already be compiling out codecs they don't need and running the processing in a sandboxed environment to mitigate any issues.
Sandboxing is great defense in depth but most software should not require sandboxing. And expecting everyone to have an expert tweaking compilation is not realistic. Defaults matter, and security expectations need to be established between the site, the documentation, and the defaults, not left as a footgun for only experts to avoid.
People are allowed to make secure, robust software for fun. They can take pride in how good of a job they do at that. They can correctly point out that their software is the best. That still leaves them with no obligations at all for having shared their project for free.
If you are not an expert in hardening computers, don't run random untrusted inputs through it, or pay someone to deliver a turnkey hardened system to you. That someone might be Adobe selling their codecs/processing tools, or it might be an individual or a company like Redhat that just customizes ffmpeg for you. In any case, if you're not paying someone, you should be grateful for whatever goodwill you get, and if you don't like it, you can immediately get a full refund. You don't even have to ask.
The person doing serious things in a professional context is always the one with the obligation to do them correctly. When I was at IBM, we used exactly 1 external library (for very early processor initialization) and 1 firmware blob in the product I worked on, and they were paid deliverables from hardware vendors. We also paid for our compiler. Everything else (kernel, drivers, firmware, tools) was in-house. If companies want to use random free code they found on the Internet without any kind of contract in place, that's up to them.
It is if they fix bugs like this. Under the status quo everything is fine with their actions; they don't need to do anything they aren't already doing.
If they decide they don't want to fix bugs like this, I would say they have the ethical obligation to make it clear that the software is no longer mostly secure. This is quite easy to accomplish. It's not a significant burden in any way.
Basically, if they want to go the less-secure route, I want it to be true that they're "effectively saying" that all caps text you wrote earlier. That's all. A two minute edit to their front page would be enough. They could edit the text that currently says "A complete, cross-platform solution to record, convert and stream audio and video." I'll even personally commit $10 to pay for those two minutes of labor, if they decide to go that route.