Relevant XKCD: https://xkcd.com/2347/
There have been a lot of cases where something once deemed "unreachable" eventually became reachable, sometimes years later after a refactoring, and then there was an issue.
I'm only half-joking when I say that one of the premier selling points of GPL over MIT in this day and age is that it explicitly deters these freeloading multibillion-dollar companies from depending on your software and making demands of your time.
Things like "panics on certain content" like [1] or [2] are "security bugs" now. By that standard anything that fixes a potential panic is a "security bug". I've probably fixed hundreds if not thousands of "security bugs" in my career by that standard.
Barely qualifies as a "security bug" yet it's rated as "6.2 Moderate" and "7.5 HIGH". To say nothing of the gazillion "high severity" "regular expression DoS" nonsense and whatnot.
And the worst part is all of this makes it so much harder to find actual high-severity issues. It's not harmless spam.
[1]: https://github.com/gomarkdown/markdown/security/advisories/G...
> The viewpoint expressed by Wellnhofer is understandable, though one might argue about the assertion that libxml2 was not of sufficient quality for mainstream use. It was certainly promoted on the project web site as a capable and portable toolkit for the purpose of parsing XML. Open-source proponents spent much of the late 1990s and early 2000s trying to entice companies to trust the quality of projects like libxml2, so it is hard to blame those companies now for believing it was suitable for mainstream use at the time.
I think it's very obvious that the maintainer is sick of this project on every level, but the efforts to trash talk its quality and the contributions of all previous developers don't sit right with me.
This is yet another case where I fully endorse a maintainer's right to reject requests and even step away from their project, but in my opinion it would have been better to just make an announcement about stepping away than to go down the path of trash talking the project on the way out.
I don’t think many projects see acquiring unpaying corporate customers as a goal.
That is not true at all. Availability is also critical. If nobody can use bank accounts, the bank has no purpose.
On some of our projects this has been a great success. We have some strong outside contributors doing work on our project without us needing to pay them. In some cases, those contributors are from companies that are in direct competition with us.
On other projects we've open sourced, we've had people (including competitors) use, without anyone contributing back.
Guess which projects stay open source.
I'm interested in people (not companies, or at least I don't care about companies) being able to read, reference, learn from, or improve the open source software that I write. It's there if folks want it. I basically never promote it, and as such, it has little uptake. It's still useful though, and I use it, and some friends use it. Hooray. But that's all.
I think that's seriously over-estimating the quality of software in mainstream browsers and operating systems. Certainly some parts of mainstream OS's and browsers are very well written. Other parts, though...
Security issues like this are a prime example of why all FOSS software should be at least LGPLed. If a security bug is found in FOSS library, who's the more motivated to fix it? The dude who hacked the thing together and gave it away, or the actual users? Requesting that those users share their fixes is farrr from unreasonable, given that they have clearly found great utility in the software.
If a hacker wants to DoS their own browser I’m fine with that.
And even if it somehow could, it's 1) just not the same thing as "I lost all my money" – that literally destroys lives and the bank not being available for a day doesn't. And 2) almost every bug has the potential to do that in at least some circumstances – circumstances which are almost never true in real-world applications.
(Disclosure: I'm a past collaborator with Nick on other projects. He's a fantastic engineer and a responsible and kind person.)
“Three.”
“Like, the number 3? As in, 1, 2, …?”
“Yes. If you’re expecting me to pick, this will be CVE-3.”
a) nonsense, in which case nobody should spend any time fixing it (I'm thinking of things like the frontend DDoS CVEs that are common), or b) an actual problem, in which case a compliance person at one of these mega tech companies will tell the engineers it needs to be fixed. If the maintainer refuses to be the person fixing it (a reasonable choice), the mega tech company will eventually just do it.
I suppose the risk is the mega tech company only fixes it for their internal fork.
It's true that once upon a time, libxml was a critical path for a lot of applications. Those days are over. Protocols like SOAP are almost dead and there aren't really a whole lot of new networking applications using XML in any sort of manner.
The context where these issues could be security bugs is an ever-vanishing use case.
Now, find a similar bug in zlib or zstd and we could talk about it being an actual security bug.
I wouldn't personally classify these as denial of service. They are just bugs. A 500 status code does not mean that the server uses more resources to process the request than it typically does. OOMing your browser has no impact on others. These should be labeled correctly instead of downplaying the significance of denial of service.
Like I said in my other comment, there are two entities: the end user and the service provider. The service provider/business loses money too when customers cannot make transactions (maybe they promised to keep a specific uptime and now they need to pay compensation). Or they simply go bankrupt because they lost their users.
Even customers may lose money or something else when they can't make transactions. Or maybe identification on some other service is based on bank credentials. The list goes on.
You owe them nothing. That fact doesn’t mean maintainers or users should be a*holes to each other, it just means that as a user, you should be grateful and you get what you get, unless you want to contribute.
Or, to put it another way: you owe them exactly what they’ve paid for!
Many open source developers feel a sense of responsibility for what they create. They are emotionally invested in it. They may want to be liked or not be disliked.
You’re able to not care about these things. Other people care but haven’t learned how to set boundaries.
It’s important to remember, if you’re not understanding what a majority of people are doing, you are the different one. The question should be “Why am I different?” not “Why isn’t everyone else like me?”
“Here’s the solution” comes off far better than, “I don’t understand why you don’t think like me.”
Some open source projects which are well funded and/or motivated to grow are giddy with excitement at the prospect you might file a bug report [1,2]. Other projects will offer $250,000 bounties for top tier security bugs [3].
Other areas of society, like retail and food service, take an exceptionally apologetic, subservient attitude when customers report problems. Oh, sir, I'm terribly sorry your burger had pickles when you asked for no pickles. That must have made you so frustrated! I'll have the kitchen fix it right away, and of course I'll get your table some free desserts.
Some people therefore think doing a good job, as an open source maintainer, means emulating these attitudes. That you ought to be thankful for every bug report, and so very, very sorry to everyone who encounters a crash.
Needless to say, this isn't a sustainable way to run a one-person project, unless you're a masochist.
[1] https://llvm.org/docs/Contributing.html#id5 [2] https://dev.java/contribute/test/ [3] https://bughunters.google.com/about/rules/chrome-friends/574...
DoS results in whatever the system happens to do. It may well result in bad things happening, for example stopping AV from scanning new files, breaking rate limiting systems to allow faster scanning, hogging all resources on a shared system for yourself, etc. It's rarely a security issue in isolation, but libraries are never used in isolation.
> Does a gas pedal use a markdown or XML parser? No.
Cars in general use, extensively: https://en.wikipedia.org/wiki/AUTOSAR
Control integrity, nonrepudiation, confidentiality, privacy, ...
Also, define what you mean by "utility", because it covers everything from the inability to convert a Word document, to the inability to stop a water treatment plant from poisoning people, to the inability to put out a fire.
A bug in a library that does rate limiting arguably is a security issue because the library itself promises to protect against abuse. But if I make a library for running Lua in redis that ends up getting used by a rate limiting package, and my tool crashes when the input contains emoji, that's not a security issue in my library if the rate limiting library allows emails with punycode emoji in them.
"Hogging all of the resources on a shared system" isn't a security bug, it's just a bug. Maybe an expensive one, but hogging the CPU or filling up a disk doesn't mean the system is insecure, just unavailable.
The argument that downtime or runaway resource use is considered a security issue, but only if the problem is in someone else's code, is some Big Brained CTO way of passing the buck onto open source software. If it were true, Postgres autovacuuming due to unpleasant default configuration would be up there with Heartbleed.
Maybe we need a better way of alerting downstream users of packages when important bugs are fixed. But jamming these into CVEs and giving them severities above 5 is just alert noise and makes it confusing to understand what issues an organization should actually care about and fix. How do I know that the quadratic time regexp in a string formatting library used in my logging code is even going to matter? Is it more important than a bug in the URL parsing code of my linter? It's impossible to say because that responsibility was passed all the way downstream to the end user. Every single person needs to make decisions about what to upgrade and when, which is an outrageous status quo.
(And other examples) That's the fallacy of looking for a single root cause. The library had an issue, the system had an issue, and together they resulted in a problem for you. Some issues are more likely to result in security problems than others, so we classify them as such. We'll always be dealing with probabilities here, not clear lines. Otherwise we just end up playing a blame game: "sure, this had a memory overflow, but it's the package's fault for not enabling protections that would downgrade it to a crash", "no, it's the deployment's fault for not limiting that exploit to just this user's data partition", "no, it's the OS's fault for not implementing detailed security policies for every process", ...
This isn't a popularity contest and I'm sick of gamification of literally everything.
https://www.statista.com/chart/25795/active-github-contribut...
"Microsoft is now the leading company for open source contributions on GitHub" (2016)
Not necessarily. A 500 might indicate the process died, which might take more resources to start up, have a cold cache, whatever. If you spam that repeatedly it could easily take down the site.
I agree with your broader point that the risk of such things is grossly overstated, but I think we should be careful about going too far in the opposite direction.
Quite the opposite. NETCONF is XML https://en.wikipedia.org/wiki/NETCONF and all modern ISP/Datacenter routers/switches have it underneath and most of the time as a primary automation/orchestration protocol.
That is true, but the 500 status code alone does not reveal that; it is speculation. Status codes are not always used correctly. It is typically just an indicator to dig deeper. There might be a security issue, but the code itself is not enough to tell.
Maybe this is just the same general problem of false positives. Proving something requires more effort and more time, and people tend to optimise things.
I used to work on a kernel debugging tool and had a particularly annoying security researcher bug me about a signed/unsigned integer check that could result in a target kernel panic with a malformed debug packet. As if you couldn't do the same by just writing random stuff at random addresses, since you are literally debugging the kernel with full memory access. Sad.
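For anyone unfamiliar with the bug class, here is a minimal, made-up sketch of how a signed/unsigned length check can go wrong on a malformed packet (names and constants are invented; this is not the actual tool's code):

```c
#include <stdint.h>
#include <string.h>

#define MAX_PAYLOAD 256

static char payload[MAX_PAYLOAD];

void handle_packet(const char *data, int32_t len)
{
    /* A negative len from a malformed packet passes this signed comparison... */
    if (len > MAX_PAYLOAD)
        return;

    /* ...and then becomes a huge size_t here, copying far past the buffer. */
    memcpy(payload, data, (size_t)len);
}
```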
Does availability not matter to you? Great. For others, maybe it does: a medical device segfaulting or OOMing in an unmanaged way on a config upload is not good. 'Availability' has been a pretty common security concern for maybe 40 years now from an industry view.
That's reasonable, being a maintainer is a thankless job.
However, I think there is a duty to step aside when that happens. If nobody can take the maintainer's place, then so be it; it's still better than the alternative. Being burned out but continuing anyway just hurts everyone.
It's absolutely not the security researcher's fault for reporting real, albeit low-severity, bugs. (To be clear though, it's entirely reasonable for maintainers to treat low-severity security bugs as public. The security policy is the maintainer's decision; it's not right to blame researchers for following the policy maintainers set.)
What I do is I add the following notice to my GitHub issue template: "X is a passion project and issues are triaged based on my personal availability. If you need immediate or ongoing support, then please purchase a support contract through my software company: [link to company webpage]".
This is worse for essentially everyone except the people who should be doing more diligence around the code that they use. If you need code to be bug-free (setting aside that the notion of "bug-free" code is delusional), you're just playing the blame game when you don't put protections in place. And I'm not talking about memory safety, I'm talking about a regexp with pathological edge cases or a panic on user inputs. If you're not handling unexpected failure modes from code you didn't write and inspect, why does that make it a security issue where the onus is on the library maintainer?
Memory safety is arguably always a security issue. But a library segfaulting when NOT dealing with arbitrary external input wouldn't be a CVE in any case, it's just a bug. An external third party would need to be able to push a crafted config to induce a segfault. I'm not sure what kind of medical device, short of a pacemaker that accepts Bluetooth connections, might fall into such a category, but I'd argue that if a crash in your dependencies' code prevents someone's heart from beating properly, relying CVEs to understand the safety of your system is on you.
Should excessive memory allocation in OpenCV for certain visual patterns be a CVE because someone might have built an autonomous vehicle with it that could OOM and (literally) crash? Just because you put the code in the critical path of a sensitive application doesn't mean the code has a vulnerability.
> 'Availability' is a pretty common security concern for maybe 40 years now from an industry view.
Of course! It's a security problem for me in my usage of a library because I made the failure mode of the library have security implications. I don't want my service to go offline, but that doesn't mean I should be entitled to having my application's exposure to failure modes affecting availability be treated on equal footing to memory corruption or an RCE or permissions bypass.
But when you talk about URL parsing in a linter or a regexp in logging code, I think you're implying that the bugs are unimportant, in part, because the code only handles trusted input.
Which is valid enough. The less likely some component is to receive untrusted input, the lower the severity should be.
But beware of going all the way and saying "it's not a bug because we assume trusted input". Whenever you do that, you're also passing down a responsibility to the user: the responsibility to segregate trusted and untrusted data!
Countless exploits have arisen when some parser never designed for untrusted input ended up being exposed to it. Perhaps that's not the parser's fault. But it always happens.
If you want to build secure systems, the only good approach is to stop using libraries that have those kinds of footguns.
It is a bug but it’s not necessarily a security hole in the library. That’s what OP is saying.
1. Agreed it's totally fine for a system to have some bugs or CVEs, and likewise fine for OSS maintainers to not feel compelled to address them. If someone cares, they can contribute.
2. Conversely, it's very useful to divorce some application's use case from the formal understanding of whether third-party components are 'secure' because that's how we stand on the shoulders of giants. First, it lets us make composable systems: if we use CIA parts, with some common definition of CIA, we get to carry that through to bigger parts and applications. Second, on a formal basis, 10-20 years after this stuff was understood to be useful, the program analysis community further realized we can even define them mathematically in many useful ways, where different definitions lead to different useful properties, and enables us to provably verify them, vs just test for them.
So when I say CIA nowadays, I'm actually thinking both mathematically irrespective of downstream application, and from the choose-your-own-compliance view. If some library is C+I but not A... that can be fine for both the library and the downstream apps, but it's useful to have objective definitions. Likewise, something can have gradations of all this -- like maybe it preserves confidentiality in typical threat models & definitions, but not something like "quantitative information flow" models: also ok, but good for everyone to know what the heck they all mean if they're going to make security decisions on it.
The foundation of the internet is something that gets bigger and bigger every year. I understand the sentiment and the reasoning of declaring software a "public good", but it won't scale.
1. Serious. "This is a problem and it needs fixing yesterday."
2. Marketing. "We discovered that if earth had two moons and they aligned right and you had local root already you could blah blah. By the way we are selling this product that will generate a positive feedback loop for your paranoid tendencies, buy buy buy!".
3. Reputation chasing. Same as above, except they don't sell you a product, they want to establish themselves as an expert in aligning moons.
Much easier to do 2 or 3 via "AI" by the way.
> ...there are currently four bugs marked with the security label in the libxml2 issue tracker. Three of those were opened on May 7 by Nikita Sveshnikov, a security researcher who works for a company called Positive Technologies.
I'm confused. Why doesn't Positive Technologies submit a patch or offer to pay the lead maintainer to implement a fix? FYI, Wikipedia tells me:
> Positive Technologies is a Russian information security research company and a global leader in cybersecurity.
The maintainer claims this is caused by allocator failure (malloc returning NULL), but it is still a valid bug. If you don't want to deal with malloc failures, just crash when malloc() returns NULL, instead of not checking the malloc() result at all.
The maintainer could just write a wrapper around malloc that crashes on failure and replace all calls with the wrapper. It seems like an easy fix. Almost no software can keep running when there is no heap memory left, so it makes no sense for the program to continue.
Another solution is to propagate every error back to the caller, but that is difficult, and there is a high probability that the caller won't bother checking the result out of laziness.
A quote from a bug report [1]:
> If xmlSchemaNewValue returns NULL (e.g., due to a failure of malloc), xmlSchemaDupVal checks for this and returns NULL.
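For illustration, a minimal sketch of the crash-on-failure wrapper described above; `xmalloc` is a hypothetical name, not part of libxml2's actual API:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical wrapper: every allocation either succeeds or aborts the
 * process, so a NULL result never propagates into the rest of the code. */
static void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory (%zu bytes)\n", size);
        abort();
    }
    return p;
}
```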
I have mentioned this in the past, but there was a weird shift in culture just after 2000 where open source projects were increasingly made to please their users, corporate or not, and "your project is your CV" became how maintainers would view their projects. It does not have to be this way, and maybe we should (as seems to be happening with libxml2) try to fix this culture?
Not true. Many companies use Linux, for example.
They will just avoid using GPL software in ways that would impact their own intellectual property (linking a GPL library to their proprietary software). Sometimes they will even use it with dubious "workarounds", such as saying "we use a daemon with IPC so that's ok".
Yup. $10 000.
Remind me what the average Google salary is? Or how much profit Google made that year?
Or better still, what is the livable wage where the libxml maintainer lives? You know, the maintainer of the library used in the core Google product?
Anyway, the GPL is there to protect end users, not the maintainer of the project. And if software is running on someone else's server, you are not the user of that software. (Although you use the service and hand over the data, but that's another problem.)
I do not think you can deal with the failure condition the way you think on Linux (and I imagine other operating systems too): with default overcommit settings, malloc() rarely returns NULL in practice; the OOM killer steps in instead.
A DoS affects the availability of an application, and as such is a real security bug. While its severity might be lower than a bug that allows an attacker to "empty bank accounts", and fixing it might get a lower priority, that doesn't make it any less real.
It could still be a security issue, but only if that project treats all availability errors as security issues. But after triage, the outcome is almost always "user can hang own browser on input which isn't likely". And yes, it's a pity I wrote 'almost', which means having to check 99% false alarms.
Asymmetrical gifting is only acceptable with a power imbalance; if the boss gives an employee a gift, it need not be reciprocated.
FOSS actually turns this on its head, since unpaid volunteers are giving billionaires like Bezos gifts. Worse, people argue in favor of it.
Unfortunately that's not how it happens in practice. People run security scanners, and those report that you're using library X version Y which has a known vulnerability with a High CVSS score or whatever. Even if you provide a reasoned explanation of why that vulnerability doesn't impact your use case and you convince your customer's IT team of this, this is seen as merely a temporary waiver: very likely, you'll have the same discussion next time something is scanned and found to contain this.
The whole security audit system and industry is problematic, and often leads to huge amounts of busy work. Overly pessimistic CVEs are not the root cause, but they're still a big problem because of this.
So could the reporter of the bug. Alternatively, he could add an `if(is null){crash}` after the malloc. The fix is easy for anyone that has some knowledge of the code base. The reporter has demonstrated this knowledge in finding the issue.
If a useful PR/patch diff had been provided with the report, I would have expected it to be merged right away.
However, instead of doing the obvious thing to actually solve the issue, the reporter hits the maintainer with this bureaucratic monster:
> We'd like to inform you that we are preparing publications on the discovered vulnerability.
> Our Researchers plan to release the technical research, which will include the description and details of the discovered vulnerability.
> The research will be released after 90 days from the date you were informed of the vulnerability (approx. August 5th, 2025).
> Please answer the following questions:
>
> * When and in what version will you fix the vulnerability described in the Report? (date, version)
> * If it is not possible to release a patch in the next 90 days, then please indicate the expected release date of the patch (month).
> * Please, provide the CVE-ID for the vulnerability that we submitted to you.
>
> In case you have any further questions, please, contact us.
https://gitlab.gnome.org/GNOME/libxml2/-/issues/905#note_243...
The main issue here is really one of tone. The maintainer has been investing his free time to altruistically move the state of software forward and the reporter is too lazy to even type up a tone-adjusted individual message. Would it have been so hard for the reporter to write the following?
> Thank you for your nice library. It is very useful to us! However, we found a minor error that unfortunately might be severely exploitable. Attached is a patch that "fixes" it in an ad-hoc way. If you want to solve the issue in a different way, could we apply the patch first, and then you refactor the solution when you find time? Thanks! Could you give us some insights on when after merging to main/master, the patch will end up in a release? This is important for us to decide whether we need to work with a bleeding edge master version. Thank you again for your time!
Ultimately, it is a very similar message content. However, it feels completely different.
Suppose you are a maintainer without much motivation left, and you get hit with such a message. You will feel like the reporter is an asshole. (I'm not saying he is one.) Do you really care if he gets pwned via this bug? It takes some character strength on the side of the maintainer to not just leave the issue open out of spite.
What would a fair model look like? An open-source infrastructure endowment? Ongoing support contracts per critical library?
At the same time, I think there’s a tension in open source we don’t talk about enough: it’s built to be free and open to all, including the corporations we might wish were more generous. No one signed a contract!
As the article states, Libxml2 was widely promoted (and adopted) as the go-to XML parser. Now, the maintainer is understandably tired. There is now a sustainability problem that is more systemic than personal. How much did the creator of libxml benefit?
I don’t think we should expect companies to do the right thing just because they benefit; that isn’t how open source was meant to be, nor how it is supposed to work
But maybe that’s the real problem
I suspect the maintainer would mind less if it was reported by actual users of the library who encountered a real-world issue (and better still if they offered a patch at the same time), but these bugs are likely the result of scanning tools or someone eyeballing the code for theoretical issues.
In light of the above, the proposed MAINTENANCE-TERMS.md makes a lot of sense, but I think it should also state that security researchers looking for CVEs or concerned about responsible disclosure should contact the vendor of the software distributing the library.
This would put the onus on the large corporates leveraging the library (at no charge) to use their own resources to address security researchers' concerns appropriately; they can probably do most of the fix work themselves and then coordinate with the maintainer only to get a release out in a timely manner.
If maintainers find that people coming to them with security issues have done all the work possible beforehand, they’d probably be completely happy to help.
What I can and will do, however, is write a bug ticket that says what I think the issue is, where my closest suspicion lies as to the cause, and provide either a reproduction or a bugfix patch. Dealing with the remainder of the bureaucracy is not something I see as my responsibility.
There is plenty of closed-source software today that is tested within cost and schedule and running in production. I get the point, but libxml is not one of those cases.
> Denial of service is not resulting in ...
Turns out it can result in deaths. (This was DoS through ransomware.)
That is why I think that "severity" and the usual kinds of vulnerability scores are BS. Anyone composing a product or operating a system has to do their own assessment, taking into account all circumstances.
In the context of the original article this means that it is hopeless anyways, and the maintainer's point of view is valid: in some context everything is "EXTREMELY HIGH SEVERITY, PANIC NOW!". So he might as well not care and treat everything equally. Absolutely rational decision that I do agree with.
Yes, open source has changed since the early 90s. There are more users, and companies use projects and make millions with other people's work.
I feel for the maintainer, given how ungrateful people are. And demanding without giving.
Open Source licenses fall short.
Open Source projects should clearly state what they think about fixing security, taking on external contributions or if they consider the project feature complete. Just like standard licenses, we should have a standard, parseable maintenance "contract".
"I fix whatever you pay for, I fix nothing, I fix how I see fit. Including disclosure, etc."
So everyone is clear about what to expect.
That doesn't help anyone, because it is far too primitive.
A medical device might have a deadly availability vulnerability. That in itself doesn't tell you anything about the actual severity of the vulnerability, because the exploit path might need "the same physical access as pulling the power plug". So not actually a problem.
Or the fix might need a long downtime which harms a number of patients. So maybe a problem, but the cure would be worse than the disease.
Or the vulnerability involves sending "I, Eve Il. Attacker, identified by badge number 666, do want to kill this patient" to the device. So maybe not a problem because an attacker will be caught and punished for murder, because the intent was clear.
The reporter doesn't care about libxml2 being more secure, they only care about having a CVE-ID to brag about discovering a vulnerability and publishing it on their blog. If the reporter used the second message you wrote, they wouldn't get what they want.
I think something like "Null-pointer-dereference issues will not be looked at by core maintainers unless someone provides a patch". That way, someone else who knows how to fix the problem can step in, and users aren't left with the false impression that merely reporting their bug guarantees a solution.
In short, Apple maintain a 448 kB diff which they 'throw across the wall' in the form of an opaque tarball, shorn of all context. Many of the changes contained within look potentially security-related, but it's been released in a way which would require a huge amount of work to unpick.
That level of effort is unfeasible for a volunteer upstream developer, but is a nice juicy resource for a motivated attacker. Apple's behaviour, therefore, is going to be a net negative from a security point of view for all other users of this library.
When it comes to fixing the issues, their customers will have to beg/spam/threaten the maintainers until the problem is solved. They probably won't write a patch; after all, Apple, Google, and Microsoft are only small companies with limited funds.
There isn't much difference between MIT and GPL unless you are selling a product that runs locally or on premises, and with the latter some companies try to work around the GPL by renting out servers with the software on them, either as physical boxes or as something provided on a cloud provider's marketplace.
Look at what you actually have installed on your computer: odds are that unless your job requires something like CAD, photo/video editing, or other highly specialized software, you have nothing made by a large enterprise, with the exception of the OS and Slack/Teams/Zoom.
Unhappy with a maintainer? Fork and maintain it yourself.
Some open source code creates issues in your project? Fix it and try to upstream the fix. Upstream doesn't accept it? Fork and announce the fix.
Unpaid open source developers owe you nothing, you can't demand anything; their work is already a huge charitable contribution to humanity. If you can do better — the fork button is universally available. Don't forget to say thank you to the original authors while you stand on the shoulders of giants.
That's fine for feature requests, but the issue in the present case is bug reports.
Risk, severity, etc. are careful terms that are typically defined contextually, relative to the application... yet CVEs do want some sort of prioritization level reported too, for usability reasons, so it feels shoehorned. Those words are useful in an operational context where a team can prioritize based on them, and agreed, a third-party rating must be reinterpreted for the application's own rating. CVE ratings are an area where it seems "something is better than nothing", and I don't think about it enough to have an opinion on what would be better.
Conversely, saying a library has a public method with an information flow leak is a statement that we can compositionally track (e.g., dataflow analysis). It's useful info that lets us stand on the shoulders of giants.
FWIW, in an age of LLMs, both kinds of information will be getting even more accessible and practical for many more people. I can imagine flipping my view on risk/severity to being more useful as the LLM can do the compositional reasoning in places the automated symbolic analyzers cannot.
Also, the not-so-relevant security bugs are not just a cost to the developers; the library churn also costs more and more users, if users are forced by policy to follow the latest versions in a timely manner in the name of "security".
Of course that's exactly what traditional Linux distributions signed up to do.
Clearly many people have decided that they're better off without the distributions' packaging work. But maybe they should be thinking about how to get the "buffering" part back, and ideally make it work better than the distributions managed to.
Yet this is off topic for the libxml funding/bug debate.
For embedded mission-critical C, libxml is surely unsuitable, just like 99.99% of open source third-party code. Also unneeded. If it crashes the app on the developer machine or in the build pipeline because it runs out of memory, who cares (from a safety point of view)? That has nothing to do with the availability of safety-critical systems in the car.
If getting rid of your input gets rid of the other 20 issues, I would take it.
I agree in theory but it's impractical to achieve due to the coordination effort involved, hence using taxes as a proxy.
> The foundation of the internet is something that gets bigger and bigger every year. I understand the sentiment and the reasoning of declaring software a "public good", but it won't scale.
For a long time, a lot of foundational development was funded by the government. Of course it can scale - the problem is most people don't believe in capable government any more after 30-40 years of neoliberal tax cuts and utter incompetence (California HSR comes to my mind). We used to be able to do great things funded purely by the government, usually via military funding: laser, radar, microwaves and generally a lot of RF technology, even the Internet itself originated out of the military ARPANET. Or the federal highways. And that was just what the Americans did.
The library isn't a worm, it does not find its way into anything. If the bank cares about security they will write their own, use a library that has been audited for such issues, sponsor the development, or use the software provided as is.
You may rejoin with the fact that it could find its way into a project as a dependency of something else. The same arguments apply at any level.
If those systems crash because they balanced their entire business on code written by randos who contribute to an open source project then the organizations in question will have to deal with the consequences. If they want better, they can do what everyone is entitled to: they can contribute to, make, or pay for something better.
> Even if it is a valid security flaw, it is clear why it might rankle a maintainer. The report is not coming from a user of the project, and it comes with no attempt at a patch to fix the vulnerability. It is another demand on an unpaid maintainer's time so that, apparently, a security research company can brag about the discovery to promote its services.
> If Wellnhofer follows the script expected of a maintainer, he will spend hours fixing the bugs, corresponding with the researcher, and releasing a new version of libxml2. Sveshnikov and Positive Technologies will put another notch in their CVE belts, but what does Wellnhofer get out of the arrangement? Extra work, an unwanted CVE, and negligible real-world benefit for users of libxml2.
> So, rather than honoring embargoes and dealing with deadlines for security fixes, Wellnhofer would rather treat security issues like any other bug; the issues would be made public as soon as they were reported and fixed whenever maintainers had time. Wellnhofer also announced that he was stepping down as the libxslt maintainer and said it was unlikely that it would ever be maintained again. It was even more unlikely, he said, with security researchers ""breathing down the necks of volunteers.""
> [...] He agreed that ""wealthy corporations"" with a stake in libxml2 security issues should help by becoming maintainers. If not, ""then the consequence is security issues will surely reach the disclosure deadline (whatever it is set to) and become public before they are fixed"".
> Not true. Many companies use Linux, for example.
I thought it was clear, given that this is a discussion about an open source library, that they were talking about GPL libraries. The way that standalone GPL software is used in companies is qualitatively quite different.
Is it though? Certainly it is according to the C and C++ standards, but POSIX adds:
> References to unmapped addresses shall result in a SIGSEGV signal
While time-traveling UB is a theoretical possibility, in practice POSIX-compliant compilers won't reorder around potentially trapping operations (they will do the reverse: they might remove a null check made redundant by a prior potentially trapping dereference).
A real concern is a null pointer dereferenced with a large attacker-controlled offset that can avoid the trap, but that's more an issue of failing to bounds-check.
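A minimal sketch of that concern, with made-up struct and function names: a plain NULL dereference traps on the unmapped zero page, but a large attacker-controlled offset can carry the access past it:

```c
#include <stddef.h>

struct msg {
    char magic[4];
    unsigned char payload[];  /* flexible array member */
};

unsigned char read_payload_byte(const struct msg *m, size_t offset)
{
    /* If m is NULL and offset is small, this faults on the unmapped zero
     * page and the process gets SIGSEGV, as POSIX describes. With a large,
     * attacker-controlled offset, the computed address can land in mapped
     * memory and skip the trap, so the offset needs its own bounds check. */
    return m->payload[offset];
}
```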
In the latter case I'm wondering if there's an argument to be made for "show me the code or shut up": simply rejecting reports on security issues which are not accompanied by a patch. I'm thinking, would it devalue the CVE on the researcher's resume if the project simply says no, on the grounds that there is no fix?
Probably not.
I'm not saying XML is unused.
I'm saying that the specific space where its use can cause security problems from things like a DDoS is rare.
A legacy backend system that consumes XML docs isn't at risk of a malicious attacker injecting DDOS docs.
When XML is used for data interchange, it's typically only in circumstances where trusted parties are swapping XML docs. Where it's not typically being used is the open Internet. You aren't going to find many new rest endpoints emitting or consuming XML.
And the reason it's being used is primarily legacy. The format and parser are static. Swapping them out would be disruptive and gives few benefits.
That's what it means for something to increasingly become irrelevant. When new use slows or stops and development is primarily on legacy.
But making optimizations that pretend it is an invariant which can't happen, when the specification clearly says it could happen? That's wild, and I would argue out of specification.
Disastrous, apocalyptic consequences is the only way to get the attention of the real decision makers. If libxml2 just vanishes and someone explains to John Chrome or whoever that $150k a year will make the problem go away, it's a non-decision. $150k isn't even a rounding error on a rounding error for Google.
The only way to fight corporations just taking whatever they want is to absolutely wreck their shit when they misbehave.
Call it juvenile, sure, but corporations are not rational adults and usually behave like a child throwing a temper tantrum. There have to be real, painful and ongoing consequences in order to force a corporation to behave.
This is a garbage criticism. It’s perfectly adequate for that for almost everyone. If you are shipping it in a browser to billions of people, that’s a very unique situation, and any security issues are a you problem.
Not sure if this is intended to be a “show both sides” journalism thing but it’s a totally asshole throwaway comment.