277 points jwilk | 93 comments
1. arp242 ◴[] No.44382233[source]
A lot of these "security bugs" are not really "security bugs" in the first place. Denial of service does not result in people's bank accounts being emptied or nude selfies being spread all over the internet.

Things like "panics on certain content" like [1] or [2] are "security bugs" now. By that standard anything that fixes a potential panic is a "security bug". I've probably fixed hundreds if not thousands of "security bugs" in my career by that standard.

These barely qualify as "security bugs", yet they're rated "6.2 Moderate" and "7.5 HIGH". To say nothing of the gazillion "high severity" "regular expression DoS" advisories and whatnot.
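
For reference, that 7.5 falls out of the CVSS formula almost automatically once you assume the worst case on every axis for a remotely reachable crash (a sketch; the actual vectors on those specific advisories may differ):

  CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H  ->  7.5 (High)

i.e. network attack vector, low complexity, no privileges or user interaction required, no confidentiality or integrity impact, high availability impact. Any DoS-ish report scored that way lands at "HIGH" regardless of how realistic the attack is.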

And the worst part is all of this makes it so much harder to find actual high-severity issues. It's not harmless spam.

[1]: https://github.com/gomarkdown/markdown/security/advisories/G...

[2]: https://rustsec.org/advisories/RUSTSEC-2024-0373.html

replies(13): >>44382268 #>>44382299 #>>44382855 #>>44384066 #>>44384368 #>>44384421 #>>44384513 #>>44384791 #>>44385347 #>>44385556 #>>44389612 #>>44390124 #>>44390292 #
2. nicce ◴[] No.44382268[source]
> A lot of these "security bugs" are not really "security bugs" in the first place. Denial of service does not result in people's bank accounts being emptied or nude selfies being spread all over the internet.

That is not true at all. Availability is also critical. If nobody can use their bank accounts, the bank has no purpose.

replies(5): >>44382300 #>>44382443 #>>44382474 #>>44382869 #>>44383755 #
3. icedchai ◴[] No.44382299[source]
Everything is a "security bug" in the right (wrong?) context, I suppose.
replies(1): >>44382581 #
4. bogeholm ◴[] No.44382300[source]
Security and utility are separate qualities.

You're correct that inaccessible money is useless; however, one could make the case that it's secure.

replies(4): >>44382341 #>>44382350 #>>44382887 #>>44383068 #
5. nicce ◴[] No.44382341{3}[source]
I think you are only considering the users. For the service provider, availability has a larger meaning, because the lack of it can bankrupt your business. It is about securing operations.
replies(3): >>44382398 #>>44382482 #>>44382532 #
6. marcusb ◴[] No.44382350{3}[source]
https://www.sentinelone.com/cybersecurity-101/cybersecurity/...
7. leni536 ◴[] No.44382398{4}[source]
Virtually all bugs have some cost. Security bugs tend to be more expensive than others, but it doesn't mean that all very expensive bugs are security bugs.
8. antonymoose ◴[] No.44382443[source]
I routinely handle regex DoS complaints on front-end input validation…

If a hacker wants to DoS their own browser I’m fine with that.

replies(2): >>44382485 #>>44382527 #
9. arp242 ◴[] No.44382474[source]
Many of these issues are not the type of issues that will bring down an entire platform; most are of the "if I send wrong data, the server will return with a 500 for that request" or "my browser runs out of memory if I use a maliciously crafted regexp". Well, whoopdeedoo.

And even if it somehow could, it's 1) just not the same thing as "I lost all my money" – that literally destroys lives and the bank not being available for a day doesn't. And 2) almost every bug has the potential to do that in at least some circumstances – circumstances which are almost never true in real-world applications.

replies(1): >>44382589 #
10. em-bee ◴[] No.44382482{4}[source]
not paying rent can get you evicted. and not paying your medical bill can get you denied care. (in china most medical care is not very expensive, but every procedure has to be paid in advance. you probably won't be denied emergency care so your life would not be in immediate danger, but sometimes an optional scan discovers something life threatening that you weren't aware of so not being able to pay for it can put you at risk)
11. Onavo ◴[] No.44382485{3}[source]
Until the same library is used for their "isomorphic" backend...
replies(1): >>44382895 #
12. nicce ◴[] No.44382527{3}[source]
This depends on the context, to be fair. Front-end DoS can suddenly expand into a botnet DDoS if you can trigger it just by serving a specific kind of URL. E.g. a search that goes into an endless loop making requests to the backend.
replies(1): >>44384851 #
13. arp242 ◴[] No.44382532{4}[source]
If a panic or null pointer deref in some library causes your entire business to go down long enough that you go bankrupt, then you probably deserve to go out of business because your software is junk.
replies(1): >>44382657 #
14. cogman10 ◴[] No.44382581[source]
Well, that's sort of the problem.

It's true that once upon a time, libxml was on the critical path for a lot of applications. Those days are over. Protocols like SOAP are almost dead and there aren't really a whole lot of new networking applications using XML in any sort of manner.

The context where these issues could be security bugs is an ever-vanishing use case.

Now, find a similar bug in zlib or zstd and we could talk about it being an actual security bug.

replies(4): >>44383188 #>>44383685 #>>44383777 #>>44385767 #
15. nicce ◴[] No.44382589{3}[source]
> Many of these issues are not the type of issues that will bring down an entire platform; most are of the "if I send wrong data, the server will return with a 500 for that request" or "my browser runs out of memory if I use a maliciously crafted regexp". Well, whoopdeedoo.

I wouldn't personally classify these as denial of service. They are just bugs. A 500 status code does not mean that the server used more resources to process the request than it typically does. OOMing your own browser has no impact on others. These should be labeled correctly instead of diluting the significance of denial of service.

Like I said in my other comment, there are two entities: the end-user and the service provider. The service provider/business loses money too when customers cannot make transactions (maybe they promised a specific uptime and now have to pay compensation). Or they simply go bankrupt because they lost their users.

Even customers may lose money or something else when they can't make transactions. Or maybe identification on some other service is based on bank credentials. The list goes on.

replies(1): >>44383552 #
16. nicce ◴[] No.44382657{5}[source]
I believe you know well that bankruptcy is the worst case. Many business functions can be so critical that a 24-hour disturbance is enough to cause high financial damage or even loss of life. A bug in a car's brakes that prevents their use is also denial of service.
replies(1): >>44382731 #
17. arp242 ◴[] No.44382731{6}[source]
Most of these issues won't cause a 24-hour disturbance either, or indeed take the entire system down at all.

And no one is talking about safety-critical systems. You are moving the goalposts. Does a gas pedal use a markdown or XML parser? No.

replies(1): >>44383023 #
18. viraptor ◴[] No.44382855[source]
> Denial of service does not result in ...

DoS results in whatever the system happens to do. It may well result in bad things happening, for example stopping AV from scanning new files, breaking rate limiting systems to allow faster scanning, hogging all resources on a shared system for yourself, etc. It's rarely a security issue in isolation, but libraries are never used in isolation.

replies(2): >>44383029 #>>44383134 #
19. p1necone ◴[] No.44382869[source]
I think it's context dependent whether DoS is on par with data loss/extraction, including whether it's actually a security issue or not. I would argue DoS for a bank (assuming it affects backend systems and not just the customer portal) would be a serious security issue given the kinds of things it could impact.
20. hsbauauvhabzb ◴[] No.44382887{3}[source]
If drug dispensers can't dispense life-saving drugs due to a DoS, utility has failed and lives will be lost. Would you describe that as secure?
21. hsbauauvhabzb ◴[] No.44382895{4}[source]
Server side rendering is all the rage again, so yeah it might be.
22. nicce ◴[] No.44383023{7}[source]
The point was about the importance of availability.

> Does a gas pedal use a markdown or XML parser? No.

Cars in general use it extensively: https://en.wikipedia.org/wiki/AUTOSAR

replies(2): >>44383882 #>>44385743 #
23. ivanjermakov ◴[] No.44383029[source]
DoSing autonomous vehicle brake controls...
replies(1): >>44383140 #
24. burnt-resistor ◴[] No.44383068{3}[source]
Define what you mean by "security".

Control integrity, nonrepudiation, confidentiality, privacy, ...

Also, define what you mean by "utility", because "utility" covers everything from the inability to convert a Word document, to the inability to stop a water treatment plant from poisoning people, to the ability to stop a fire.

25. bastawhiz ◴[] No.44383134[source]
An AV system stopping because of a bug in a library is bad, but that's not because the library has a security bug. It's a security problem because the system itself does security. It would be wild if any bug that leads to a crash or a memory leak was a "security" bug because the library might have been used by someone somewhere in a context that has security implications.

A bug in a library that does rate limiting arguably is a security issue because the library itself promises to protect against abuse. But if I make a library for running Lua in redis that ends up getting used by a rate limiting package, and my tool crashes when the input contains emoji, that's not a security issue in my library if the rate limiting library allows emails with punycode emoji in them.

"Hogging all of the resources on a shared system" isn't a security bug, it's just a bug. Maybe an expensive one, but hogging the CPU or filling up a disk doesn't mean the system is insecure, just unavailable.

The argument that downtime or runaway resource use is a security issue, but only when the problem is in someone else's code, is some Big Brained CTO way of passing the buck onto open source software. If it were true, Postgres autovacuuming with its unpleasant default configuration would be up there with Heartbleed.

Maybe we need a better way of alerting downstream users of packages when important bugs are fixed. But jamming these into CVEs and giving them severities above 5 is just alert noise and makes it confusing to understand what issues an organization should actually care about and fix. How do I know that the quadratic time regexp in a string formatting library used in my logging code is even going to matter? Is it more important than a bug in the URL parsing code of my linter? It's impossible to say because that responsibility was passed all the way downstream to the end user. Every single person needs to make decisions about what to upgrade and when, which is an outrageous status quo.

replies(3): >>44383193 #>>44383817 #>>44384248 #
26. bastawhiz ◴[] No.44383140{3}[source]
I hope my brakes aren't parsing xml
27. fires10 ◴[] No.44383188{3}[source]
SOAP is used far more than most people realize. I deal extensively in "cutting edge" industries that rely heavily on SOAP or SOAP based protocols. Supply chain systems and manufacturing.
replies(1): >>44388327 #
28. viraptor ◴[] No.44383193{3}[source]
> An AV system stopping because of a bug in a library is bad, but that's not because the library has a security bug.

(And other examples.) That's the fallacy of looking for a single root cause. The library had an issue, the system had an issue, and together they resulted in a problem for you. Some issues are more likely than others to result in security problems, so we classify them as such. We'll always be dealing with probabilities here, not clear lines. Otherwise we just end up playing a blame game: "sure, this had a memory overflow, but it's the package's fault for not enabling protections that would downgrade it to a crash", "no, it's the deployment's fault for not limiting that exploit to just this user's data partition", "no, it's the OS's fault for not implementing detailed security policies for every process", ...

replies(1): >>44384117 #
29. bawolff ◴[] No.44383552{4}[source]
> I wouldn't personally classify these as denial of service. They are just bugs. 500 status code does not mean that server uses more resources to process it than it typically does

Not necessarily. A 500 might indicate the process died, and restarting it might take more resources, hit cold caches, whatever. If you spam that repeatedly it could easily take down the site.

I agree with your broad point that the risk of such things is grossly overstated, but I think we should be careful about going too far in the opposite direction.

replies(1): >>44383721 #
30. betaby ◴[] No.44383685{3}[source]
> there aren't really a whole lot of new networking applications using XML in any sort of manner.

Quite the opposite. NETCONF is XML (https://en.wikipedia.org/wiki/NETCONF), and all modern ISP/datacenter routers and switches have it underneath, most of the time as the primary automation/orchestration protocol.

31. nicce ◴[] No.44383721{5}[source]
> Not necessarily. 500 might indicate the process died, which might take more resources to startup, have cold cache, whatever. If you spam that repeatedly it could easily take down the site

That is true, but the status code 500 alone does not reveal that; it is speculation. Status codes are not always used correctly. It is typically just indicator to dig deeper. There might be a security issue, but the code itself is not enough.

Maybe this just the same general problem of false positives. Proving something requires more effort and more time and people tend to optimise things.

replies(1): >>44383877 #
32. SchemaLoad ◴[] No.44383755[source]
If every single bug in libxml is a business ending scenario for the bank, then maybe the bank can afford to hire someone to work on those bugs rather than pestering a single volunteer.
33. monocasa ◴[] No.44383777{3}[source]
Unfortunately stuff like SAML is XML.

That being said, I don't think that libxml2 has support for the dark fever dream that is XMLDSig, which SAML depends on.

34. lmeyerov ◴[] No.44383817{3}[source]
Traditional security follows the CIA triad: Confidentiality (ex: data leaks), Integrity (ex: data deletion), and Availability (ex: site down). Something like SOC2 compliance typically has you define where you are on these, for example.

Does availability not matter to you? Great. For others, maybe it does: if you are shipping some medical device, segfaulting or OOMing in an unmanaged way on a config upload is not good. 'Availability' has been a pretty common security concern for maybe 40 years now, from an industry view.

replies(4): >>44383876 #>>44384078 #>>44384219 #>>44385894 #
35. int_19h ◴[] No.44383876{4}[source]
We're talking about what's reasonable to expect as a baseline. A higher standard isn't wrong, obviously, but those who need it shouldn't be expecting others to provide it by default, and most certainly not for free.
36. bawolff ◴[] No.44383877{6}[source]
True, but in the context of the article we are talking about a null pointer dereference. That is almost certainly going to cause a segfault and require restarting the process.
37. int_19h ◴[] No.44383882{8}[source]
Great, then we have someone with both resources and an incentive to write and maintain an XML parser with strict availability guarantees.
replies(1): >>44385769 #
38. dcow ◴[] No.44384066[source]
Full disclosure is the only fair and humane way to handle “security” bugs, because as you point out, every bug is a security bug to someone. And adversaries will make their way onto embargo lists anyway. It’s good to see a principled maintainer other than openbsd fighting the fight.
39. bastawhiz ◴[] No.44384117{4}[source]
But it's not treated as dealing in probabilities. The CVEs (not that I think they're even worthwhile) are given scores that ignore the likelihood of an issue being used in a security-sensitive context. They're scored for the worst-case scenario. And if we're dealing with probabilities, it puts less onus on the people who actually do things where security matters and spams everyone else, where those probabilities are unrealistic, which is the huge majority of cases.

This is worse for essentially everyone except the people who should be doing more diligence around the code they use. If you need code to be bug-free (setting aside that the notion of "bug-free" code is delusional), you're just playing the blame game when you don't put protections in place. And I'm not talking about memory safety, I'm talking about a regexp with pathological edge cases or a panic on user input. If you're not handling unexpected failure modes from code you didn't write and inspect, why does that make it a security issue where the onus is on the library maintainer?

replies(1): >>44384590 #
40. bastawhiz ◴[] No.44384219{4}[source]
> some medical device segfaulting or OOMing in an unmanaged way

Memory safety is arguably always a security issue. But a library segfaulting when NOT dealing with arbitrary external input wouldn't be a CVE in any case, it's just a bug. An external third party would need to be able to push a crafted config to induce a segfault. I'm not sure what kind of medical device, short of a pacemaker that accepts Bluetooth connections, might fall into such a category, but I'd argue that if a crash in your dependencies' code prevents someone's heart from beating properly, relying on CVEs to understand the safety of your system is on you.

Should excessive memory allocation in OpenCV for certain visual patterns be a CVE because someone might have built an autonomous vehicle with it that could OOM and (literally) crash? Just because you put the code in the critical path of a sensitive application doesn't mean the code has a vulnerability.

> 'Availability' is a pretty common security concern for maybe 40 years now from an industry view.

Of course! It's a security problem for me in my usage of a library because I made the failure mode of the library have security implications. I don't want my service to go offline, but that doesn't mean I should be entitled to having my application's exposure to failure modes affecting availability be treated on equal footing to memory corruption or an RCE or permissions bypass.

replies(2): >>44384385 #>>44384401 #
41. comex ◴[] No.44384248{3}[source]
This is a tangent from your main argument about DoS.

But when you talk about URL parsing in a linter or a regexp in logging code, I think you're implying that the bugs are unimportant, in part, because the code only handles trusted input.

Which is valid enough. The less likely some component is to receive untrusted input, the lower the severity should be.

But beware of going all the way and saying "it's not a bug because we assume trusted input". Whenever you do that, you're also passing down a responsibility to the user: the responsibility to segregate trusted and untrusted data!

Countless exploits have arisen when some parser never designed for untrusted input ended up being exposed to it. Perhaps that's not the parser's fault. But it always happens.

If you want to build secure systems, the only good approach is to stop using libraries that have those kinds of footguns.

replies(2): >>44384365 #>>44390342 #
42. scott_w ◴[] No.44384365{4}[source]
> But when you talk about URL parsing in a linter or a regexp in logging code, I think you're implying that the bugs are unimportant, in part, because the code only handles trusted input.

It is a bug but it’s not necessarily a security hole in the library. That’s what OP is saying.

replies(1): >>44384900 #
43. pjmlp ◴[] No.44384368[source]
Any bug that can be used directly, or indirectly alongside others, is a security bug.

A denial of service in a system related to emergency phone calls can result in people's deaths.

44. pjmlp ◴[] No.44384385{5}[source]
Yes it should. Software vendors will eventually be liable, like in any other industry that has been around for centuries.
replies(1): >>44390156 #
45. lmeyerov ◴[] No.44384401{5}[source]
I agree on the first part, but it's useful to be more formal on the latter --

1. Agreed it's totally fine for a system to have some bugs or CVEs, and likewise fine for OSS maintainers to not feel compelled to address them. If someone cares, they can contribute.

2. Conversely, it's very useful to divorce any given application's use case from the formal understanding of whether third-party components are 'secure', because that's how we stand on the shoulders of giants. First, it lets us build composable systems: if we use CIA parts, with some common definition of CIA, we get to carry that through to bigger parts and applications. Second, on a formal basis, 10-20 years after this stuff was understood to be useful, the program analysis community realized we can even define these properties mathematically in many useful ways, where different definitions lead to different useful properties, which enables us to provably verify them rather than just test for them.

So when I say CIA nowadays, I'm actually thinking both mathematically, irrespective of the downstream application, and from the choose-your-own-compliance view. If some library is C+I but not A... that can be fine for both the library and the downstream apps, but it's useful to have objective definitions. Likewise, something can have gradations of all this -- maybe it preserves confidentiality under typical threat models and definitions, but not under something like "quantitative information flow" models: also OK, but it's good for everyone to know what these things mean if they're going to make security decisions based on them.

replies(1): >>44385469 #
46. nottorp ◴[] No.44384421[source]
"Security" announcements seem to be of 3 kinds lately:

1. Serious. "This is a problem and it needs fixing yesterday."

2. Marketing. "We discovered that if earth had two moons and they aligned right and you had local root already you could blah blah. By the way we are selling this product that will generate a positive feedback loop for your paranoid tendencies, buy buy buy!".

3. Reputation chasing. Same as above, except they don't sell you a product, they want to establish themselves as an expert in aligning moons.

Much easier to do 2 or 3 via "AI" by the way.

47. codedokode ◴[] No.44384513[source]
Dereferencing a null pointer is an error. It is a valid bug.

The maintainer claims this is caused by allocator failure (malloc returning null), but it is still a valid bug. If you don't want to deal with malloc failures, just crash when malloc() returns null, instead of not checking the malloc() result at all.

The maintainer could just write a wrapper around malloc that crashes on failure and replace all calls with the wrapper. It seems like an easy fix. Because almost no software can run where there is no heap memory so it makes no sense for the program to continue.
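
A minimal sketch of such a wrapper (`xmalloc` is a conventional name for this pattern, not libxml2's actual API):

  #include <stdlib.h>

  /* Fail fast: crash here, at the failed allocation, instead of
     dereferencing NULL at some arbitrary later point. */
  void *xmalloc(size_t size)
  {
      void *p = malloc(size);
      if (p == NULL)
          abort();   /* or call a project-wide fatal-error handler */
      return p;
  }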

Another solution is to propagate every error back to the caller, but that is difficult, and there is a high probability that the caller won't bother checking the result out of laziness.

A quote from a bug report [1]:

> If xmlSchemaNewValue returns NULL (e.g., due to a failure of malloc), xmlSchemaDupVal checks for this and returns NULL.

[1] https://gitlab.gnome.org/GNOME/libxml2/-/issues/905

replies(5): >>44384716 #>>44385168 #>>44385200 #>>44386885 #>>44387099 #
48. viraptor ◴[] No.44384590{5}[source]
The score assigned to an issue has to be the worst-case one, because whoever is assessing it will not know how people use the library. The downstream users can then evaluate the issue, say with certainty that it does/doesn't/kinda affects them, and lower their internal impact rating. People outside that system would only be guessing. And you really don't want to guess "nobody would use it this way, it's fine" if it turns out some huge private deployment does.
replies(2): >>44385113 #>>44390221 #
49. worthless-trash ◴[] No.44384716[source]
A while back I remember looking at the kernel source code: when overcommit is enabled, malloc would not fail if it couldn't allocate memory; it would ONLY fail if you attempted to allocate more memory than the available address space.

I don't think you can handle the failure condition the way you think on Linux (and I imagine on other operating systems too).

replies(2): >>44385434 #>>44386490 #
50. yeyeyeyeyeyeyee ◴[] No.44384791[source]
A basic definition of a security bug is something that violates confidentiality, integrity, or availability.

A DoS affects the availability of an application, and as such is a real security bug. While its severity might be lower than that of a bug that allows attackers to "empty bank accounts", and fixing it might get a lower priority, that doesn't make it any less real.

replies(2): >>44384859 #>>44386086 #
51. talkin ◴[] No.44384851{4}[source]
No. The regex DoS class of bugs is about catastrophic backtracking or looping inside the regex engine. It's a completely isolated component, just hogging CPU inside the regex engine. It may have 'DoS' in its name, but there's no relation to network (D)DoS attacks.

It could still be a security error, but only if all availability errors are, for that project. And after triage, the outcome is almost always "the user can hang their own browser on input which isn't likely to occur". And yes, it's a pity I have to write 'almost', which means having to check the 99% that are false alarms.
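
To make "hogging CPU inside the regex engine" concrete, here is a toy anchored backtracking matcher (a sketch in the spirit of Pike's example matcher from "The Practice of Programming", not any real engine) that goes combinatorial on a short input:

  #include <stdio.h>

  /* Toy backtracking matcher: literal characters and '*' only,
     anchored at both ends. */
  static int matchhere(const char *re, const char *text);

  static int matchstar(int c, const char *re, const char *text)
  {
      do {                /* try every possible length for c* */
          if (matchhere(re, text))
              return 1;
      } while (*text != '\0' && *text++ == c);
      return 0;
  }

  static int matchhere(const char *re, const char *text)
  {
      if (re[0] == '\0')
          return *text == '\0';
      if (re[1] == '*')
          return matchstar(re[0], re + 2, text);
      if (*text != '\0' && re[0] == *text)
          return matchhere(re + 1, text + 1);
      return 0;
  }

  int main(void)
  {
      /* "a*a*...a*" vs "aaa...ab": the trailing 'b' can never match,
         so every way of splitting the 'a's between the stars is tried
         and rejected -- combinatorially many attempts. */
      printf("%d\n", matchhere("a*a*a*a*a*a*a*a*a*a*",
                               "aaaaaaaaaaaaaaaaaaaaaaaaab"));
      return 0;
  }

This burns on the order of seconds of CPU for a 26-byte input, and it does nothing but burn CPU, which is exactly why it's a different animal from a network (D)DoS.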

52. citrin_ru ◴[] No.44384859[source]
The problem is that DoS is the most vaguely defined category. If a library processes some inputs 1000x slower than average, one may claim that this is a DoS. What if it is just 10x slower? Where do you draw the line? What if the problem domain is such that some inputs just take more time and there is no way to 'fix' it? What if the input comes only from a trusted source?
53. comex ◴[] No.44384900{5}[source]
Yes, that’s the OP’s main point, but their choice of examples suggests that they are also thinking about trusted input.
54. tsimionescu ◴[] No.44385113{6}[source]
> The downstream users can then evaluate the issue and say it does/doesn't/kinda affects them with certainty and lower their internal impact.

Unfortunately that's not how it happens in practice. People run security scanners, and those report that you're using library X version Y which has a known vulnerability with a High CVSS score or whatever. Even if you provide a reasoned explanation of why that vulnerability doesn't impact your use case and you convince your customer's IT team of this, this is seen as merely a temporary waiver: very likely, you'll have the same discussion next time something is scanned and found to contain this.

The whole security audit system and industry is problematic, and often leads to huge amounts of busy work. Overly pessimistic CVEs are not the root cause, but they're still a big problem because of this.

55. fredilo ◴[] No.44385168[source]
> The maintainer could just write a wrapper around malloc that crashes on failure and replace all calls with the wrapper. It seems like an easy fix. Because almost no software can run where there is no heap memory so it makes no sense for the program to continue.

So could the reporter of the bug. Alternatively, he could add an `if (is null) { crash }` after the malloc. The fix is easy for anyone with some knowledge of the code base, and the reporter demonstrated that knowledge by finding the issue.

If a useful PR/patch diff had been provided with the report, I would have expected it to be merged right away.

However, instead of doing the obvious thing to actually solve the issue, the reporter hits the maintainer with this bureaucratic monster:

> We'd like to inform you that we are preparing publications on the discovered vulnerability.

> Our Researchers plan to release the technical research, which will include the description and details of the discovered vulnerability.

> The research will be released after 90 days from the date you were informed of the vulnerability (approx. August 5th, 2025).

> Please answer the following questions:

>

> * When and in what version will you fix the vulnerability described in the Report? (date, version)

> * If it is not possible to release a patch in the next 90 days, then please indicate the expected release date of the patch (month).

> * Please, provide the CVE-ID for the vulnerability that we submitted to you.

>

> In case you have any further questions, please, contact us.

https://gitlab.gnome.org/GNOME/libxml2/-/issues/905#note_243...

The main issue here is really one of tone. The maintainer has been investing his free time to altruistically move the state of software forward and the reporter is too lazy to even type up a tone-adjusted individual message. Would it have been so hard for the reporter to write the following?

> Thank you for your nice library. It is very useful to us! However, we found a minor error that unfortunately might be severely exploitable. Attached is a patch that "fixes" it in an ad-hoc way. If you want to solve the issue in a different way, could we apply the patch first, and then you refactor the solution when you find time? Thanks! Could you give us some insights on when after merging to main/master, the patch will end up in a release? This is important for us to decide whether we need to work with a bleeding edge master version. Thank you again for your time!

Ultimately, the message content is very similar. However, it feels completely different.

Suppose you are a maintainer without much motivation left, and you get hit with such a message. You will feel like the reporter is an asshole. (I'm not saying he is one.) Do you really care if he gets pwned via this bug? It takes some strength of character on the maintainer's side not to just leave the issue open out of spite.

replies(3): >>44385549 #>>44386251 #>>44387283 #
56. saurik ◴[] No.44385200[source]
> It is a valid bug.

But is it a high-severity security bug?

replies(1): >>44385763 #
57. viraptor ◴[] No.44385347[source]
Unfortunately this is timely news: https://news.sky.com/story/patient-death-linked-to-cyber-att...

> Denial of service does not result in ...

Turns out it can result in deaths. (This was DoS through ransomware.)

replies(1): >>44385410 #
58. holowoodman ◴[] No.44385410[source]
Security bugs always have a context-dependent severity. An availability problem in a medical device is far more severe than a confidentiality problem. In a cloud service, the same problems might switch their severity, downtime isn't deadly and just might affect some SLAs, but disclosing sensitive data will yield significant punishment and reputation damage.

That is why I think that "severity" and the usual kinds of vulnerability scores are BS. Anyone composing a product or operating a system has to do their own assessment, taking into account all circumstances.

In the context of the original article this means that it is hopeless anyway, and the maintainer's point of view is valid: in some context, everything is "EXTREMELY HIGH SEVERITY, PANIC NOW!". So he might as well not care and treat everything equally. An absolutely rational decision that I do agree with.

59. codedokode ◴[] No.44385434{3}[source]
The bug was about the case when malloc returns null, but the library doesn't check for it.
replies(1): >>44385961 #
60. holowoodman ◴[] No.44385469{6}[source]
> So when I say CIA nowadays, I'm actually thinking both mathematically irrespective of downstream application, and from the choose-your-own-compliance view.

That doesn't help anyone, because it is far too primitive.

A medical device might have a deadly availability vulnerability. That in itself doesn't tell you anything about the actual severity of the vulnerability, because the exploit path might need "the same physical access as pulling the power plug". So not actually a problem.

Or the fix might need a long downtime which harms a number of patients. So maybe a problem, but the cure would be worse than the disease.

Or the vulnerability involves sending "I, Eve Il. Attacker, identified by badge number 666, do want to kill this patient" to the device. So maybe not a problem because an attacker will be caught and punished for murder, because the intent was clear.

replies(1): >>44385602 #
61. sersi ◴[] No.44385549{3}[source]
> the reporter is too lazy to even type up a tone-adjusted individual message. Would it have been so hard for the reporter to write the following?

The reporter doesn't care about libxml2 being more secure, they only care about having a CVE-ID to brag about discovering a vulnerability and publishing it on their blog. If the reporter used the second message you wrote, they wouldn't get what they want.

62. cedws ◴[] No.44385556[source]
Denial of service is a security bug. It may seem innocuous in the context of a single library, but what happens when that library finds its way into core banking systems, energy infrastructure, and so on? It's a target ripe for exploitation by foreign adversaries. It has the same potential to harm people as other bugs.
replies(2): >>44385906 #>>44386120 #
63. lmeyerov ◴[] No.44385602{7}[source]
We're talking about different things. I agree CVE ratings and risk/severity/etc levels in general for third party libraries are awkward. I don't have a solution there. That does not mean we should stop reporting and tracking C+I+A violations - they're neutral, specific, and useful.

Risk, severity, etc. are careful terms that are typically defined contextually, relative to the application... yet CVEs do want some sort of prioritization level reported too, for usability reasons, so it feels shoehorned. Those words are useful in an operational context where a team can prioritize based on them, and agreed, a third-party rating must be reinterpreted for the application's own rating. CVE ratings are an area where it seems "something is better than nothing", and I don't think about it enough to have an opinion on what would be better.

Conversely, saying a library has a public method with an information flow leak is a statement that we can compositionally track (e.g., dataflow analysis). It's useful info that lets us stand on the shoulders of giants.

FWIW, in an age of LLMs, both kinds of information will be getting even more accessible and practical for many more people. I can imagine flipping my view on risk/severity to being more useful as the LLM can do the compositional reasoning in places the automated symbolic analyzers cannot.

64. fodkodrasz ◴[] No.44385743{8}[source]
AUTOSAR XMLs are mostly compile-time/integration-time toolchain metadata, as I remember it.

Yet this is off-topic for the libxml funding/bug debate.

For embedded mission-critical C, libxml is surely unsuitable, just like 99.99% of open source third-party code. It is also unneeded. If it crashes the app on the developer machine or in the build pipeline when it runs out of memory, who cares (from a safety point of view)? That has nothing to do with the availability of safety-critical systems in the car.

65. Quekid5 ◴[] No.44385763{3}[source]
Considering that it's Undefined Behavior, quite possibly.

EDIT: That said, I'm on the maintainer's side here.

replies(1): >>44386062 #
66. tzs ◴[] No.44385767{3}[source]
Aside from heavy use in the healthcare, finance, banking, retail, manufacturing, transportation, logistics, telecommunications, automotive, publishing, and insurance industries, w̶h̶a̶t̶ ̶h̶a̶v̶e̶ ̶t̶h̶e̶ ̶R̶o̶m̶a̶n̶s̶ who uses XML?
replies(1): >>44386472 #
67. fodkodrasz ◴[] No.44385769{9}[source]
Automotive companies pay big bucks to vendors who supply certified tools/libraries, because getting stuff certified is a lot of work and time. This also means that this stuff is often outdated and a pain to work with, yet those vendors are not expected to function as charities, as is often expected of FLOSS authors, especially when they release their code under BSD/MIT licenses and then get eaten by the sharks.
68. ◴[] No.44385894{4}[source]
69. sigilis ◴[] No.44385906[source]
The importance of the system in question is not a factor in whether something is a security bug for a dependency. The threat model of the important system should preclude it from using dependencies that are not developed with a similar security paradigm. libxml2 simply operates under a different regime than, as an arbitrary example, the nuclear infrastructure of a country.

The library isn't a worm, it does not find its way into anything. If the bank cares about security they will write their own, use a library that has been audited for such issues, sponsor the development, or use the software provided as is.

You may rejoin that it could find its way into a project as a dependency of something else. The same arguments apply at every level.

If those systems crash because they balanced their entire business on code written by randos who contribute to an open source project then the organizations in question will have to deal with the consequences. If they want better, they can do what everyone is entitled to: they can contribute to, make, or pay for something better.

70. bjourne ◴[] No.44385961{4}[source]
Correct, but the point is that it is difficult to get malloc to return null on Linux. Why litter your code with checks for de facto impossible scenarios?
replies(2): >>44386485 #>>44387412 #
71. gpderetta ◴[] No.44386062{4}[source]
> Considering that it's Undefined Behavior, quite possibly.

Is it though? Certainly it is according to the C and C++ standards, but POSIX adds:

> References to unmapped addresses shall result in a SIGSEGV signal

While time-traveling UB is a theoretical possibility, in practice POSIX-compliant compilers won't reorder around potentially trapping operations (they will do the reverse: they might remove a null check made redundant by a prior potentially trapping dereference).

A real concern is if a null pointer is dereferenced with a large attacker-controlled offset that can avoid the trap, but that's more of an issue of failing to bound check.
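
A minimal illustration of that reverse direction (a sketch; the exact behavior depends on the compiler and on flags such as gcc's -fdelete-null-pointer-checks, which is enabled at -O2):

  int first_field(int *p)
  {
      int x = *p;       /* UB in C if p == NULL; POSIX promises SIGSEGV */
      if (p == NULL)    /* typically deleted as dead code: p was        */
          return -1;    /*   already dereferenced, so "p != NULL"       */
      return x;
  }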

replies(1): >>44386336 #
72. thinkharderdev ◴[] No.44386086[source]
The CIA triad is a framework for threat modeling, not a threat model in and of itself. And what those specific terms mean will also be very system-specific.
73. arp242 ◴[] No.44386120[source]
By that standard almost any bug could be considered a "security bug", including things like "returns error even though my XML is valid" or "it parses this data wrong".
74. rwmj ◴[] No.44386251{3}[source]
If someone had reported that on a project I maintain, I'd have told them to get outta here, in somewhat less polite language. They're quite clearly promoting their own company / services and don't care in the slightest about libxml2.
replies(1): >>44387804 #
75. ynik ◴[] No.44386336{5}[source]
Under your interpretation, neither gcc nor clang are POSIX compliant. Because in practice all these optimizing compilers will reorder memory accesses without bothering to prove that the pointers involved are valid -- the compiler just assumes that the pointers are valid, which is justified because otherwise the program would have undefined behavior.
replies(2): >>44386394 #>>44386634 #
76. gpderetta ◴[] No.44386394{6}[source]
Actually you are right, what I said about reordering is nonsense. The compiler will definitely reorder non-aliasing accesses. There are much weaker properties that are preserved.
77. cogman10 ◴[] No.44386472{4}[source]
I think you (and others) are misconstruing what I'm saying.

I'm not saying XML is unused.

I'm saying that the specific spaces where its use can cause security problems, from things like a DDoS, are rare.

A legacy backend system that consumes XML docs isn't at risk of a malicious attacker injecting DDoS docs.

When XML is used for data interchange, it's typically only in circumstances where trusted parties are swapping XML docs. Where it's not typically being used is the open Internet. You aren't going to find many new REST endpoints emitting or consuming XML.

And the reason it's being used is primarily legacy. The format and parser are static. Swapping them out would be disruptive and gives few benefits.

That's what it means for something to become increasingly irrelevant: new use slows or stops, and development is primarily on legacy systems.

78. daef ◴[] No.44386485{5}[source]
in systems level programming (the introductory course before operating systems in our university) this was one of the first misconceptions to be eradicated. you cannot trust malloc to return null.
79. vbezhenar ◴[] No.44386490{3}[source]
It's very easy to make malloc return NULL:

  % ulimit -v 80000
  
  % cat test.c
  #include <stdio.h>
  #include <stdlib.h>
  
  int main(void) {
    char *p = malloc(100000000);  /* ~100 MB, above the ~80 MB limit */
    printf("%p\n", (void *)p);
  }
  
  % cc test.c
  
  % ./a.out
  (nil)
80. somat ◴[] No.44386634{6}[source]
I am not so sure. Assuming the program does not do anything undefined is sort of the worst possible take on leaving the behavior undefined in the first place. I mean, the behavior was left undefined so that "something" could be done; the language standard just does not know what that "something" is. Hell, the compiler could do nothing and that would make more sense.

But to make optimizations pretending it is an invariant that can't happen, when the specification clearly says it could happen? That's wild, and I would argue out of specification.

81. andrewaylett ◴[] No.44386885[source]
Many systems have (whether you like the idea or not) effectively infallible allocators. If malloc won't ever return null, there's not much point in checking.
82. sidewndr46 ◴[] No.44387099[source]
In the event that malloc returns NULL and it isn't checked, isn't the program going to crash anyway? I usually just use a macro like "must_malloc" that does this. The outcome is the same, I would think; it's mostly a difference of where it happens.
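
A sketch of that pattern (the helper names here are hypothetical): the same crash either way, but this version reports where the allocation failed instead of faulting somewhere later.

  #include <stdio.h>
  #include <stdlib.h>

  /* Hypothetical must_malloc: abort with location info on failure. */
  #define must_malloc(n) must_malloc_at((n), __FILE__, __LINE__)

  static void *must_malloc_at(size_t n, const char *file, int line)
  {
      void *p = malloc(n);
      if (p == NULL) {
          fprintf(stderr, "%s:%d: malloc(%zu) failed\n", file, line, n);
          abort();
      }
      return p;
  }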
83. yrro ◴[] No.44387283{3}[source]
If I received an email like that I'd reply with an invoice.
84. codedokode ◴[] No.44387412{5}[source]
First, Linux has thousands of settings that could affect this; second, the library probably runs not only on Linux.
85. Pet_Ant ◴[] No.44387804{4}[source]
I mean, no security researchers do. It's very much like capitalists: they aren't trying to improve society, but by pursuing their own private incentives they end up with behaviour that benefits the commons. Sometimes we need regulations around that in the marketplace, and that's what the FTC is. So we need an OSS-social-contract version of that.

It's kind of like the enshittification of bug reports. The best way to deal with it is probably denying CVE numbers to the kind of low-hanging-fruit findings that could reasonably be produced by a linter, to remove the incentive.

Reminds me of students juicing their PRs by making changes to typos in documentation and comments just to put it on their resumes.

replies(1): >>44390983 #
86. TheCoelacanth ◴[] No.44388327{4}[source]
But is it used in scenarios where the person generating the XML is untrusted?

I'm aware of plenty of usage of SOAP, but only between companies that have contractual relationships with each other and who could easily sue each other if one of them tried to exploit a security bug.

That greatly mitigates the risk of a security bug being exploited, especially something like a DoS attack that is easily noticed.

87. Tadpole9181 ◴[] No.44389612[source]
A DoS bug is not important for almost anyone. You probably aren't targeted, you probably sanitize inputs correctly anyway, and there's not a huge impact potential either.

But a hospital? A bank? A stock broker? Some part of the military's stack?

Context is important, and what is innocuous to you may kill someone or cost millions if exploited in the wild elsewhere.

It would be profoundly difficult for a machine or a convention to understand everyone's context and be able to frame it correctly, so it's left to the developers to review what the security issues are and act accordingly.

I do agree the system should be improved and there's a lot of spam, but your blasé attitude toward what is or is-not a security issue seems off the mark.

88. kiitos ◴[] No.44390124[source]
Particularly for [1], I strongly agree with you.

This is so frustrating.

The claimed CWE-125 [2] has a description that says "The product reads data past the end, or before the beginning, of the intended buffer." -- which empirically does not happen in the Go Markdown parser issue. It panics, sure, but that doesn't result in any reads past the end, or before the beginning, of the intended buffer. Said another way, *there is no out-of-bounds read* happening here at all.

These kinds of false-positive CVE claims are super destructive to the credibility of the CVE system in general.

--

[1] https://github.com/gomarkdown/markdown/security/advisories/G...

[2] https://cwe.mitre.org/data/definitions/125.html

89. bastawhiz ◴[] No.44390156{6}[source]
Who should be liable? The person who sells you the software? Or the person who put some code on GitHub that the first guy used?
90. bastawhiz ◴[] No.44390221{6}[source]
> The downstream users can then evaluate the issue and say it does/doesn't/kinda affects them with certainty and lower their internal impact.

If you score for the worst case lowest common denominator, it biases nearly everything towards Critical, and the actually critical stuff gets lost in a sea of prioritization. It's spam. If I get fifty emails about critical issues and two of them are really, actually critical, I'm going to miss far more important ones than if I only got ten emails about critical issues.

If we all had infinite time and motivation, this wouldn't be a problem. But by being all-or-nothing purists, everything is worse in general.

91. skissane ◴[] No.44390292[source]
Example I observed firsthand: a CVE was filed because GNU C Library had a memory corruption bug. Yes, the memory corruption bug was real, but the glibc core developers did not agree that it was a security issue, and I think they are right: https://sourceware.org/bugzilla/show_bug.cgi?id=29444

Why? Because the memory corruption only happened if you manually called a semi-undocumented API. And that API was only there to support the profiler (gprof), so it being called manually almost never happened and wasn’t officially supported, in normal use the compiler would insert calls to it automatically in profiler builds and in normal production builds the undocumented API would never be called. So in practice this is impossible to exploit, except for apps which do weird things which almost nobody does (e.g. use profiler builds in production, and then expose a REST API to let a remote user stop/start the profiler at runtime)

And yet, now it is listed as a real vulnerability in umpteen security vendor databases. Because the CVE database just accepts anything as a vulnerability if someone claims it is, and if the developers disagree, they just mark it as “Disputed”. But then in my experience a lot of these vendors don’t treat Disputed vulnerabilities any differently, their code analysis tools will still flag them as a “security risk” even though the vast majority of them are BS

92. bastawhiz ◴[] No.44390342{4}[source]
> But when you talk about URL parsing in a linter or a regexp in logging code, I think you're implying that the bugs are unimportant, in part, because the code only handles trusted input.

You proved my point, though. URL parsing is scary and it's a source of terrible security bugs. Not in a linter! Does it even have a means of egress? Is someone fetching the misparsed URLs from the linter's output? How could you even deliver untrusted data to it?

In isolation, the issue is Bad On Paper. In context, the ability to actually exploit it meaningfully is vanishingly small if it even practically exists.

> Countless exploits have arisen when some parser never designed for untrusted input ended up being exposed to it. Perhaps that's not the parser's fault. But it always happens.

Yes! The CVE should be for the tool that trusted code to do something it wasn't expected to do, not for the code that failed in unexpected circumstances. That's the point.

93. croemer ◴[] No.44390983{5}[source]
Nothing wrong with genuine typo fix PRs