
277 points | jwilk | 1 comment
arp242 ◴[] No.44382233[source]
A lot of these "security bugs" are not really security bugs in the first place. Denial of service does not result in people's bank accounts being emptied or nude selfies being spread all over the internet.

Things like "panics on certain content", as in [1] or [2], are "security bugs" now. By that standard anything that fixes a potential panic is a "security bug". I've probably fixed hundreds if not thousands of "security bugs" in my career by that standard.
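For illustration, a minimal sketch of what this class of bug usually looks like: a hypothetical Python parser (not the code from either linked advisory) that assumes well-formed input and crashes, the rough equivalent of a Go or Rust panic, on content it didn't expect. The fix is just ordinary input hardening.

```python
from typing import Optional, Tuple

def parse_header(line: str) -> Tuple[str, str]:
    # Buggy version: assumes a ':' is always present.
    # On "certain content" (a line without ':'), the unpack below
    # raises ValueError -- the panic-on-input pattern in miniature.
    key, value = line.split(':', 1)
    return key.strip(), value.strip()

def parse_header_fixed(line: str) -> Optional[Tuple[str, str]]:
    # Hardened version: treats malformed input as data, not a crash.
    key, sep, value = line.partition(':')
    if not sep:
        return None
    return key.strip(), value.strip()
```

Whether fixing that unpack deserves a CVE, or is just a routine bug fix, is exactly the question being argued here.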

That barely qualifies as a "security bug", yet it's rated "6.2 Moderate" and "7.5 HIGH". To say nothing of the gazillion "high severity" "regular expression DoS" nonsense and whatnot.
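For context on the "regular expression DoS" class: the textbook trigger is a nested quantifier that backtracks exponentially on a near-miss input. This is a generic illustration in Python, not the pattern from any advisory mentioned above.

```python
import re
import time

# Classic catastrophic-backtracking pattern (nested quantifiers);
# illustrative only, not taken from any real library.
EVIL = re.compile(r'^(a+)+$')

def time_match(n: int) -> float:
    """Time EVIL against a near-miss input: n 'a's plus a trailing 'b'."""
    s = 'a' * n + 'b'
    start = time.perf_counter()
    assert EVIL.match(s) is None  # fails, but only after ~2**n attempts
    return time.perf_counter() - start

# Each extra 'a' roughly doubles the work, so a few dozen bytes of
# attacker-controlled input can pin a CPU core -- that's the bug class.
t_small, t_big = time_match(14), time_match(20)
```

Whether any given instance deserves "HIGH" depends entirely on whether attacker-controlled input can ever reach the pattern, which is the crux of the disagreement in this thread.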

And the worst part is all of this makes it so much harder to find actual high-severity issues. It's not harmless spam.

[1]: https://github.com/gomarkdown/markdown/security/advisories/G...

[2]: https://rustsec.org/advisories/RUSTSEC-2024-0373.html

replies(13): >>44382268 #>>44382299 #>>44382855 #>>44384066 #>>44384368 #>>44384421 #>>44384513 #>>44384791 #>>44385347 #>>44385556 #>>44389612 #>>44390124 #>>44390292 #
viraptor ◴[] No.44382855[source]
> Denial of service is not resulting in ...

DoS results in whatever the system happens to do. It may well result in bad things happening, for example stopping AV from scanning new files, breaking rate limiting systems to allow faster scanning, hogging all resources on a shared system for yourself, etc. It's rarely a security issue in isolation, but libraries are never used in isolation.

replies(2): >>44383029 #>>44383134 #
bastawhiz ◴[] No.44383134[source]
An AV system stopping because of a bug in a library is bad, but that's not because the library has a security bug. It's a security problem because the system itself does security. It would be wild if any bug that leads to a crash or a memory leak was a "security" bug because the library might have been used by someone somewhere in a context that has security implications.

A bug in a library that does rate limiting arguably is a security issue because the library itself promises to protect against abuse. But if I make a library for running Lua in redis that ends up getting used by a rate limiting package, and my tool crashes when the input contains emoji, that's not a security issue in my library if the rate limiting library allows emails with punycode emoji in them.

"Hogging all of the resources on a shared system" isn't a security bug, it's just a bug. Maybe an expensive one, but hogging the CPU or filling up a disk doesn't mean the system is insecure, just unavailable.

The argument that downtime or runaway resource use counts as a security issue, but only when the problem is in someone else's code, is some Big-Brained CTO way of passing the buck onto open source software. If that were true, Postgres autovacuuming under unfortunate default configuration would be up there with Heartbleed.

Maybe we need a better way of alerting downstream users of packages when important bugs are fixed. But jamming these into CVEs and giving them severities above 5 is just alert noise and makes it confusing to understand which issues an organization should actually care about and fix. How do I know whether the quadratic-time regexp in a string formatting library used in my logging code is even going to matter? Is it more important than a bug in the URL parsing code of my linter? It's impossible to say, because that responsibility was passed all the way downstream to the end user. Every single person needs to make decisions about what to upgrade and when, which is an outrageous status quo.

replies(3): >>44383193 #>>44383817 #>>44384248 #
viraptor ◴[] No.44383193[source]
> An AV system stopping because of a bug in a library is bad, but that's not because the library has a security bug.

(And the other examples.) That's the root-cause fallacy. The library had an issue, the system had an issue, and together they resulted in a problem for you. Some issues are more likely than others to result in security problems, so we classify them as such. We'll always be dealing with probabilities here, not clear lines. Otherwise we just end up playing a blame game: "sure, this had a memory overflow, but it's the package's fault for not enabling protections that would downgrade it to a crash", "no, it's the deployment's fault for not limiting that exploit to just this user's data partition", "no, it's the OS's fault for not implementing detailed security policies for every process", ...

replies(1): >>44384117 #
bastawhiz ◴[] No.44384117[source]
But it's not treated as dealing in probabilities. The CVEs (not that I think they're even worthwhile) are given scores that ignore the likelihood of an issue being used in a security-sensitive context; they're scored for the worst-case scenario. And if we're dealing with probabilities, that puts less onus on the people who actually do things where security matters, and spams everyone else for whom those probabilities are unrealistic, which is the huge majority of cases.

This is worse for essentially everyone except the people who should be doing more diligence around the code they use. If you need code to be bug-free (setting aside that "bug-free" code is a delusion), you're just playing the blame game when you don't put protections in place. And I'm not talking about memory safety; I'm talking about a regexp with pathological edge cases, or a panic on user input. If you're not handling unexpected failure modes from code you didn't write and inspect, why does that become a security issue where the onus is on the library maintainer?

replies(1): >>44384590 #
viraptor ◴[] No.44384590{3}[source]
The score assigned to an issue has to be the worst-case one, because whoever is assessing it can't know how people use the library. Downstream users can then evaluate the issue, say with certainty that it does/doesn't/kinda affects them, and lower their internal impact rating. People outside that system would only be guessing. And you really don't want to guess "nobody would use it this way, it's fine" if it turns out some huge private deployment does.
replies(2): >>44385113 #>>44390221 #
tsimionescu ◴[] No.44385113{4}[source]
> The downstream users can then evaluate the issue and say it does/doesn't/kinda affects them with certainty and lower their internal impact.

Unfortunately that's not how it happens in practice. People run security scanners, and those report that you're using library X version Y, which has a known vulnerability with a High CVSS score or whatever. Even if you provide a reasoned explanation of why that vulnerability doesn't impact your use case, and you convince your customer's IT team of it, that's treated as merely a temporary waiver: very likely you'll have the same discussion the next time something is scanned and found to contain that library.
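The waiver treadmill described above can at least be made auditable. A minimal sketch of the idea, an assessed-findings allowlist where every waiver carries a justification and an expiry date so it gets re-reviewed rather than re-litigated from scratch; the field names and format are illustrative, not any real scanner's configuration:

```python
from datetime import date

# Hypothetical team-maintained waiver list: each entry records why a
# scanner finding was judged not to apply, and when that judgment lapses.
WAIVERS = {
    "CVE-2024-0001": {
        "reason": "ReDoS pattern unreachable: input is length-capped "
                  "before the regex ever runs",
        "expires": date(2025, 1, 1),
    },
}

def is_waived(cve_id: str, today: date) -> bool:
    """True if the finding has a current (unexpired) waiver on file."""
    waiver = WAIVERS.get(cve_id)
    return waiver is not None and today <= waiver["expires"]
```

Several real scanners support ignore files along these lines; the point is that the expiry forces the periodic re-review the parent describes, instead of leaving waivers as informal one-off conversations.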

The whole security audit system and industry is problematic, and often leads to huge amounts of busy work. Overly pessimistic CVEs are not the root cause, but they're still a big problem because of this.