
103 points | voxadam | 1 comment
pedrozieg No.46215560
CVE counts are such a good example of “what’s easy to measure becomes the metric”. The moment Linux became a CNA and started issuing its own CVEs at scale, it was inevitable that dashboards would start showing “Linux #1 in vulnerabilities”, when what actually changed was the paperwork, not the code. A mature process whose maintainers file CVEs for real bugs looks “less secure” than a project that quietly ships fixes and never bothers with the bureaucracy.

If Greg ends up documenting the tooling and workflow in detail, I hope people copy it rather than the vanity scoring. For anyone running Linux in production, the useful question is “how do I consume linux-cve-announce and map it to my kernels and threat model”, not “is the CVE counter going up”. Treat CVEs like a structured changelog feed, not a leaderboard.
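
To make the “structured changelog feed” idea concrete, here is a minimal sketch of that consumption. It assumes a local clone of the kernel CVE team’s record repository (git.kernel.org’s vulns.git, which publishes CVE JSON records under cve/published/) and the CVE JSON 5 field names used below; the target version and repo path are placeholders, and git-hash version ranges are deliberately skipped, so treat it as an illustration rather than a tool:

    #!/usr/bin/env python3
    """Sketch: map published kernel CVE records onto one running kernel.

    Assumes a clone of https://git.kernel.org/pub/scm/linux/security/vulns.git
    with CVE JSON records under cve/published/. Field names follow the
    CVE JSON 5 schema; verify against the real feed before relying on this.
    """
    import json
    import sys
    from pathlib import Path

    TARGET_KERNEL = "6.1.55"  # placeholder: the kernel you actually run

    def version_tuple(v):
        # Crude parse; real kernel versions (rc tags, git hashes) need more care.
        return tuple(int(p) for p in v.split(".") if p.isdigit())

    def affects(record, target):
        """True if any numeric affected-version range covers `target`."""
        t = version_tuple(target)
        cna = record.get("containers", {}).get("cna", {})
        for affected in cna.get("affected", []):
            for ver in affected.get("versions", []):
                if ver.get("status") != "affected":
                    continue
                start = version_tuple(ver.get("version", "0"))
                lt = version_tuple(ver.get("lessThan", ""))
                le = version_tuple(ver.get("lessThanOrEqual", ""))
                # Git-hash ranges parse to empty tuples and fall through here;
                # a real tool would resolve those against git history.
                if lt and start <= t < lt:
                    return True
                if le and start <= t <= le:
                    return True
        return False

    def main(repo):
        for path in sorted(Path(repo, "cve", "published").rglob("*.json")):
            record = json.loads(path.read_text())
            if affects(record, TARGET_KERNEL):
                print(record.get("cveMetadata", {}).get("cveId", path.stem))

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "vulns")

Run against the clone ("python3 cve_map.py vulns") and you get the subset of the feed that plausibly applies to your kernel, which is the question that matters, not the length of the full list.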

replies(3): >>46217577 >>46217767 >>46219692
Sohcahtoa82 No.46219692
The problem I have is that the hyperfixation on CVE counts has turned the entire vulnerability management industry into Boy-Who-Cried-Wolf-as-a-Service.

99% of CVEs are essentially unexploitable in practice. If you're just concerned with securing your web apps and don't use WordPress, the number of CVEs per year you actually have to worry about is in the single digits, possibly zero. Yet Wiz will happily tell you about the hundreds of CVEs living in your environment because it's been a month since you ran "apt upgrade".
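
That triage is mostly mechanical, which makes the noise more annoying. As a hedged sketch: intersect the scanner's CVE list with CISA's Known Exploited Vulnerabilities feed (the URL and the "vulnerabilities"/"cveID" fields below match the published KEV JSON, but verify the schema; scanner_export.txt is a hypothetical export, one CVE ID per line):

    #!/usr/bin/env python3
    """Sketch: shrink a scanner's CVE list to entries with known exploitation."""
    import json
    import urllib.request

    KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
               "known_exploited_vulnerabilities.json")

    def known_exploited():
        # CISA KEV catalog: a JSON object with a "vulnerabilities" array.
        with urllib.request.urlopen(KEV_URL) as resp:
            feed = json.load(resp)
        return {v["cveID"] for v in feed["vulnerabilities"]}

    def main():
        with open("scanner_export.txt") as f:  # hypothetical scanner output
            flagged = {line.strip() for line in f if line.strip()}
        actionable = sorted(flagged & known_exploited())
        print(f"{len(flagged)} flagged, {len(actionable)} known-exploited:")
        for cve in actionable:
            print("  " + cve)

    if __name__ == "__main__":
        main()

KEV is a floor, not a ceiling (EPSS scores or reachability analysis would refine it further), but even this crude filter usually turns "hundreds of CVEs" into a reviewable handful.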

replies(1): >>46232029
devwastaken No.46232029
the reason we needed CVEs in the first place is that “99% are unexploitable” is a fallacy. memory and logic bugs are time bombs. you don't need one big exploit, only a system put together poorly enough to have the bugs at all.