Programming, philosophy, pedaling.

More thoughts on vulnerabilities and misaligned incentives

Mar 20, 2024     Tags: oss, security    

About 15 months ago, I posted a rant about misaligned incentives in the vulnerability triage and classification ecosystem [1], with particular attention given to low-impact, high-noise categories like ReDoS.

Since then, I’ve had more time to think about the overall role of the CVE system and the parties that participate in it. I haven’t changed my mind about the overall (limited) value of categories like ReDoS, but I do have new thoughts (brought about by recent developments) that color my overall view of vulnerability reporting, triaging, and the incentives that continue to drive money and engineering effort into the space.

This post is an attempt to flesh out those thoughts [2]. No particular degree of cogency is promised.

Strategic ambiguity in the CVE system

When I wrote my original rant, one of my assumptions was a common understanding of the purpose of the CVE system: I incorrectly understood (and believed everybody to understand) identifier assignment to be a normative claim about the existence of a vulnerability and, consequently, a normative claim about that vulnerability’s severity [3].

From that assumption, my objection was a technical one: I did not (and do not) believe that just the presence of a potentially intensive regular expression constitutes a vulnerability.
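To make the category concrete, here is a minimal sketch (my own illustration, in Python; not a pattern from any specific report) of the kind of construct behind ReDoS reports. The point of the sketch is that the pattern is only slow on adversarial near-miss inputs, which is exactly why its mere presence says so little:

```python
import re
import time

# Classic "catastrophic backtracking" shape: nested quantifiers over the
# same character. (A textbook illustration, not a pattern from any report.)
PATTERN = re.compile(r"^(a+)+$")

def timed_match(s: str) -> float:
    """Seconds spent attempting to match s against PATTERN."""
    start = time.perf_counter()
    PATTERN.match(s)
    return time.perf_counter() - start

# A matching input succeeds almost instantly; a near-miss ("b" at the end)
# forces the engine to try exponentially many ways of splitting the run of
# "a"s between the two "+" quantifiers before it can fail.
fast = timed_match("a" * 20)         # matches on the first attempt
slow = timed_match("a" * 20 + "b")   # ~2**20 backtracking paths
print(f"match: {fast:.6f}s, near-miss: {slow:.6f}s")
```

A non-backtracking engine (RE2-style) runs both inputs in linear time, so whether this is even a performance problem, much less a vulnerability, depends entirely on the engine and on whether attackers control the input.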

Over the last year, I’ve learned that this is not the only way people (including serious security people who I respect!) understand vulnerability identifiers.

Another understanding of CVE IDs, one with substantial historical precedent, is as merely stable identifiers. In this view, CVE IDs and other identifiers do not intrinsically represent a claim about a vulnerability’s existence, but are instead “just” unique handles that enable alerting, minimize confusion, and reduce duplicated effort among busy and distracted engineers.

Under this understanding, vulnerability feeds are not responsible for the factual contents of their reports: the responsibility of the feed is solely to uniquely identify and distribute a common data format, one that individual consuming parties are entirely responsible for interpreting and assigning importance to.
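A toy example of what this division of labor looks like in practice. The record below loosely follows the shape of an OSV-style advisory; every value (the ID, the package name, the CVSS vector) is a placeholder of my choosing, and `is_relevant` is a hypothetical consumer-side helper, not part of any feed:

```python
# An illustrative, OSV-style advisory record. All values are placeholders,
# not a real advisory. Under the "stable identifier" view, the feed's only
# job is to mint "id" and ship the record in a common shape; deciding
# whether it matters is entirely the consumer's problem.
advisory = {
    "id": "CVE-2024-00000",  # the stable, unique handle
    "summary": "Possible ReDoS in example-package",
    "affected": [
        {
            "package": {"ecosystem": "PyPI", "name": "example-package"},
            "versions": ["1.0.0"],
        }
    ],
    "severity": [
        # Severity metadata rides along, but the feed makes no claim
        # about whether it is meaningful for any given consumer.
        {"type": "CVSS_V3", "score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L"}
    ],
}

def is_relevant(record: dict, installed: set[str]) -> bool:
    """A consumer's (toy) interpretation step: the feed doesn't do this."""
    return any(a["package"]["name"] in installed for a in record["affected"])

print(is_relevant(advisory, {"example-package"}))
print(is_relevant(advisory, {"some-other-package"}))
```

Everything interesting (deduplication, severity judgment, applicability) happens in that last interpretive step, which is precisely the step the feed disclaims responsibility for.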

This interpretation has desirable justifying properties:

I like these properties. However, in 2024, they largely ring hollow to me:

In effect, the CVE system benefits from a position of strategic ambiguity: it can be a dumb source of unique identifiers when its supporters (and some detractors [8]) need it to be, and it can also be a rich source of severity metadata when bug bounty programs, reputation seekers, and security vendors so need it to be.

The CVE system’s participants don’t universally encourage this ambiguity, but none appear to discourage it either. After all, there are very few incentives in place for NVD, MITRE, AISuppySecTrustShieldly™, &c. to dispel any ambiguity here: ambiguity maintains industry and regulatory interest in the problem space, ensuring ongoing funding and engagement [9] in what ultimately boils down to a unique identifier.


So, who’s responsible?

In my original post, I split the responsibility three ways, suggesting:

  1. That security feeds outright ban or curtail low-value vulnerability categories like ReDoS, removing the public venue for bogus reports;
  2. That software supply chain companies become better participants in the ecosystem, taking a more curatorial and throughput-oriented approach rather than a volume-oriented one;
  3. That open source developers “just say no,” and outright refuse to acknowledge or accept reports that have no real security impact, even when doing so is harder (in terms of political or social pressure) than yielding.

I believe that these responsibilities largely hold true and, critically, have paid off over the past year: there has been growing pushback against the incentive structures that got us into this mess, including from individual maintainers.

At the same time, it’s no longer clear to me that there is a sufficient common understanding of the purpose of the CVE system against which to justify systematic changes to how we approach vulnerability identification, triage, curation, &c.

In other words: so long as the CVE system’s participants voluntarily maintain the system’s strategic ambiguity (and have a financial interest in doing so), things will not systematically improve.

Forcing the hand?

One way to “attack” strategic ambiguity is to force it to reveal its hand by contriving a situation that cannot be made simultaneously stable and ambiguous.

This is the approach that Linux’s new CNA program appears to be taking:

    Note, due to the layer at which the Linux kernel is in a system, almost any bug might be exploitable to compromise the security of the kernel, but the possibility of exploitation is often not evident when the bug is fixed. Because of this, the CVE assignment team are overly cautious and assign CVE numbers to any bugfix that they identify. This explains the seemingly large number of CVEs that are issued by the Linux kernel team.

In (rightfully!) assigning a CVE to most bugfixes, the Linux kernel CNA has forced security vendors out of a state of comfortable, stable ambiguity: no curation can be assumed, and downstream consumers are (rightfully!) annoyed at the deluge of irrelevant reports that should have been filtered by their vendors.

By the reasoning in my previous post, I ought to consider this a disaster: not unlike ReDoS reports, this flood from the Linux CNA risks security and alert fatigue. But instead, I’m (cautiously) optimistic: unlike ReDoS and other largely chaff categories, a rise in volume here is not noise, and has the potential to counter the incentives that brought us to this state.

That leaves us in a funny state, one where things will get worse for good reasons before they can possibly get better. That’s not great, but it does leave open room for systematic improvement where none seemed possible before. At least, that’s how I’m thinking about it now.

  1. …and nascent for-profit industry. 

  2. So that I can perhaps revisit them again. 

  3. At the very least, in the most basic sense of “you wouldn’t call it a vulnerability if it wasn’t even minimally severe.” 

  4. Of the “Nigerian prince” variety, not the “ReDoS vulnerability in black” variety. 

  5. To the extent that “CVE score” is an intelligible thing that security people will say, despite CVE IDs not having their own scores. 

  6. Which is not to say that security is not still ignored all the time, only that companies now (in part thanks to NIST/NVD!) find it much harder to sweep serious breaches under the rug. 

  7. “Shift-left,” “DevSecOps,” blah blah. 

  8. In the sense that, as a mere identifier, a CVE ID is never itself a cause for concern. 

  9. To be clear, I don’t begrudge agencies, FFRDCs, startups, &c. this: a sense of urgency is often the only way to obtain essential funding. But that doesn’t make the incentive structure itself good.