Why CVEs Are an Incentives Problem

It's time to rethink the pivotal role incentives play in shaping how software vulnerabilities are found and disclosed. Scoring guidance that better reflects real-world risk, plus a tiered verification process to establish potential impact, could slow the flood of misleading submissions.

Paul Asadoorian, Principal Security Evangelist, Eclypsium

May 29, 2024



Two decades ago, the economist Steven Levitt and New York Times reporter Stephen Dubner published Freakonomics, which applied economic principles to various social phenomena. In essence, they argued that to understand how people make decisions, it's crucial to consider what incentives they're responding to. Through an assortment of sociological examples, they showed how incentives often lead to unforeseen outcomes, many of them counterproductive to the original intent.

I've been thinking about some of these unintended consequences in the context of a growing problem faced by all of us in cybersecurity: how the fast-rising tide of software vulnerabilities, tracked as common vulnerabilities and exposures (CVEs), is reported and maintained. Last year saw a record 28,902 published CVEs, almost 80 new vulnerabilities every day and a 15% increase over 2022. These software flaws impose a real cost, with two-thirds of security organizations reporting an average backlog of more than 100,000 vulnerabilities and estimating that, given this overwhelming volume, they're able to patch fewer than half of them. The count of published CVEs is also just one metric, because not all vulnerabilities receive a CVE; that decision is left to the software vendor, and in some cases a vulnerability is fixed without any CVE being issued.
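The per-day and year-over-year figures above follow directly from the annual count. A quick sanity check (using only the numbers stated in this article):

```python
# Sanity-check the article's figures; only cves_2023 comes from the text.
cves_2023 = 28_902
per_day = cves_2023 / 365
print(round(per_day, 1))        # roughly 79.2, i.e. "almost 80 every day"

# A 15% year-over-year increase implies roughly this many 2022 CVEs:
implied_2022 = cves_2023 / 1.15
print(round(implied_2022))      # on the order of 25,000
```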

Looking at those figures, one might think the sheer volume of vulnerabilities points to a serious issue with the state of software security today. Yet the numbers themselves don't tell the whole story. The growing number of CVEs stems from two factors: We've gotten better at discovering vulnerabilities, and there are insufficient safeguards governing how CVEs are created and tracked. We must also consider the incentive structure, particularly who is motivated to report vulnerabilities, and to rate their severity, accurately or otherwise. So it's worth asking: In what ways does the incentive structure within the cybersecurity ecosystem influence the reporting and addressing of vulnerabilities?

Misaligned Incentives

While the system by which CVEs are assigned and scored is widely used and accepted, it's not without its fair share of problems. Established in 1999 by MITRE, the CVE system serves as a trusted clearinghouse for the security industry, offering a standardized method for identifying and cataloging software vulnerabilities. By providing unique identifiers for security weaknesses found in commercial and open source software, CVEs enable enterprises and software vendors to effectively prioritize and mitigate vulnerabilities, thereby reducing the opportunity for threat actors to exploit these flaws.

However, the incentive mechanisms behind the assignment and scoring of CVEs aren't without significant challenges that can undermine the effectiveness of this system. Some of these challenges include:

  • Gaming for reputation: The quest for reputation or "clout" within the cybersecurity community has led some security researchers to game the CVE system. The motivation to discover and report vulnerabilities, driven by the desire for recognition or professional advancement, sometimes results in a focus on quantity over quality of submissions, which can lead to the reporting of trivial or noncritical issues that clutter the system and divert attention from more severe vulnerabilities.

  • Lack of accountability: The ability to file CVEs anonymously, or with minimal evidence supporting the vulnerability claim, introduces a layer of opacity that can be problematic. While anonymity can protect researchers, it also opens the door for submissions that may be erroneous, exaggerated, or even maliciously intended to mislead or cause harm. This lack of accountability challenges the integrity of the CVE database and necessitates rigorous verification processes to maintain trust in the system.

  • Measuring the wrong metric: The Common Vulnerability Scoring System (CVSS), which provides a numerical score to indicate the severity of vulnerabilities, has come under criticism for its lack of correlation with the actual risk posed by vulnerabilities in real-world environments. Because the CVSS score doesn't always accurately reflect the exploitability or impact of a vulnerability within a specific context, we increasingly see situations where high-scoring vulnerabilities may receive undue attention while more critical, exploitable flaws in certain environments often get deprioritized.
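The "measuring the wrong metric" problem can be sketched concretely. The toy data below is hypothetical, but it shows how a CVSS-only ordering surfaces an unexploited "critical" finding ahead of lower-scored flaws that are actually being exploited (e.g., those listed in CISA's KEV catalog):

```python
# Toy sketch with hypothetical CVE entries: ranking by evidence of
# real-world exploitation first, then CVSS, instead of by CVSS alone.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float   # CVSS base score, 0.0-10.0
    in_kev: bool  # listed in CISA's Known Exploited Vulnerabilities catalog?

vulns = [
    Vuln("CVE-2024-0001", cvss=9.8, in_kev=False),  # "critical" on paper only
    Vuln("CVE-2024-0002", cvss=7.2, in_kev=True),   # actively exploited
    Vuln("CVE-2024-0003", cvss=5.4, in_kev=True),   # actively exploited
]

# CVSS-only ordering puts the unexploited 9.8 first...
by_cvss = sorted(vulns, key=lambda v: -v.cvss)

# ...while an exploitation-aware ordering surfaces the KEV entries first.
by_risk = sorted(vulns, key=lambda v: (not v.in_kev, -v.cvss))
```

Here `by_cvss` leads with the paper-critical CVE-2024-0001, while `by_risk` leads with the two actively exploited flaws, which is the ordering a defender with a 100,000-item backlog actually needs.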

To fully appreciate the scope of the problem, consider this recent post by security researcher Dan Lorenc, outlining a single day in which a staggering 138 CVEs were published, two of them assigned a severity score of 9.8, marking them as critical priority. Upon closer examination, it turns out those so-called critical vulnerabilities aren't vulnerabilities at all. Nor were the other 136 CVEs entered that day, all of which were submitted without communicating with the project developers, who would have quickly confirmed as much. As Lorenc noted, "I'd bet $1000 this is someone running a script on grepping old commit messages for things like this and auto-filing CVEs."

So are we seeing a higher number of CVEs because there are more vulnerabilities? Or is it because the rewards and recognition for discovering and reporting these issues have become more pronounced?

Fixing the Incentive Structure of CVE Reporting

Just as a policymaker can nudge citizen behavior by creating or removing certain incentives, we should consider revising the incentive structure of CVE reporting to discourage low-effort reporting of vulnerabilities. Consider some of the following ways that we might pull the levers of incentives to strike the right balance:

  • Reward quality over quantity: Implementing rewards based not only on the quantity but on the quality and impact of reported vulnerabilities would encourage researchers to focus on exploits that pose a genuine threat in a particular environment. A reward system focused on higher-quality submissions might better motivate researchers to prioritize vulnerabilities that could impact a large user base or cause widespread disruption and data breaches.

  • Enhance verification and accountability measures: To address the issue of anonymous submissions with little evidence, a tiered verification process could be established. While protecting the identity of researchers, this process would require more substantial proof of a vulnerability's existence and its potential impact before a CVE is assigned. Such a measure would help mitigate the risk of erroneous or misleading submissions.

  • Redefine the CVSS to reflect real-world risk: Revamping the CVSS to better reflect the real-world risk and exploitability of vulnerabilities would help ensure that designated scores provide more accurate guidance for prioritization. Incorporating feedback loops from organizations that have experienced attempted or successful exploitations could be one way to refine scoring metrics. While the CISA Known Exploited Vulnerabilities (KEV) catalog is a great stride in this direction, it doesn't necessarily represent all vulnerabilities being exploited in the wild.
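The tiered verification idea above could take many forms; one minimal sketch, entirely hypothetical, is a gate that grants a tier for each independent piece of evidence a submission carries and publishes an identifier only above a threshold:

```python
# Hypothetical sketch of a tiered verification gate for CVE submissions.
# The tier criteria and threshold here are illustrative assumptions, not
# any existing CVE Numbering Authority policy.

def verification_tier(maintainer_contacted: bool, has_poc: bool,
                      vendor_confirmed: bool) -> int:
    """Return a tier from 0-3; higher means better substantiated."""
    tier = 0
    if maintainer_contacted:
        tier += 1  # reporter at least talked to the project
    if has_poc:
        tier += 1  # working proof of concept supplied
    if vendor_confirmed:
        tier += 1  # vendor or maintainer acknowledged the flaw
    return tier

def should_assign_cve(tier: int, threshold: int = 2) -> bool:
    # Below-threshold reports would land in a human review queue
    # instead of being published immediately.
    return tier >= threshold
```

Under this sketch, the auto-filed, never-communicated reports from Lorenc's example would sit at tier 0 and be queued for review, while a vendor-confirmed report with a working proof of concept would clear the gate at tier 3.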

Incentives undoubtedly play a significant role in motivating individuals and organizations to invest time and resources into finding and disclosing vulnerabilities. However, it's become clear that to properly address the many issues plaguing the current state of CVE reporting, we must rethink the pivotal role that incentives play in shaping human behavior. Until we do so, expect to see another record-breaking year for CVEs.

About the Author(s)

Paul Asadoorian

Principal Security Evangelist, Eclypsium

Paul Asadoorian is the principal security evangelist at Eclypsium, focused on supply chain security awareness. Paul's passion for security extends back many years, to the WRT54G hacking days and reverse-engineering firmware on IoT devices for fun. He co-authored the book WRT54G Ultimate Hacking in 2007, which fueled the firmware hacking fire even more. Asadoorian has worked in technology and information security for more than 20 years, holding security and engineering roles at a lottery company, a university, and an ISP, working as an independent penetration tester, and serving at security product companies such as Tenable. In 2005, he founded Security Weekly, a weekly podcast dedicated to hacking and information security. Asadoorian is still the host of one of the longest-running security podcasts, Paul's Security Weekly. He enjoys coding in Python, telling everyone he uses Linux as his daily driver, poking at the supply chain, and reading about UEFI and other firmware-related technical topics.
