CVSS scores alone are ineffective risk predictors; modeling the likelihood of exploitation also needs to be taken into account.

The way organizations today decide which software vulnerabilities to fix and which to ignore reduces risk no better than if they rolled dice to choose, according to a new study from Kenna Security and the Cyentia Institute. The report's authors argue that enterprises need to get smarter about how they prioritize flaws for remediation if they want to make a real dent in their risk exposure.

The fact is that organizations today are drowning in software vulnerabilities. A separate report out today from Risk Based Security highlights this reality: last quarter alone, nearly 60 new vulnerabilities were disclosed every single day. Among the 5,375 flaws published in the first 90 days of the year, approximately 18% had CVSS scores of 9.0 or higher.

Those numbers in part demonstrate why some organizations can't fix every vulnerability in their environment - which means they must prioritize their efforts. The question is, what makes for a good prioritization system? 

Techniques like using CVSS severity scores to guide vulnerability management have long been the stand-in methodologies. But those scores can't necessarily predict how likely attackers are to actually exploit any given flaw. And that's the real fly in the ointment, because according to the Kenna and Cyentia report, just 2% of published vulnerabilities have observed exploits in the wild.

So say an organization miraculously had the resources to fix 98% of the flaws in its environment; if it chose the wrong 2% to miss, it could still be wide open to the full brunt of the vulnerabilities attackers are actually targeting. And given the breach statistics against mature organizations that presumably use some standardized method of prioritization, one has to question the efficacy of the same old way flaws are picked for remediation.

"Security people know intuitively that what they've been doing historically is wrong, but they have no data-driven way to justify a change internally," says Michael Roytman, chief data scientist for Kenna. "That's what we hope this report provides people." 

Cyentia statistically examined prioritization techniques in terms of two key variables, each measured against whether an exploit exists for a given flaw: coverage and efficiency.

Coverage measures how thoroughly an organization fixes the flaws in its environment for which an exploit exists. If there are 100 vulnerabilities in an environment that have exploits and the organization fixes only 15 of them, then the coverage of that prioritization is 15%. The leftover 85% is the organization's unremediated risk.

On the flip side, efficiency measures how effective the organization is at choosing vulnerabilities that are actually being exploited in practice. If the organization fixes 100 flaws but only 15 of them are being exploited, then that's a prioritization efficiency of 15%. For the other 85%, the time spent fixing them might have been better spent elsewhere.
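To make the two metrics concrete, here is a minimal Python sketch of the coverage and efficiency math described above. The function names and CVE identifiers are illustrative; the report itself does not publish code.

```python
# Illustrative sketch of the coverage/efficiency math described above.
# The CVE identifiers and set sizes are made up for demonstration.

def coverage(fixed: set, exploited: set) -> float:
    """Share of exploited vulnerabilities that actually got fixed."""
    if not exploited:
        return 1.0  # nothing exploited means nothing left uncovered
    return len(fixed & exploited) / len(exploited)

def efficiency(fixed: set, exploited: set) -> float:
    """Share of remediation effort spent on flaws with exploits."""
    if not fixed:
        return 0.0
    return len(fixed & exploited) / len(fixed)

# Coverage example from the article: 100 exploited flaws, 15 of them fixed.
exploited = {f"CVE-0000-{n:04d}" for n in range(100)}
fixed = {f"CVE-0000-{n:04d}" for n in range(15)}
print(f"coverage: {coverage(fixed, exploited):.0%}")      # 15%

# Efficiency example: 100 flaws fixed, only 15 of them being exploited.
fixed = {f"CVE-1111-{n:04d}" for n in range(100)}
exploited = {f"CVE-1111-{n:04d}" for n in range(15)}
print(f"efficiency: {efficiency(fixed, exploited):.0%}")  # 15%
```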

"Ideally, we’d love a remediation strategy that achieves 100% coverage and 100% efficiency," the report explains. "But in reality, a direct trade-off exists between the two." 

So a strategy that goes after only the very worst vulnerabilities, those with a perfect CVSS score of 10, would have a good efficiency rating but terrible coverage. At the other extreme, going after everything CVSS 6 and above means efficiency goes through the floor, because many of those flaws will never be exploited.
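The trade-off is easy to see in a simulation. The sketch below sweeps a CVSS-threshold remediation rule over synthetic data; the exploitation probabilities are invented for demonstration and are not drawn from the report.

```python
# Illustrative only: sweep a CVSS-threshold remediation rule over synthetic
# data to show the coverage/efficiency trade-off. The exploit probabilities
# are invented, not taken from the Kenna/Cyentia report.
import random

random.seed(0)

vulns = []
for _ in range(10_000):
    cvss = random.randint(0, 100) / 10  # scores 0.0-10.0 in 0.1 steps
    # Assume exploitation odds rise with CVSS, with ~2% exploited overall
    # (matching the report's headline figure).
    is_exploited = random.random() < 0.004 * cvss
    vulns.append((cvss, is_exploited))

total_exploited = sum(1 for _, e in vulns if e)

for threshold in (6.0, 7.0, 8.0, 9.0, 10.0):
    fixed = [(c, e) for c, e in vulns if c >= threshold]
    exploited_fixed = sum(1 for _, e in fixed if e)
    cov = exploited_fixed / total_exploited
    eff = exploited_fixed / len(fixed) if fixed else 0.0
    print(f"CVSS >= {threshold}: coverage {cov:5.1%}, efficiency {eff:5.1%}")
```

As the cutoff rises, coverage falls (fewer exploited flaws get fixed) while efficiency climbs (a larger share of the effort lands on flaws attackers actually use).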

When Cyentia measured the coverage and efficiency of prioritization using simplistic remediation rules, such as fixing by CVSS score, it found that the various choices tended to perform no better than choosing at random. It then analyzed coverage and efficiency using a more complex model that tries to predict which vulnerabilities are most likely to be exploited, using variables such as whether the flaw's description includes keywords like "remote code execution," a predictive weighting of the vendor, the CVSS score, and the volume of community chatter around a given flaw in reference lists like Bugtraq. This kind of modeling outperformed the historical rules with better coverage, twice the efficiency, and half the effort.
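The report does not publish its model, but the general shape of such a feature-based predictor might look like the toy sketch below. The choice of logistic regression, the feature encoding, and all of the training data here are assumptions made purely for illustration.

```python
# Toy illustration of a feature-based exploit predictor of the kind the
# report describes: score each flaw from signals such as an RCE keyword,
# a vendor weighting, CVSS score, and reference-list chatter volume.
# Model form and data are invented; this is not Kenna/Cyentia's model.
from sklearn.linear_model import LogisticRegression

# Feature vector per CVE: [mentions_rce, vendor_weight, cvss, chatter]
X_train = [
    [1, 0.9, 9.8, 42],  # RCE flaw in a heavily targeted vendor's product
    [0, 0.2, 6.5, 1],
    [1, 0.7, 7.2, 15],
    [0, 0.3, 9.0, 3],   # high CVSS but little attacker interest
    [0, 0.1, 4.3, 0],
    [1, 0.8, 8.1, 30],
]
y_train = [1, 0, 1, 0, 0, 1]  # 1 = exploit observed in the wild

model = LogisticRegression().fit(X_train, y_train)

# Score a new flaw; remediation would proceed in descending order of
# predicted likelihood rather than by raw CVSS severity.
new_flaw = [[1, 0.6, 7.5, 20]]
print(f"exploit likelihood: {model.predict_proba(new_flaw)[0][1]:.2f}")
```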

"We'll never, of course, have perfect in vulnerability remediation. What we have to do is figure out where we are and then figure out how to get better," says Jay Jacobs, chief data scientist and founder of Cyentia. "Being exploit-driven, I think, is one of the better approaches."

About the Author

Ericka Chickowski, Contributing Writer

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
