In theory, the flaw scoring system aims to give security professionals, researchers, and software vendors a repeatable way to rank the severity of a vulnerability by measuring the issue's base exploitability, how that exploitability evolves over time, and the impact the bug has on the IT environment. While use of the CVSS has grown, especially in government and research circles, a number of shortcomings -- including less-than-objective scoring -- have curtailed its use in commercial settings.
"It is a grand goal to have -- if I look at some data, and you look at some data, and we end up with the same score," says Carsten Eiram, chief research officer for Risk Based Security, a consultancy. "Yet the guidelines are not that clear, so we are seeing inconsistencies."
The Common Vulnerability Scoring System consists of a base score that attempts to measure the severity of a vulnerability, as well as a temporal score that measures the exploitability of an issue over time and an environmental score that measures how dependent a company is on the vulnerable software product. In 2007, the Forum of Incident Response and Security Teams (FIRST), which took over managing the project, released version 2, adding more detailed rankings and attempting to reduce subjective scores.
While the system works fairly well for many vulnerabilities, there is a large class of issues for which the scoring system fails. In an open letter to FIRST, Brian Martin of the Open Security Foundation and Risk Based Security's Eiram criticized the current iteration of the standard for its tendency to assign similar scores to flaws with significantly different impacts. Because most vulnerability attributes allow only three levels -- none, partial, and complete -- a vast middle ground gets collapsed into a single value: partial. Companies such as Oracle have already resorted to a crutch to work around the issue, creating a "partial+" ranking.
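To see why those three coarse levels dominate the outcome, the base-score equations from the published CVSS version 2 specification can be sketched in a few lines of Python. The constants below come from that specification; the function names are illustrative, not part of any standard tooling:

```python
# Numeric weights defined in the CVSS v2 specification. Note that each
# confidentiality/integrity/availability impact collapses to one of just
# three constants -- the coarseness critics point to.
IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}
ACCESS_VECTOR = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
ACCESS_COMPLEXITY = {"high": 0.35, "medium": 0.61, "low": 0.71}
AUTHENTICATION = {"multiple": 0.45, "single": 0.56, "none": 0.704}

def cvss2_base(av, ac, au, c, i, a):
    """Compute a CVSS v2 base score from the six base-metric values."""
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# A remotely exploitable flaw with "partial" impact across the board
# (vector AV:N/AC:L/Au:N/C:P/I:P/A:P) scores 7.5 ...
print(cvss2_base("network", "low", "none", "partial", "partial", "partial"))
# ... while the same flaw with "complete" impacts scores 10.0.
print(cvss2_base("network", "low", "none", "complete", "complete", "complete"))
```

Every vulnerability whose real-world impact falls anywhere between "none" and "complete" gets the same 0.275 weight, which is why otherwise very different flaws end up with near-identical scores.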
The lack of a fine-enough focus is something that FIRST aims to fix in CVSS version 3, says Seth Hanford, chair of the Special Interest Group for CVSS Version 3 at FIRST.
"The CVSS scores don't always line up with reality," he says. "It is hard to score, for example, a bypass of a firewall. It doesn't really affect the host dramatically, but it has a major impact on the business overall."
Because scores are calculated from the standpoint, or scope, of the host operating system, vulnerabilities that could impact other systems, or reduce the security posture of an entire network, are difficult to score properly under version 2, Hanford says. The group is creating a number of strategies to better rate vulnerabilities in CVSS version 3, including giving more weight to a "partial" ranking and adding new metrics, such as user interaction and privileges.
Adding more attributes to the scoring system can result in better scores, but could also make the scores more subjective, and subjective ratings are a lot less useful, says Vinnie Liu, a managing partner with consultancy Stach & Liu. An IT security group that uses vulnerability rankings to triage fixes may face pushback from developers, some of whom believe that refuting the rankings is a better use of their time than fixing the issues, Liu says.
"They will bicker over things like that, rather than focus on fixing the problems," he says. "They will focus on how to argue the number down."
FIRST plans to make the Common Vulnerability Scoring System an ongoing project: As soon as CVSS version 3 is released, the group will begin work on version 4, FIRST's Hanford says. Threats on the Internet keep changing, which tends to highlight different weaknesses in the system, he says.
"The nature of the threats on the Internet are very different," Hanford says. "With CVSS version 2, we were dealing primarily with network worms."
A draft of version 3 of the Common Vulnerability Scoring System is due by the end of the year, with the final version scheduled to be completed by summer 2014.