8/7/2008
11:40 PM
George V. Hulme
Commentary

Black Hat: The Microsoft Exploitability Index: More Vulnerability Madness

On Tuesday, Microsoft introduced the "Microsoft Exploitability Index." The software maker hopes this index will help companies more effectively prioritize the patches they need to deploy. I don't believe it will. And it may even make the vulnerability madness that exists today even more maddening.



If you'd like to read the particulars, my colleague Thomas Claburn covered the Exploitability Index in some detail in his news story. Essentially, Microsoft hopes the index will provide more information about vulnerabilities to help its customers determine which ones to patch first.

Microsoft will do this by adding three designations:

1) Consistent Exploit Code Likely
2) Inconsistent Exploit Code Likely
3) Functioning Exploit Code Unlikely

The first designation means a software flaw could be attacked with highly predictable results and would probably be very easy to exploit. This would be very bad: exploits would surface and would be weaponized for mass use. This would be a critical vulnerability, and it would need to be patched. Designation two could be bad, or it could be not-so-bad. Maybe an attacker could create an exploit, maybe not. And how the at-risk system reacts to the attack may not be very predictable. The third designation, Functioning Exploit Code Unlikely, is obvious: Microsoft has determined that developing a useful, functional attack tool is not likely.

Now, how does this index help security and business managers better understand the risks associated with software vulnerabilities beyond what they already get from Microsoft's existing low, moderate, important, and critical severity ratings? Not much. How will it change how organizations decide which patches are critical and need to be deployed first? Probably very little.

Let's say it's the second Tuesday of the month, and Microsoft releases a dozen security patches. (I know that is very, very, very hypothetical, but stick with me.) Two of these patches are ranked Consistent Exploit Code Likely; two are ranked Inconsistent Exploit Code Likely; and the remaining eight are all rated Functioning Exploit Code Unlikely. Do you just decide to immediately patch those at the first ranking, then those at ranking two, and then patch those at ranking three sometime later?

The answers are: maybe, maybe, and maybe.

What if the flaws ranked "Consistent Exploit Code Likely" all sit deep in the infrastructure, on systems that are well mitigated by good security controls such as firewalls and network segmentation, and the data they hold is neither regulated nor all that important to the business? And what if the vulnerabilities rated Inconsistent Exploit Code Likely sit on systems in the DMZ, or are only fairly well mitigated inside the infrastructure, but those systems hold data that is regulated, valuable to the business, or useful to an attacker for identity theft? What do you patch first?
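The scenario above can be sketched as a toy scoring exercise. This is purely illustrative: the function name, the weights, and the scoring formula are my assumptions, not Microsoft's methodology or any standard. The point is only that exploitability is one input among several.

```python
# Hypothetical patch-prioritization sketch. All weights and the formula
# are illustrative assumptions, not Microsoft's or anyone's methodology.

EXPLOITABILITY = {
    "Consistent Exploit Code Likely": 3,
    "Inconsistent Exploit Code Likely": 2,
    "Functioning Exploit Code Unlikely": 1,
}

def patch_priority(rating, internet_facing, holds_sensitive_data, well_mitigated):
    """Combine an exploitability rating with local asset context."""
    score = EXPLOITABILITY[rating]
    if internet_facing:
        score += 2          # e.g., a system sitting in the DMZ
    if holds_sensitive_data:
        score += 2          # regulated or business-critical data
    if well_mitigated:
        score -= 2          # firewalls, segmentation, other controls
    return score

# A "Consistent" flaw deep inside a well-controlled network can end up
# below an "Inconsistent" flaw on a DMZ host holding regulated data.
deep_internal = patch_priority("Consistent Exploit Code Likely",
                               internet_facing=False,
                               holds_sensitive_data=False,
                               well_mitigated=True)   # 3 - 2 = 1
dmz_regulated = patch_priority("Inconsistent Exploit Code Likely",
                               internet_facing=True,
                               holds_sensitive_data=True,
                               well_mitigated=False)  # 2 + 2 + 2 = 6
print(dmz_regulated > deep_internal)  # → True
```

Even this crude model shows why an exploitability label alone cannot dictate patch order: the asset context can outweigh it.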

This new index doesn't tell you. And it doesn't tell you much more than Microsoft's existing low, moderate, important, and critical severity rating system already does.

Don't get me wrong: the index does add some new information to the threat and vulnerability assessments security managers need to make. But it may just end up clouding the decision process, not making it more transparent.

Follow my security updates on Twitter.

 
