When is a critical issue not a critical issue? According to Microsoft's Michael Howard, the answer is, "when it's a Vista vulnerability." To be more precise, he says that if a vulnerability affects both XP and Vista, it should be ranked differently for the two systems. There has been, as you might imagine, quite a bit of discussion about this.
On the one hand, it's easy to say that a vulnerability that exists across two or more systems is the same vulnerability wherever it lives, and so it should have a consistent ranking. On the other hand, it makes sense to take into account a system's ability to protect against vulnerability exploits when scoring the vulnerability, right?
The discussion got me thinking about the process we all follow when assigning weight to different risks. We all go through this process dozens of times a day when we decide to do everything from taking the first sip of hot coffee in the morning to stretching the amber light at the last intersection on our way home from work in the evening. We weigh the possible consequences of the act, the cost of a negative outcome, the likelihood of that negative outcome, and the benefits of a positive outcome in almost everything we do.
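That everyday weighing can be sketched as a simple expected-value calculation. This is a toy illustration of the reasoning described above, not a formula anyone in the column actually proposes; the function name and numbers are assumptions for the sake of the example:

```python
def expected_value(benefit: float, p_bad: float, cost_bad: float) -> float:
    """Toy risk calculus: the act is worth it only if the expected
    benefit outweighs the probability-weighted cost of a bad outcome."""
    return benefit - p_bad * cost_bad

# Stretching the amber light: a couple of minutes saved, weighed against
# a small chance of a very expensive outcome (ticket or collision).
net = expected_value(benefit=2.0, p_bad=0.01, cost_bad=500.0)
print(net)  # negative, so the "calculus" says stop
```

The point of the sketch is that the structure of the calculation is universal, but every input value (benefit, probability, cost) is supplied by the person doing the weighing.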
Thought about in this way, it seems to make sense to vary the ranking of a vulnerability by taking things like the operating system's defense capabilities into account. The problem is, that still gives us a very incomplete picture.
When we perform the calculus of risk, it's based on our own values for costs and benefits. Those values are unique to each of us (or to our organizations), and while outsiders can provide guidance, they cannot take on the responsibility of assigning those values. Just as our organizations (or our families) are living things, so are the values we assign -- they change with the circumstances.
Let me give you an example: When our son was very young, my wife and I would often travel on different airplanes. There was a small cost (in both dollars and time) associated with protecting against an unlikely vulnerability (a commercial plane crash) but, for us, the cost of that vulnerability being exploited was so high that we were willing to take mitigating steps. Now that our son is much older, we fly together because the values we assign to particular variables have changed. The thing is, we were the ones doing the calculus. When you look at the risks of vulnerabilities and the costs of both exploitation and mitigation, you're the one assigning values to the variables.
Microsoft (or SANS, or CERT, or any other organization) can best help by providing constants, rather than additional variables. They cannot know how your organization has decided to enforce policy, what your training regimen involves, or what additional security measures you've put in place. If they want to help prepare you to deal with security issues, they can provide factual statements of vulnerabilities and likely exploits, and a critical ranking based on the possible attack vectors and results -- not on how well a particular product can be configured to deal with the issue.
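The constants-versus-variables split is roughly how CVSS separates a vendor-supplied base score from organization-specific environmental adjustments. The sketch below is an illustrative assumption, not any vendor's or CVSS's actual formula; the function name, weights, and scores are all hypothetical:

```python
def adjusted_risk(base_score: float, mitigation_factor: float,
                  asset_value: float) -> float:
    """The vendor supplies the constant (base_score, ranking the raw
    vulnerability); the organization supplies the variables (how well
    it mitigates, and how much the affected asset matters to it)."""
    return base_score * (1.0 - mitigation_factor) * asset_value

# The same "critical" vulnerability, scored by two different shops:
hardened = adjusted_risk(base_score=9.0, mitigation_factor=0.8, asset_value=1.0)
exposed = adjusted_risk(base_score=9.0, mitigation_factor=0.1, asset_value=1.5)
# The hardened shop ends up with a far lower effective risk, without the
# vendor ever changing its consistent base ranking.
```

The design choice mirrors the column's argument: the base score stays the same everywhere, and only the locally assigned variables move the final number.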
I understand the impulse to acknowledge that progress has been made in the fight against unwelcome intrusion. I will even allow that significant progress can be made, though it's far from certain just how much progress the current release of Vista represents. With that said, what I want from those who find and warn about exploits is best delivered if they rank products consistently -- and leave the variables to me.
Curt Franklin is an enthusiastic security geek who used to be one of the Power Rangers (the red one, we think). His checkered past includes stints as a security consultant, managing director of a commercial IT testing lab, repo man, bull pusher, security editor at Network Computing, and various editorial positions at places like InternetWeek, Byte, and The GARS Mouth. Special to Dark Reading.