The Trouble With Security Metrics
A Q&A with the author of The Security Risk Assessment Handbook
Security practitioner Doug Landoll is passionate about risk assessments and security measurements. Author of The Security Risk Assessment Handbook and CEO of Assero Security, a risk consultancy for SMBs, Landoll believes the industry engages in far too many theatrical risk assessments for the sake of audits. These assessments never return solid measurements of risk because the collection methods are faulty, he says. As organizations seek to meet risks head on, they need better visibility into which security initiatives work, which don't, and which need improvement. Done right, security metrics can help provide the estimates to plan out effective strategies. Dark Reading recently caught up with Landoll to talk about his thoughts on how organizations can improve their collection methods to create security metrics that mean something.
Dark Reading: I know you're a big believer that if you depend on a faulty method of gathering data, security metrics won't really matter for much. Can you explain a little about your philosophy when it comes to measuring risk?
Landoll: You want to measure what's important to the organization, instead of thinking what you measure is what's important. There's a difference.
It is what I call the SIEM dog wagging the governance tail. When you've got some great technology that's pumping out tons of data, the bottom-up approach is to say, 'Wow, look at all this data. Let's build a dashboard.' So you get this dashboard with all these really cool metrics and trends and everything, and now that becomes what you measure and that becomes what's important to you -- which is way backward. What's important to you should be based on your company mission. Based on your company strategy, your security strategy should have some goals you're trying to get to -- those are the things to measure.
Just because a metric is not being produced by a piece of technology doesn't mean we shouldn't grab it, report it, trend it, and use it to correct our strategy. Not doing so is a mistake. Top down is the only way to do metrics.
Dark Reading: Can you give an example of a risk measurement that people wouldn't think of because maybe they don't have an automated dashboard to spit it out at them?
Landoll: Let's say you're going to spend money to develop a computer incident response capability. I think you have some expected improvements from that. Before the project even starts, you can collect metrics on it. I think you're expecting a decreased time from incident discovery to recovery. I think you're expecting to be able to demonstrate compliance with breach notification requirements, and I think you want to minimize damage from incidents. If I can prove those things, that has been a successful project.
I'm not sure that people are taking the time to measure the gap from incident discovery to recovery. I don't think it would be that hard, but it's not pumped out by a tool, so you're usually not going to measure it. So how would you do that?
One example would be to grab three to five common incidents that happen and measure the various phases. You should be able to have a date and time for the detection phase, the analysis phase, containment, eradication, and recovery. Here's how we get the call, how long it takes them to figure out what it is, and so on. And then you check to see if that is getting any better six months down the line.
That would be interesting, but as you know, there is no tool that does that. You do have a ticketing system, though, and maybe the timestamps are in the incident report. The data is there somewhere.
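The phase-timing measurement Landoll describes could be sketched in a few lines. Everything here is hypothetical: the timestamps, the phase names as dictionary keys, and the single-incident structure are assumptions for illustration; in practice the data would come from a ticketing system or incident report export.

```python
from datetime import datetime

# Hypothetical phase timestamps for one incident, as they might be pulled
# from a ticketing system export. Phase names follow the interview:
# detection, analysis, containment, eradication, recovery.
incident = {
    "detection":   datetime(2013, 1, 4, 9, 15),
    "analysis":    datetime(2013, 1, 4, 10, 40),
    "containment": datetime(2013, 1, 4, 13, 5),
    "eradication": datetime(2013, 1, 5, 11, 30),
    "recovery":    datetime(2013, 1, 5, 16, 0),
}

def phase_durations(timestamps):
    """Elapsed hours between each pair of consecutive phases."""
    phases = list(timestamps.items())
    durations = {}
    for (name_a, t_a), (name_b, t_b) in zip(phases, phases[1:]):
        durations[f"{name_a} -> {name_b}"] = (t_b - t_a).total_seconds() / 3600
    return durations

def discovery_to_recovery(timestamps):
    """Total elapsed hours from first detection to full recovery."""
    times = list(timestamps.values())
    return (times[-1] - times[0]).total_seconds() / 3600

print(phase_durations(incident))
print(f"Discovery to recovery: {discovery_to_recovery(incident):.2f} hours")
```

Run this against three to five common incident types every period and the trend Landoll wants falls out of a simple comparison of the totals.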
[How are CISOs preparing for 2013? See 7 Risk Management Priorities For 2013.]
Dark Reading: What about metrics around security awareness? It seems it is easy to game that system by measuring how many people have been trained rather than actual results.
Landoll: One thing you can do is test for susceptibility to phishing attacks. Use a company like PhishMe. It's a pretty affordable service. Do it on a regular basis and get reports on the results. I would hope that in six months, when the culture gets used to these phishing emails going around and people get embarrassed by clicking on them, nobody's going to fall for that stuff anymore. Or there will be a considerable improvement.
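The improvement Landoll hopes for is easy to quantify once you have campaign results. The figures below are invented for illustration; a simulated-phishing service such as PhishMe would supply the real send and click counts.

```python
# Hypothetical results from quarterly simulated-phishing campaigns.
# All counts are illustrative assumptions, not real data.
campaigns = [
    {"quarter": "Q1", "emails_sent": 500, "clicks": 120},
    {"quarter": "Q2", "emails_sent": 500, "clicks": 70},
    {"quarter": "Q3", "emails_sent": 500, "clicks": 35},
]

def click_rates(results):
    """Click-through rate per campaign, as a fraction of emails sent."""
    return {c["quarter"]: c["clicks"] / c["emails_sent"] for c in results}

rates = click_rates(campaigns)

# Relative improvement from the first campaign to the most recent one.
first, last = campaigns[0], campaigns[-1]
improvement = 1 - (last["clicks"] / last["emails_sent"]) / (
    first["clicks"] / first["emails_sent"]
)

print(rates)
print(f"Relative improvement since {first['quarter']}: {improvement:.0%}")
```

The point is that the metric tracks behavior (who clicked), not activity (who sat through training), which is exactly the gaming problem the question raises.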
Dark Reading: When it comes to thinking up new metrics or effective ways to measure security, what kind of mindset do you need to take into that process?
Landoll: I find it's not cookie-cutter, but it tends to be easier than you'd think. The methodology I use is, I start at the high level and say, 'What would I like to know?' Then I think about whether there is any information out there. And if it's not there, I figure out how to generate it. So let's say, for example, I want to know how many laptops might be stolen this year from my traveling consultants. I think that data's there, or at least last year's data is there, so who would have that at a company? I think, the guy that ordered those new laptops.
He'll just tell you, it was six last year and there were eight the year before. So you have a pretty good guess it's between five and 10 this year -- unless you've done some kind of change control.
I think if you start at the high level and think, here's what I'd like to know, where would that information be, or how would I create it? Very few times will you have to create it, but when you do, it's not as hard as you think. It could be a survey, it could be, 'Let's talk to the guy in charge and see what value he's put on that.' People shy away from that because they feel inaccurate. But measurement is reduction of uncertainty. It is estimation. We don't have to say we'll lose exactly 7.1 laptops per year. It's OK to say it's between five and 10.
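The estimation mindset here can be sketched in a couple of lines. This is purely illustrative: the historical counts are the figures from the anecdote, and the widening offsets are an arbitrary judgment call for the sketch, not a statistical method from the interview.

```python
# Hypothetical historical laptop-loss counts from purchasing records,
# per the interview: six lost last year, eight the year before.
history = [8, 6]

def rough_range(counts):
    """Turn historical loss counts into a coarse estimate range.

    Measurement is reduction of uncertainty, not a point prediction:
    we widen the observed min/max slightly (an arbitrary choice here)
    to admit year-to-year variation.
    """
    return min(counts) - 1, max(counts) + 2

low, high = rough_range(history)
print(f"Expect between {low} and {high} lost laptops this year")
```

Crude as it is, a range like this is enough to budget for replacements or to decide whether disk encryption is worth the spend, which is all the metric needs to do.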
I would also recommend Doug Hubbard's book. It's called How To Measure Anything: Finding the Value of "Intangibles" in Business. There's a chapter on security. He certainly understands our industry. The mistake everyone makes is that they think their industry is unique and they've never had this problem before, and he says nonsense to that. People had all of these problems before and they thought of clever ways to grab metrics.
Dark Reading: In your opinion, what would you say are some of the biggest mistakes people make when measuring security?
Landoll: I would say letting the available data drive the metrics program. That's a huge mistake. Another one is not collecting enough data points and relying on single data points, or assuming that the data you want isn't available and settling for something else. I think that's a poor assumption.
Dark Reading: You've certainly crusaded against relying on single sources of data for risk assessments and measurements, advocating for what you call the RIIOT (Review, Interview, Inspect, Observe, and Test) method of risk assessment. Why is it so important to pull metrics and assessment data from different collection methods?
Landoll: There are a lot of approaches to risk management that are not accurate. When you boil down your security program to a 50-question questionnaire, and you divide it among the people in the organization, send it out, and compile it, you know it's not accurate and they know it's not accurate. We're just checking a box and when the auditor comes, we say, 'Here, we did a risk assessment. Good luck.'
This is your plan. This is your strategy. This is where you determine which security activities you're going to be doing over the next few years. And yet you're not collecting the data, you're drawing wrong conclusions, and you're getting budget and making a plan based on faulty data. That's alarming.
Let's say you do a traditional risk assessment where you're reviewing documents and you're interviewing people about security awareness training. The first interview is going to come out really good -- people tend to want to make themselves look good. If that's your only input, you have to conclude that security awareness training doesn't need to be improved. However, if you were to observe Post-it notes stuck to those screens, and you tried just one social engineering trick and it worked, now you've got to conclude that it's not effective. That's a complete 180, and just by doing a few more tests.
Another good example is on system hardening. It's not about finding the error; it's about finding the root cause. You can do a scan, find a vulnerability, and say you patched it. But what if I reviewed the documents and then saw that the hardening documents weren't detailed enough? And I interviewed people and they said, 'There's change management procedures, but we don't use those, and we didn't write that hardening document and don't use it.' So the recommendation from that shouldn't just be to patch the system. It's really a little training, some governance, some hardening documents.
I'm just really concerned that the one security activity that we do as professionals to help plan out our strategy suffers from way too many shortcuts.