Measuring risk isn’t as simple as some make it out to be, but there are best practices to help you embrace the complexity in a productive way. Here are five.

Daniel Gordon, Cyber Intel Analyst, Lockheed Martin Computer Incident Response Team

January 30, 2017


Broadly speaking, cybersecurity is risk identification and risk mitigation in the cyber domain. Measuring risk quantitatively is valuable because it lets security teams assess their capabilities somewhat objectively, which helps everyone make better decisions. For example, when deciding whether to upgrade all your firewalls or invest in organization-wide two-factor authentication, that decision should be based, in part, on what risk exists now and what risk will remain after you implement the change. It may surprise you, but people are generally pretty bad at this, resulting in things like transportation disasters, major breaches, economic bubbles, wars, and bad movies.
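
To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every probability and dollar figure in it is a hypothetical assumption for illustration, not a real estimate for any organization:

```python
# Hypothetical annualized-loss comparison of two mitigations.
# All probabilities and loss figures are illustrative assumptions.

def annualized_loss(annual_probability, expected_cost):
    """Annualized loss expectancy: chance the event happens in a year times its cost."""
    return annual_probability * expected_cost

# Assumed current risks: credential theft (30%/year, $500k) and
# perimeter compromise (10%/year, $800k).
baseline = annualized_loss(0.30, 500_000) + annualized_loss(0.10, 800_000)

# Option A: org-wide two-factor auth, assumed to cut credential theft to 5%/year.
option_2fa = annualized_loss(0.05, 500_000) + annualized_loss(0.10, 800_000)

# Option B: firewall upgrade, assumed to cut perimeter compromise to 4%/year.
option_firewall = annualized_loss(0.30, 500_000) + annualized_loss(0.04, 800_000)

print(f"Baseline:           ${baseline:,.0f}/year")
print(f"With 2FA:           ${option_2fa:,.0f}/year (saves ${baseline - option_2fa:,.0f})")
print(f"With new firewalls: ${option_firewall:,.0f}/year (saves ${baseline - option_firewall:,.0f})")
```

Under these made-up inputs, two-factor authentication removes more risk; flip the assumptions and the answer flips too, which is exactly why the inputs need to be measured rather than guessed.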

In the book How to Measure Anything in Cybersecurity Risk, Hubbard and Seiersen evaluate risk by (and I'm paraphrasing) estimating likelihood using modeling principles, and estimating impact using cost estimation and the CIA (Confidentiality, Integrity, and Availability) model.
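
As a rough illustration of that style of estimate (not the book's own worked example), here is a small Monte Carlo sketch: likelihood is modeled as an annual probability of occurrence, and impact is drawn from a lognormal distribution fitted to an assumed 90% confidence interval for cost. All numbers are placeholders:

```python
import math
import random

TRIALS = 100_000
P_EVENT = 0.15                 # assumed annual probability the scenario occurs
LOW, HIGH = 50_000, 2_000_000  # assumed 90% confidence interval for the cost ($)

def draw_impact(low, high):
    """Sample a cost from a lognormal distribution fit to a 90% confidence interval."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # 90% CI spans about +/-1.645 sigma
    return random.lognormvariate(mu, sigma)

losses = sorted(
    draw_impact(LOW, HIGH) if random.random() < P_EVENT else 0.0
    for _ in range(TRIALS)
)

print(f"Expected annual loss:        ${sum(losses) / TRIALS:,.0f}")
print(f"95th-percentile annual loss: ${losses[int(0.95 * TRIALS)]:,.0f}")
```

Run over several scenarios and compared before and after a proposed change, this kind of simulation gives you a defensible picture of how much risk a mitigation actually buys down.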

Here’s where it gets more complicated: evaluating current and future risk requires accounting for people … and people make everything harder. A good risk analysis should account for risky behaviors by users, administrators, and security personnel, both before and after you make the change. 

A substantial body of research shows that when you tell people about safety features, they change their behavior to become riskier, a phenomenon often called risk compensation. Examples include traffic safety measures, child-proofed medicine bottles, bicycle helmet use, and mobile phone use while driving. People do it for convenience, out of boredom, and for other bad reasons. Here are some hypothetical examples in cybersecurity:

{Table 1}

There’s another group of people who alter their behavior when you implement a risk mitigation - and they’re even tougher to account for. Who is it? No, it’s not furries. It’s miscreants. (OK, they might also be furries.) Risk mitigation should account for how attackers will evolve. If you’re facing a persistent threat with a lot of resources and one attack is unsuccessful, you should anticipate that the threat will evolve its TTPs (tactics, techniques, and procedures). If you attempted to mitigate the risk of banking trojans served by botnets but failed to account for the evolution to ransomware, your risk model was probably faulty.

Getting Risk Management Right: Five Recommendations

1. Gather threat intelligence and data about the behavior of your users. Threat intelligence should describe a series of attacks in enough detail to help you understand and predict future attacks. Data could include behavior analytics from logs (a minimal example appears after this list), but it might also come from defining groups of users and interviewing them to see how they actually operate.

2. Do not reveal to miscreants how they were detected if you can help it. If miscreants don’t know that a risk mitigation exists, they can’t react to it. If you block or detect them, try to hide your capabilities. For insider threats, the decision on what to communicate may already be determined by your legal department or HR.

3. Be deliberate in how you publicize risk mitigations in your organization. While hiding a risk mitigation from your own organization would prevent users from changing their behavior, it might be unethical or keep you from getting credit for your work. A better approach is to emphasize to users and decision-makers what risks still exist, helping them make informed decisions that reduce risky behaviors.

4. Be deliberate in how you share information externally. Risk mitigations implemented by other organizations may also change the behavior of miscreants. If you’re selective about whom you share data with, or share it along with guidance on how it should be handled, there’s less chance of others being careless and causing unexpected changes in miscreant behavior. If you share publicly, account for the uncertainty created by a likely change in attacker TTPs.

5. Don’t spread FUD. FUD (fear, uncertainty, and doubt) is incorrect or misleading information that distorts how risk or uncertainty is measured. Some people spread FUD out of sloppiness, some do it unintentionally, and some do it deliberately for business reasons. It’s bad for the cybersecurity community and industry as a whole, it’s bad for decision-makers, and it’s counterproductive in the long run.
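
Coming back to the first recommendation, here is one minimal, hypothetical sketch of what "behavior analytics from logs" can mean in practice: build a per-user baseline of normal login hours and flag logins that fall far outside it. The data, thresholds, and the hour-of-day feature are assumptions for illustration only:

```python
from statistics import mean, pstdev

# Hypothetical per-user login hours parsed from past authentication logs.
baseline_hours = {
    "alice": [9, 10, 9, 11, 10, 9, 10],
    "bob":   [14, 15, 13, 14, 15, 16, 14],
}

# New logins to evaluate against each user's baseline: (user, hour_of_day).
new_logins = [("alice", 3), ("bob", 15)]

def is_unusual(hour, history, threshold=3.0):
    """Flag a login hour more than `threshold` standard deviations from the user's norm."""
    avg, spread = mean(history), pstdev(history) or 1.0
    return abs(hour - avg) > threshold * spread

for user, hour in new_logins:
    if is_unusual(hour, baseline_hours[user]):
        print(f"{user}: login at {hour}:00 is far outside their usual pattern")
```

In a real environment you would pair signals like this with the interview-based understanding of how each group of users actually works, so that a flagged event means something to the analyst who sees it.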

About the Author(s)

Daniel Gordon

Cyber Intel Analyst, Lockheed Martin Computer Incident Response Team

Daniel Gordon, CISSP, is a member of the Lockheed Martin Computer Incident Response Team. He has worked in IT and information security for over 10 years. He holds a BA in political science from St Mary's College of Maryland and a graduate certificate in modeling and simulation of behavioral cybersecurity from the University of Central Florida. He is currently pursuing his Master's degree in modeling & simulation.

