
Threat Intelligence

3/12/2015
10:30 AM
Peleus Uhley
Commentary

Deconstructing Threat Models: 3 Tips

There is no one-size-fits-all approach for creating cyber threat models. Just be flexible and keep your eye on the who, what, why, how and when.

There are a lot of theories about creating threat models. Over the years, I’ve used threat models in many ways at both the conceptual and application level. Their utility often depends on the context and the job to which they are applied.

Deconstructing the purpose of threat models requires taking a step back to examine their value with respect to any risk situation, concentrating on who, what, how, when, and why:

  • Who is the entity conducting the attack, including nation states, organized crime and activists. 
  • What is the ultimate target of the attack, such as credit card data or computer resources. 
  • How is the method by which attackers will get to the data, such as SQL injection or buffer overflows. 
  • Why captures the reason the target is important to the attacker. Does the data have monetary value, or are you just a pool of resources an attacker can leverage in pursuit of other goals?

Simply put, a threat can be described as who will target what, using how in order to achieve why.
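To make that concrete, here is a minimal sketch, in Python, of how those dimensions might be recorded. The structure and the sample entry are illustrative only, not a format the article prescribes:

from dataclasses import dataclass
from typing import List

@dataclass
class Threat:
    """One entry in a simple threat model: who targets what, how, and why."""
    who: str         # the entity conducting the attack, e.g. organized crime
    what: str        # the ultimate target, e.g. credit card data
    how: List[str]   # the methods used to reach the target, e.g. SQL injection
    why: str         # why the target matters to the attacker

# Hypothetical example entry, for illustration only
threat = Threat(
    who="organized crime",
    what="credit card data",
    how=["SQL injection"],
    why="the data has direct monetary value",
)
print(f"{threat.who} will target {threat.what} using {threat.how[0]} "
      f"in order to achieve: {threat.why}")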

What and How: Threat models typically put most of the emphasis on what and how. Looking at the what and how allows you to identify potential bugs that will crop up in the design, regardless of who might be conducting the attack and their motivation. However, the challenge with focusing solely on what and how is that they change over time.

Who and Why: Unlike what and how, who and why tend to be fairly constant. The usual assumption is that it doesn't really matter who is attacking or why; the focus should be on stopping the attack. However, focusing on who and why can lead to new ideas for overall mitigations that provide better protection than the point fixes identified by how.

For example, we knew that attackers using advanced persistent threats (APT) (who) were fuzzing (how) Flash Player (what). To look at the problem from a different angle, we decided to stop and ask why. It wasn’t solely because of Flash Player’s ubiquity. At the time, attackers were focusing on Flash Player because they could embed it in an Office document to conduct targeted spearphishing attacks.

Targeted spearphishing is a valuable attack method because hackers can directly access a specific target with minimal exposure. By adding a Flash Player warning dialog in Office to alert users of a potential spearphishing attempt, we addressed the very thing that made Flash Player valuable to the attackers and made the attack less effective. After that simple mitigation was added, the number of zero-day attacks dropped, forcing the attackers to develop new exploit methods.

When: Examining the when can also be extremely useful. Most people think of threat models as a tool for the design phase. However, threat models can also be used in developing incident response plans. You can take any given risk and consider, "When this mitigation fails or is bypassed, we will respond by..."
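As a rough sketch of what that can look like in practice, the mapping below pairs each mitigation with a pre-agreed response. The mitigations and responses shown are hypothetical examples, not recommendations from the article:

# Hypothetical mitigations paired with pre-agreed responses, for illustration only.
incident_response_plan = {
    "input validation on the payment form":
        "rotate exposed credentials and audit recent transactions",
    "warning dialog before embedded content runs":
        "review telemetry for signs of exploitation and prepare an emergency patch",
}

def respond_to_bypass(mitigation: str) -> str:
    """Return the pre-agreed response for a mitigation that failed or was bypassed."""
    return incident_response_plan.get(
        mitigation,
        "escalate to the security team for an ad-hoc response",
    )

print(respond_to_bypass("input validation on the payment form"))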

Threat Model Flexibility
Having a threat model for an application can be beneficial in controlling both high-level (who/why) and low-level (how/what) threats. That said, the reality is that many companies have moved away from traditional threat models. Keeping a threat model up to date can take a lot of effort in a rapid development environment, as Adam Shostack covers in his blog post, "The Trouble with Threat Modeling."

Unfortunately, there is no one-size-fits-all solution to this problem. In my experience, the best approach has been to try to keep the spirit of threat modeling while being flexible on the implementation. To achieve this, consider three factors:

  1. There should be a general high-level threat model for each overall application. This high-level model ensures everyone is headed in the same direction, and it can be updated as needed for major changes to the application. A high-level threat model is good for sharing with customers, helping new hires understand the security design of the application, and serving as a reference for the security team.
  2. Threat models don’t have to be documented in the traditional threat model format. The traditional format is very clear and organized, but it can also be complex. The goal of a threat model is to document risks and formulate plans to address them. For individual features, this can be a simple paragraph that everyone can understand. Even writing, “this feature has no security implications,” is informative.
  3. Put the information where developers are most likely to find it. For instance, if you use the simplified format referenced above, then it is easier to place the threat information in line with where the mitigation exists. The threat information can be included directly in the specs, in the code comments, or with threat unit tests (see the sketch after this list). This can help eliminate the cross-referencing issues that arise when formal threat models exist as completely separate documents.
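As one possible illustration of that third point, a "threat unit test" keeps the threat description next to the code that mitigates it, so developers encounter it where they already work. The sanitize_comment function and the threat it guards against are hypothetical, assumed here only to show the pattern:

import html
import unittest

def sanitize_comment(text: str) -> str:
    """Escape user-supplied comment text before it is rendered as HTML."""
    return html.escape(text)

class CommentThreatTests(unittest.TestCase):
    def test_script_injection_is_escaped(self):
        # Threat: an attacker (who) targets other users' sessions (what) via
        # stored cross-site scripting in comments (how) to hijack accounts (why).
        # Mitigation: all comment text is HTML-escaped before rendering.
        payload = '<script>alert("xss")</script>'
        self.assertNotIn("<script>", sanitize_comment(payload))

if __name__ == "__main__":
    unittest.main()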

The concept of threat modeling still serves a valid purpose by helping to ensure the design is sound. By examining the who, why, and when, the traditional approach to threat modeling can be made more effective at identifying high-level mitigations and responses. By being flexible with the approach to documentation, security information can be captured where developers are most likely to find, use, and maintain it. These steps can help threat modeling evolve alongside our development processes.

Peleus Uhley has been a part of the security industry for more than 15 years. As the lead security strategist at Adobe, he assists the company with proactive and reactive security. He contributes to the Internet Bug Bounty, OWASP and several other community organizations.
 
