Threat Intelligence
2/1/2017 10:30 AM
Danelle Au
Commentary

A New Mantra For Cybersecurity: 'Simulate, Simulate, Simulate!'

What security teams can learn from the Apollo 13 space program, a global pandemic, and major infrastructure disruptions to identify their best responses to attacks.

Over the long holidays in December (and thanks to the massive California storms), I had the chance to re-watch some great movies, including Apollo 13, one of my all-time favorites. Apollo 13 is well directed and has a great cast (including the amazing Gary Sinise), but most importantly, it features brilliant engineering.

For those who are not familiar with the story, Apollo 13 was the seventh manned mission in the Apollo program and was intended to land on the moon. It launched on April 11, 1970, to little fanfare until, two days later, an oxygen tank exploded. The crew abandoned its plans to land on the moon and focused instead on the new objective of returning safely to Earth despite malfunctioning equipment, limited power, loss of heat, and a lack of potable water.

In April 2015, Lee Hutchinson wrote an article about Apollo 13 in Ars Technica, analyzing what went wrong based on the expert perspective of Apollo flight controller Sy Liebergot. It’s a geeky but enlightening piece covering everything you would ever want to know about oxygen tanks, lunar modules, command modules, flight parameters, and Apollo 13. I encourage you to read it. The most poignant part of the article was this:

“The thing that saved Apollo 13 more than anything else was the fact that the controllers and the crew had both conducted hundreds—literally hundreds—of simulated missions. Each controller, plus that controller’s support staff, had finely detailed knowledge of the systems in their area of expertise, typically down to the circuit level. The Apollo crews, in addition to knowing their mission plans forward and backward, were all brilliant test pilots trained to remain calm in crisis (or "dynamic situations," as they’re called). They trained to carry out difficult procedures even while under extreme emotional and physical stress. ... The NASA mindset of simulate, simulate, simulate meant that when things did go wrong, even something of the magnitude of the Apollo 13 explosion, there was always some kind of contingency plan worked out in advance.”

In other words, simulations identify gaps and prepare teams for when sh*t hits the fan.

This is not just limited to NASA. In the fall of 2002, Congress mandated that the National Infrastructure Simulation and Analysis Center, or NISAC (officially founded in 1999 as a collaboration between two national laboratories, Sandia and Los Alamos), model disruptions to infrastructure: fuel supply lines, the electrical grid, food supply chains, and more. After 9/11, Congress wanted to understand the impact of infrastructure disruptions: how much they might cost, how many lives would be lost, and how the government would respond.

In 2005, when the nation and the world were experiencing the bird flu crisis, NISAC was asked to simulate what a global pandemic would look like and how best to respond. Based on simulations of complex economic, cultural, and geographic systems, a Sandia scientist named Robert Glass theorized that a pandemic like the bird flu "exhibits many similarities to that of a forest fire: You catch it from your neighbors." He demonstrated that high school students would be the biggest transmitters and recommended that thinning out contact among school-age kids by closing schools (rather than closing borders) would be a better way to prevent the pandemic from spreading.
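To make the "forest fire" intuition concrete, here is a deliberately simplified contact-network sketch in Python. It is emphatically not NISAC's model; the population sizes, contact counts, and transmission probability are invented purely for illustration. Students get many in-school contacts, everyone gets a few household contacts, and the infection can only be caught from a neighbor:

import random

random.seed(7)
N_STUDENTS, N_ADULTS = 300, 700
N = N_STUDENTS + N_ADULTS  # people 0..N-1; the first N_STUDENTS are students

def build_contacts(close_schools):
    """Build an undirected contact network as adjacency sets."""
    contacts = [set() for _ in range(N)]
    # Household contacts: everyone gets a couple of random contacts.
    for i in range(N):
        for _ in range(2):
            j = random.randrange(N)
            contacts[i].add(j)
            contacts[j].add(i)
    # School contacts: dense mixing among students, unless schools are closed.
    if not close_schools:
        for i in range(N_STUDENTS):
            for _ in range(12):
                j = random.randrange(N_STUDENTS)
                contacts[i].add(j)
                contacts[j].add(i)
    return contacts

def outbreak_size(contacts, p_transmit=0.12, seeds=5):
    """SIR-style spread: you can only catch it from an infectious neighbor."""
    infectious = set(random.sample(range(N), seeds))
    recovered = set()
    while infectious:
        newly_infected = set()
        for i in infectious:
            for j in contacts[i]:
                if j not in recovered and j not in infectious and random.random() < p_transmit:
                    newly_infected.add(j)
        recovered |= infectious
        infectious = newly_infected
    return len(recovered)

print("schools open  :", outbreak_size(build_contacts(close_schools=False)))
print("schools closed:", outbreak_size(build_contacts(close_schools=True)))

With these made-up numbers, the schools-closed run typically stays close to the handful of seeded cases, while the schools-open run produces a far larger outbreak, which is the gist of Glass' finding that thinning out school contacts beats closing borders.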

This is what breach or adversary simulations allow you to do in cybersecurity as well. Breach simulation is an emerging technology that mimics hacker breach methods to gain the hacker’s perspective. Simulators placed in various security zones and on endpoints play continuous war games against each other to challenge security controls and identify enterprise risks. Unlike vulnerability management systems, breach simulations are safe (simulators only attack one another), focus on the complete breadth of hacker techniques instead of just vulnerabilities, and showcase the kill chain impact.
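To give a flavor of the war-game idea, here is a minimal, hypothetical sketch in Python: simulators in different zones attempt a handful of attacker techniques against their peers and record which ones were blocked. The zone names, techniques, and the table of what each control blocks are all invented for illustration and aren't tied to any particular product; in a real deployment the blocked/succeeded outcome would be observed by the simulators, not looked up.

from dataclasses import dataclass

# Hypothetical kill chain stages and example techniques a simulator might try.
TECHNIQUES = [
    ("infiltration", "phishing-link-download"),
    ("lateral-movement", "smb-admin-share-copy"),
    ("exfiltration", "dns-tunnel-upload"),
]

# Hypothetical zone pairs and the techniques the controls between them block.
# In a real simulation this would be the observed result, not a lookup table.
BLOCKED_BETWEEN = {
    ("internet", "dmz"): set(),                     # users still click links
    ("dmz", "internal"): {"smb-admin-share-copy"},  # segmentation works here
    ("internal", "internet"): set(),                # outbound DNS is unfiltered
}

@dataclass
class Attempt:
    src: str
    dst: str
    stage: str
    technique: str
    blocked: bool

def run_war_game():
    """Have the simulator in each zone 'attack' its peer and record outcomes."""
    results = []
    for (src, dst), blocked in BLOCKED_BETWEEN.items():
        for stage, technique in TECHNIQUES:
            results.append(Attempt(src, dst, stage, technique, technique in blocked))
    return results

for attempt in run_war_game():
    status = "BLOCKED" if attempt.blocked else "SUCCEEDED"
    print(f"{attempt.src:>8} -> {attempt.dst:<8} {attempt.stage:<16} "
          f"{attempt.technique:<24} {status}")

Every "SUCCEEDED" row in the resulting report is a gap between two zones at a specific kill chain stage, which is exactly the kind of enterprise risk the simulation is meant to surface.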

Breach simulations may not help you address the thousands of alerts your SOC team has to resolve every day, but they let you strategically simulate what could occur in your environment and identify the best way to respond to potential attackers. The benefit is that you can then choose the best possible compensating control to break the kill chain and stop attackers in their tracks (just as NISAC did with the flu pandemic).

For example, if you can’t stop users from clicking on links, and thus can’t prevent infiltration, you can compensate by preventing lateral movement through stringent segmentation and access control policies. Over time, as you proactively identify gaps and challenge your people, technology, and processes, you’ll improve your overall security. This is a different mindset: proactive and continuous rather than tactical and reactive.
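Continuing the hypothetical sketch from above, the output of such a war game can drive the choice of compensating control. The stage names, control options, and relative cost scores below are made up purely to illustrate the "break the chain where you can" logic from this example: if clicks can't be stopped, fall back to blocking lateral movement.

# Which kill chain stages the simulated attacker completed end to end.
successful_stages = ["infiltration", "lateral-movement", "exfiltration"]

# Hypothetical compensating controls: the stage each one breaks the chain at,
# plus a rough relative cost used to break ties.
controls = {
    "infiltration": ("user awareness training", 3),
    "lateral-movement": ("stringent segmentation and access control", 2),
    "exfiltration": ("outbound DNS filtering", 1),
}

def pick_control(stages, available):
    """Break the chain at the earliest stage we can; prefer cheaper controls."""
    candidates = [(stages.index(s), cost, name)
                  for s, (name, cost) in available.items() if s in stages]
    return min(candidates)[2] if candidates else None

print(pick_control(successful_stages, controls))
# If user behavior can't realistically be fixed, drop that option; the
# next-best place to break the chain is lateral movement.
del controls["infiltration"]
print(pick_control(successful_stages, controls))

The first call picks the earliest point in the chain that can be addressed; once the infiltration-stage option is ruled out, the same logic lands on segmentation and access control as the next-best place to break the chain.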

As we start a new year and face another 365 days of never-ending cybersecurity headaches, consider the "simulate, simulate, simulate" mantra in your cybersecurity strategy. The only way we improve is by challenging ourselves and putting ourselves in the shoes of the adversary: let’s simulate our adversary and increase our probability of success.

Danelle is vice president of strategy at SafeBreach. She has more than 15 years of experience bringing new technologies to market. Prior to SafeBreach, Danelle led strategy and marketing at Adallom, a cloud security company acquired by Microsoft. She was also responsible for ...