Over the long holidays in December (and thanks to the massive California storms) I had the chance to re-watch some great movies, including Apollo 13 - one of my all-time favorites. Apollo 13 is well directed and has a great cast (including the amazing Gary Sinise), but most importantly, it features brilliant engineering.
For those who are not familiar with the story, Apollo 13 was the seventh crewed mission in the Apollo program, and was intended to land on the moon. Apollo 13 launched on April 11, 1970 to little fanfare until, two days later, an oxygen tank exploded. The crew abandoned plans to land on the moon and instead focused on the new objective of returning safely to Earth despite malfunctioning equipment, limited power, loss of heat, and a lack of potable water.
In April 2015, Lee Hutchinson wrote an article about Apollo 13 in Ars Technica, analyzing what went wrong based on the expert perspective of Apollo flight controller Sy Liebergot. It’s a geeky but enlightening article covering everything you would ever want to know about oxygen tanks, lunar modules, command modules, flight parameters, and Apollo 13. I encourage you to read it. The most poignant part of the article was this:
“The thing that saved Apollo 13 more than anything else was the fact that the controllers and the crew had both conducted hundreds—literally hundreds—of simulated missions. Each controller, plus that controller’s support staff, had finely detailed knowledge of the systems in their area of expertise, typically down to the circuit level. The Apollo crews, in addition to knowing their mission plans forward and backward, were all brilliant test pilots trained to remain calm in crisis (or "dynamic situations," as they’re called). They trained to carry out difficult procedures even while under extreme emotional and physical stress. … The NASA mindset of simulate, simulate, simulate meant that when things did go wrong, even something of the magnitude of the Apollo 13 explosion, there was always some kind of contingency plan worked out in advance.”
In other words, simulations identify gaps and prepare teams for when sh*t hits the fan.
This is not just limited to NASA. In the fall of 2002, Congress mandated that the National Infrastructure Simulation and Analysis Center, or NISAC (officially founded in 1999 as a collaboration between two national laboratories, Sandia and Los Alamos), model disruptions to infrastructure - fuel supply lines, the electrical grid, food supply chains and more. After 9/11, Congress wanted to understand the impact of infrastructure disruptions – how much they might cost, how many lives would be lost, and how the government would respond.
In 2005, when the nation and the world were experiencing the bird flu crisis, NISAC was asked to simulate what a global pandemic would look like, and how best to respond. Based on simulations of complex economic, cultural, and geographic systems, a Sandia scientist named Robert Glass theorized that a pandemic like the bird flu "exhibits many similarities to that of a forest fire: You catch it from your neighbors." He demonstrated that high school students would be the biggest transmitters, and recommended thinning out contact among infected school-age kids by closing schools (rather than closing borders) as a better way to prevent the pandemic from spreading.
This is what breach or adversary simulations allow you to do in cybersecurity as well. Breach simulation is an emerging technology that mimics hacker breach methods to gain the hacker’s perspective. Simulators placed in various security zones and on endpoints play continuous war games against each other to challenge security controls and identify enterprise risks. Unlike vulnerability management systems, breach simulations are safe (simulators only attack one another), focus on the complete breadth of hacker techniques instead of just vulnerabilities, and showcase the kill chain impact.
Breach simulations may not help you address the thousands of alerts your SOC team has to resolve every day, but you’ll be able to strategically simulate what can occur in your environment, and identify the best option to respond to potential attackers. The benefit is that you can then choose the best possible compensating control to break the kill chain or stop the attackers in their tracks (just like NISAC and the flu pandemic).
For example, if you can’t stop users from clicking on links and thus prevent infiltration, you can compensate and prevent lateral movement via very stringent segmentation and access control policies. Over time, as you proactively identify gaps and challenge your people, technology and processes, you’ll be able to improve your overall security. This is a different mindset - the proactive and continuous versus the tactical and reactive.
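To make the idea concrete, here is a toy sketch of the kind of reasoning a breach simulation supports. Everything in it is hypothetical: the kill-chain stages, the control names, and the block probabilities are illustrative placeholders, not real product data or measured detection rates. The point is simply that simulating many attack runs under different compensating controls lets you compare how often the full chain completes - e.g., segmentation can lower breach success even when phishing still gets through.

```python
import random

# Toy kill chain: an attack must complete every stage to "breach".
# Stage names are illustrative, not a real attack taxonomy.
KILL_CHAIN = ["phishing", "lateral_movement", "exfiltration"]

# Hypothetical compensating controls and the (made-up) probability
# that each control blocks a given stage of the chain.
CONTROLS = {
    "none":          {"phishing": 0.0, "lateral_movement": 0.0, "exfiltration": 0.0},
    "user_training": {"phishing": 0.3, "lateral_movement": 0.0, "exfiltration": 0.0},
    "segmentation":  {"phishing": 0.0, "lateral_movement": 0.8, "exfiltration": 0.0},
}

def breach_succeeds(control, rng):
    """Simulate one attack run: True if every stage evades the control."""
    block = CONTROLS[control]
    return all(rng.random() >= block[stage] for stage in KILL_CHAIN)

def breach_rate(control, trials=10_000, seed=13):
    """Fraction of simulated attacks that complete the full kill chain."""
    rng = random.Random(seed)
    hits = sum(breach_succeeds(control, rng) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Compare controls by how well they break the chain overall.
    for control in CONTROLS:
        print(f"{control:13s} breach rate ~ {breach_rate(control):.2f}")
```

With no control the chain always completes, while the stringent-segmentation stand-in blocks most runs at the lateral-movement stage - mirroring the compensating-control argument above, just with invented numbers.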
As we start a New Year and face another 365 days of never-ending cybersecurity headaches, consider the "simulate, simulate, simulate" mantra in your cybersecurity strategy. The only way we improve is by challenging ourselves and putting ourselves in the shoes of the adversary – let’s simulate our adversary and increase our probability of success.
- CISO Playbook: Games Of War & Cyber Defenses
- CISO Playbook: Suit-up & Play Offense
- To Better Defend Yourself, Think Like A Hacker