BLACK HAT USA -- LAS VEGAS -- And you thought dealing with code was hard. You thought you were smart. Well, security researchers who hack physical systems need to know not only code, but also physics and chemistry, plumbing and engineering. Automation tools simply don't exist yet. One good attack may take a year to create. And when an exploit succeeds, the best they can do is make it look like an accident -- they can't slip away as if it never happened at all.
It's not a job for the faint of heart, but as the Internet of Things expands and smart cities become smarter, it's a job that becomes more important. Difficult though it is, a motivated, well-resourced attacker is willing to make the effort, and security pros need to know how to defend against it.
Jason Larsen, principal security consultant at IOActive, and Marina Krotofil, senior security consultant at the European Network for Cyber Security, know this all too well. Both specialize in cyber-physical attacks. At Black Hat today, Larsen presented "Remote Physical Damage 101: Bread-and-Butter Attacks" and Krotofil presented "Rocking the Pocket Book: Hacking Chemical Plant for Competition and Extortion." The pair will present a similar session together at DEF CON next week.
From Hacker To Control Engineer
"The problem is, once you get access [to the environment], it is the end of the IT world, and you are now a control engineer," says Krotofil, speaking before the conference in an interview with DarkReading. "Now it's become a completely different game. ... The difference is in complexity of knowledge, complexity of fields, and the interaction of those fields." That's part of the reason why developing an effective exploit takes months, even years, she says.
Plus, computing processes work more independently of one another than mechanical and chemical processes do. Computers are also easier to fool than physics: it may be tricky to make a PC believe that a Trojan is harmless, but it's far harder to repeal the laws dictating that what goes up must come down and that water freezes at 32 degrees Fahrenheit.
"If you tweak one thing, it tweaks something else," says Larsen (also speaking in an interview before the conference), explaining that altering the boiler here will definitely change something over there. And the more things the hacker throws out of whack, the more likely it will not only draw attention, but will cause the operators or the system to exercise the fail-safe shut-down mechanism -- an inconvenience but not a catastrophe. Says Larsen, the average hacker may assume "'ha-HA! Things will explode!' Well, they don't explode."
Well, sometimes they do. Usually it requires knowing many details of the target's specific environment, but Larsen's session helped make the job easier by pointing cyber-physical hackers and researchers to the first places to look -- the "bread and butter" he refers to in his session title.
He looked for the items that are the most common and the most easily overcome. For example: valves. While the valve on a kitchen sink can easily handle 30 pounds per square inch of water pressure, it won't react well if you drive that pressure up to a ton. Or, by simply moving enough hot air from one place to another, he says, you can cause a vessel full of steam to collapse.
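The steam-collapse scenario comes down to simple arithmetic: if the steam inside a sealed vessel condenses, the internal pressure drops toward a vacuum, and ordinary atmospheric pressure does the crushing from outside. A back-of-envelope sketch (the vessel dimensions and residual pressure here are illustrative assumptions, not figures from Larsen's talk):

```python
# Back-of-envelope: why a steam-filled vessel implodes when the steam condenses.
# Atmospheric pressure (~101 kPa) pushes inward; if condensation drops the
# internal pressure to, say, 10 kPa, the net inward load is enormous.

ATMOSPHERE_PA = 101_325        # standard atmospheric pressure, Pa (N/m^2)
internal_pa = 10_000           # assumed residual pressure after condensation, Pa

# Illustrative pipe section: 1 m diameter, 10 m long (projected side area).
diameter_m, length_m = 1.0, 10.0
projected_area_m2 = diameter_m * length_m

net_pressure_pa = ATMOSPHERE_PA - internal_pa
net_force_n = net_pressure_pa * projected_area_m2

print(f"Net inward force: {net_force_n / 1000:.0f} kN "
      f"(~{net_force_n / 9.81 / 1000:.0f} tonnes-force)")
```

On those assumed numbers the atmosphere presses on the pipe section with roughly 913 kN, on the order of 90 tonnes-force -- more than enough to squish a big pipeline flat.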
"Big pipelines squished flat," says Larsen.
Damn Vulnerable Chemical Process: A New Model
The risk gets scarier as buildings and cities rely more on computer systems. Some physical devices only use electronics as an added benefit -- they may collect or share more data, for example -- but others -- the cyber-physical devices -- cannot function mechanically without input from the computer.
Either way, another challenge for the physical and cyber-physical hacker is that simply finding a vulnerability in the code isn't enough. "There must [also] be vulnerability in the process," says Krotofil. If the physical process can carry on safely even without correct input from the computer, the exploit doesn't work.
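Krotofil's point can be illustrated with a toy control loop (an entirely hypothetical sketch, not code from any real plant or framework): even when an attacker fully controls the sensor value the controller sees, a mechanical safeguard in the process itself can mean there is no dangerous state to reach -- the code-level compromise finds no matching process vulnerability.

```python
# Toy illustration (hypothetical): a compromised sensor alone doesn't break
# the physical process if the process itself has no exploitable weakness.

def simulate(spoof_sensor: bool, relief_valve: bool, steps: int = 200) -> float:
    level, setpoint, capacity = 50.0, 60.0, 100.0
    for _ in range(steps):
        reading = 0.0 if spoof_sensor else level   # attacker reports "tank empty"
        if reading < setpoint:                     # controller keeps pumping
            level += 1.0
        level -= 0.2                               # constant outflow demand
        if relief_valve and level > capacity:      # mechanical safeguard:
            level = capacity                       # excess drains off harmlessly
    return level

# Sensor spoofed, but relief valve present: the level stays capped at 100.0.
print(simulate(spoof_sensor=True, relief_valve=True))
# The same code-level compromise without the safeguard: the tank overfills.
print(simulate(spoof_sensor=True, relief_valve=False))
```

The spoofed sensor is the "IT" vulnerability; the missing relief valve is the process vulnerability. Only when both exist does the attack do physical damage.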
Yet, while vulnerability scanners (and the black market bug bounty business) make it relatively easy to find holes in applications, the same tools don't exist for complex processes and environments like, for example, a chemical plant.
Krotofil is trying to address that problem. Today, she released Damn Vulnerable Chemical Process, an open-source framework for cyber-physical experimentation that provides two realistic models of chemical plants to test against.
Some people in the cybersecurity community may dismiss a mere modeling framework, she says, but "What people don't realize is, in chemical engineering, everybody works in models ... The difference between hacking a car and hacking a chemical plant is you can show you can control a car pretty easily."
Usually, says Krotofil, modeling is extremely expensive, so most models are kept proprietary. She has made hers open source so that it can play a role for physical systems similar to the one the Metasploit project plays for software.
False Forensic Trails
There are plenty of cyberattacks that go undetected for months or years -- some are never discovered at all. Yet if an attacker finally perfects an exploit, and a chemical plant does have a meltdown or a pipeline does burst, there is no way to pretend that something didn't go horribly wrong.
So if a cyber-physical hacker wants to get away with it, they have to create a false forensic footprint to put investigators on the wrong trail.
As Larsen explains, "Do it only during one guy's shift, so they think 'Phil did it.'" Or choose your times and locations to make the maintenance team look incompetent, or strike only on rainy days or only on hot days to make them think the equipment is oversensitive, he says.
Or attackers may social-engineer the human operators by spoofing output or staging a diversion to distract them from what is really happening, so that when investigators question the operators, their answers lead the investigation even further from the truth.
With so many challenges standing in the way, it's almost hard to believe that physical attacks of this scale are really happening in the wild, but Krotofil and Larsen believe it.
"Most of the 'accidents' are kept closed," says Krotofil. "Things are happening ... it doesn't mean we have to sit in our chair and wait for another 120 years before we do something.
"If we know what it takes to attack the processes," she says, "then we may know what it takes to defend them."