Dark Reading is part of the Informa Tech Division of Informa PLC


Vulnerabilities / Threats

8/6/2015
07:40 PM

Why Cyber-Physical Hackers Have It Harder Than You

Before you pout about having to learn a new infosec application, remember you don't need to also know physics, chemistry, engineering and how to make a pipeline explosion look like an accident.

BLACK HAT USA -- LAS VEGAS -- And you thought dealing with code was hard. You thought you were smart. Well, security researchers who hack physical systems not only need to know code; they need to know physics and chemistry, plumbing and engineering. Automation tools simply don't exist yet. One good attack may take a year to create. And when an exploit succeeds, they can only try to make it look like an accident, not slip away as if it never happened at all.

It's not a job for the faint of heart, but as the Internet of Things expands and smart cities become smarter, it's a job that becomes more important. Difficult though it is, a motivated, well-resourced attacker is willing to make the effort, and security pros need to know how to defend against it.

Jason Larsen, principal security consultant at IOActive, and Marina Krotofil, senior security consultant at the European Network for Cyber Security, know this all too well. Both specialize in cyber-physical attacks. At Black Hat today, Larsen presented "Remote Physical Damage 101: Bread-and-Butter Attacks" and Krotofil presented "Rocking the Pocket Book: Hacking Chemical Plant for Competition and Extortion." The pair will present a similar session together at DEF CON next week.

From Hacker To Control Engineer

"The problem is, once you get access [to the environment], it is the end of the IT world, and you are now a control engineer," says Krotofil, speaking before the conference in an interview with Dark Reading. "Now it's become a completely different game. ... The difference is in complexity of knowledge, complexity of fields, and the interaction of those fields." That's part of the reason why developing an effective exploit takes months, even years, she says.

Plus, computing processes work more independently of one another than mechanical and chemical processes do. Computers are easier to fool than physics: it may be tricky to make a PC believe that a Trojan is harmless, but it's harder still to repeal the laws that say what goes up must come down and that water freezes at 32 degrees Fahrenheit.

"If you tweak one thing, it tweaks something else," says Larsen (also speaking in an interview before the conference), explaining that altering the boiler here will definitely change something over there. And the more things the hacker throws out of whack, the more likely it will not only draw attention, but will cause the operators or the system to exercise the fail-safe shut-down mechanism -- an inconvenience but not a catastrophe. Says Larsen, the average hacker may assume "'ha-HA! Things will explode!' Well, they don't explode."
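To make the coupling concrete, here is a toy simulation (all numbers and variable names hypothetical, not from either talk): pushing extra heat into a boiler drags its pressure along with it, and a fail-safe interlock trips a shutdown long before anything dramatic happens.

```python
# Toy model of a coupled process (hypothetical numbers): temperature and
# pressure move together, and a safety interlock shuts the process down
# well before any "explosion."

def simulate(heat_input_kw, steps=50):
    temp_c = 150.0  # starting boiler temperature
    for _ in range(steps):
        # Heating minus losses to ambient (25 C)
        temp_c += 0.01 * heat_input_kw - 0.005 * (temp_c - 25)
        # Pressure is a coupled variable the attacker never touched directly
        pressure_bar = 1.0 + 0.04 * (temp_c - 100)
        if pressure_bar > 12.0:  # fail-safe setpoint
            return "SAFETY SHUTDOWN", temp_c, pressure_bar
    return "running", temp_c, pressure_bar

print(simulate(heat_input_kw=200))   # normal operation: stays running
print(simulate(heat_input_kw=2000))  # attacker cranks the heat: interlock trips
```

The shape of the behavior is the point, not the numbers: it is the coupled variable, not the one the attacker tweaked, that draws attention and triggers the shutdown.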

Well, sometimes they do. Usually it requires knowing many details of the target's specific environment, but Larsen's session helped make the job easier by pointing cyber-physical hackers and researchers to the first places to look -- the "bread and butter" he refers to in his session title.

He looked for the items that were the most common and the most easily overcome. For example: valves. The valve on a kitchen sink can easily handle 30 pounds of water pressure, but it won't react well if you increase that pressure to a ton. Or, by simply moving enough hot air from one place to another, he says, you can cause a vessel full of steam to collapse.

"Big pipelines squished flat," says Larsen.
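The steam-vessel collapse Larsen describes is ordinary physics. A back-of-the-envelope estimate (standard ideal-gas arithmetic, not from his talk) shows how cooling the steam in a sealed vessel leaves the atmosphere pushing inward on the shell:

```python
# Back-of-the-envelope estimate: cool the steam in a sealed vessel and the
# interior pressure drops, so the atmosphere outside does the crushing.

ATM_PA = 101_325.0  # atmospheric pressure, Pa

def pressure_after_cooling(p_initial_pa, t_initial_k, t_final_k):
    """Isochoric cooling of an ideal gas: P2 = P1 * (T2 / T1)."""
    return p_initial_pa * (t_final_k / t_initial_k)

# Steam at 100 C (373 K) cooled to 25 C (298 K), treated as an ideal gas
p2 = pressure_after_cooling(ATM_PA, 373.15, 298.15)
net_inward_pa = ATM_PA - p2
print(f"interior pressure: {p2 / 1000:.0f} kPa")
print(f"net inward load:   {net_inward_pa / 1000:.0f} kPa on every square metre of shell")
```

And that estimate ignores condensation, which makes it far worse: liquid water occupies roughly 1/1,600th the volume of the steam it came from, so a vessel whose steam condenses can approach a vacuum inside while the full weight of the atmosphere presses in from outside.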

Damn Vulnerable Chemical Process: A New Model

The risk gets scarier as buildings and cities rely more on computer systems. Some physical devices only use electronics as an added benefit -- they may collect or share more data, for example -- but others -- the cyber-physical devices -- cannot function mechanically without input from the computer. 

Either way, another challenge for the physical and cyber-physical hacker is that simply finding a vulnerability in the code isn't enough. "There must [also] be vulnerability in the process," says Krotofil. If the physical processes can continue along even without the correct input from the computer, then the exploit doesn't work.
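A sketch of that distinction, using a hypothetical tank process (not one of Krotofil's models): the attacker spoofs a level sensor so the controller never stops the inlet pump, but whether that code-level win becomes physical damage depends on the process itself.

```python
# Hypothetical tank process: the attacker spoofs the level sensor so the
# controller keeps the inlet pump running. Whether that matters depends on
# the *process*, not the code: a tank with a passive overflow drain is not
# vulnerable, even though the control code has been fully subverted.

def run_tank(has_overflow_drain, spoofed_level=0.2, capacity=100.0, steps=200):
    level = 50.0  # actual liquid level
    for _ in range(steps):
        # Controller only sees the spoofed reading, so it keeps filling
        if spoofed_level < 0.8:
            level += 1.0  # pump in
        if has_overflow_drain and level > 90.0:
            level = 90.0  # physics bails the process out
        if level > capacity:
            return "OVERFLOW"  # the code bug becomes a process failure
    return "contained"

print(run_tank(has_overflow_drain=True))   # process has no vulnerability
print(run_tank(has_overflow_drain=False))  # same exploit, damage results
```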

Yet, while vulnerability scanners (and the black market bug bounty business) make it relatively easy to find holes in applications, the same tools don't exist for complex processes and environments like, for example, a chemical plant.

Krotofil is trying to address that problem. Today, she released Damn Vulnerable Chemical Process, an open-source framework for cyber-physical experimentation that provides two realistic models of chemical plants to test upon.

Some people in the cybersecurity community may dismiss a mere modeling framework, she says, but "What people don't realize is, in chemical engineering, everybody works in models ... The difference between hacking a car and hacking a chemical plant is you can show you can control a car pretty easily."

Modeling is usually extremely expensive, says Krotofil, so most models are kept proprietary. She has made hers open source so it can do for physical systems what the Metasploit project does for software.

False Forensic Trails

There are plenty of cyberattacks that go undetected for months, even years; some are never detected at all. Yet if an attacker finally perfects an exploit, and a chemical plant does melt down or a pipeline does burst, there is no way to pretend that something didn't go horribly wrong.

So if a cyber-physical hacker wants to get away with it, they have to create a false forensic footprint to put investigators on the wrong trail.

As Larsen explains, "Do it only during one guy's shift, so they think 'Phil did it.'" Or choose your times and locations to make the Maintenance team look incompetent, or only act on rainy days or only on hot days to make them think the equipment is oversensitive, he says.

Or attackers may social-engineer the human operators, spoofing output or creating a diversion to distract them from what is really happening, so that when investigators question them, their answers lead the investigation even further from the truth.

With so many challenges standing in the way, it's almost hard to believe that physical attacks of this scale are really happening in the wild, but Krotofil and Larsen believe it.

"Most of the 'accidents' are kept closed," says Krotofil. "Things are happening ... it doesn't mean we have to sit in our chair and wait for another 120 years before we do something.

"If we know what it takes to attack the processes," she says, "then we may know what it takes to defend them."

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad ...
Comments
Some Guy, User Rank: Moderator
8/7/2015 | 9:57:28 AM
But only one, not all
As Stuxnet clearly showed, you only have to have a few experts working with a bevy of code monkeys. Moreover, surveillance is part of a persistent threat and can discover the information to fill that gap. Finally, I would submit you don't really have to know all that much to wreak havoc. Script kiddies launched the ILOVEYOU worm to disastrous effect. The knowledge required goes up only to the extent that you care about avoiding detection, diverting blame, or the precision desired in an attack.