S4x15 Conference — The real threat to a power or manufacturing plant isn't the latest vulnerability or malware variant.
"If you only consider hackers, you don’t have to be concerned that much. They won't be able to take down a power grid or blow up chemical facilities," says Ralph Langer, founder of Langner Communications and a top Stuxnet expert. The danger is when attackers have an understanding of the physical and engineering aspects of the plant or site they are targeting, he says.
"We have not seen a lot of cyber-physical attacks in the past to actually cause much damage. That requires skillsets that have nothing to do with hacking," says Langner.
Stuxnet, of course, was the first known example of a cyber-physical attack. Its mission was to derail the uranium enrichment process at Iran's Natanz nuclear facility by sabotaging the associated centrifuges.
"So we can conclude at this time that there are organizations out there already who understand this and have mastered this [cyber-physical attack model], more than like nation-states," Langner says. But that knowledge ultimately will spread more widely, he says.
Langner predicts exploit tools will emerge for attacking power grids, for example, as the methodologies known by nation-states proliferate. "That's what concerns me."
Bryan Singer, principal investigator at Kenexis Security Corp., teamed up with chemical engineer Lily Glick at his company to demonstrate just what it would take to execute a remote physical attack on a power plant or manufacturing plant floor. "Software vulnerabilities are of no use if you want the maximum scenario. You need to know the engineering protocols" of the targeted site, Singer said here today in a presentation.
An attacker would need to have some knowledge of the control systems running in the plant and how the process -- such as vodka distillation, which Singer and Glick featured as an example in their presentation -- works. So process control operators can't merely rely on vulnerability assessment to secure these systems, according to Singer.
That doesn't mean an attacker needs to actually have engineering expertise, however. The attacker could glean intelligence from open-source information on ICS products as well as acquire inside intelligence about the plant itself, either by stealing plant engineering diagrams or information remotely, or even by schmoozing a plant engineer.
"You could social-engineer an engineer," notes Chris Sistrunk, a senior consultant in the ICS practice at Mandiant, a FireEye company.
Other methods of reconnaissance, such as surveillance, or attacking the plant's third-party suppliers, such as systems integrators or vendors, are possible, Singer says. RFPs are also a treasure trove of intel, he says. "They're never going to touch one of those systems until they absolutely have to -- to decrease their chances of getting caught," he says.
The first phase of the actual attack could be compromising a workstation to mimic HMI traffic, for example, he says.
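The article doesn't specify which protocol such mimicked HMI traffic would use, but Modbus/TCP is a common, unauthenticated choice in these environments. As a purely illustrative sketch (the register address and values here are hypothetical), an attacker on a compromised workstation could construct a standard "Write Single Register" frame that is indistinguishable on the wire from a legitimate HMI command:

```python
import struct

def modbus_write_single_register(transaction_id: int, unit_id: int,
                                 register: int, value: int) -> bytes:
    """Build a Modbus/TCP 'Write Single Register' (function code 0x06) frame.

    Modbus/TCP has no authentication, so a frame built this way from a
    compromised workstation looks like ordinary HMI-to-PLC traffic.
    """
    # PDU: function code 0x06, register address, value to write
    pdu = struct.pack(">BHH", 0x06, register, value)
    # MBAP header: transaction id, protocol id (0), length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Hypothetical example: set holding register 0x0010 to 500 on unit 1.
frame = modbus_write_single_register(transaction_id=1, unit_id=1,
                                     register=0x0010, value=500)
```

Sending the 12-byte frame over a TCP connection to port 502 would complete the write; the point is that no software vulnerability is involved, only knowledge of which register controls what.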
Singer showed how an attacker could mess with pressure release valves to release more steam from the distillation columns, for instance, or close off the valves to decrease the steam, both of which would have a financial impact on the plant.
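The financial impact Singer describes follows directly from the process physics: steam vented through a forced-open relief valve never does useful work in the column. The toy model below (not from the presentation; all constants are hypothetical) shows how the loss scales linearly with how far the valve is held open:

```python
def throughput_loss_per_hour(valve_open_frac: float,
                             nominal_steam_kgph: float = 1000.0,
                             product_per_kg_steam: float = 0.2,
                             price_per_unit: float = 3.0) -> float:
    """Toy estimate of hourly revenue lost when a relief valve is forced open.

    Steam vented through the relief valve never reaches the distillation
    column, so product output (and revenue) drops in proportion. All
    constants are hypothetical placeholders, not real plant figures.
    """
    frac = max(0.0, min(1.0, valve_open_frac))  # clamp to [0, 1]
    vented_steam = nominal_steam_kgph * frac
    lost_product = vented_steam * product_per_kg_steam
    return lost_product * price_per_unit

# Holding the valve half open in this toy model costs $300/hour.
loss = throughput_loss_per_hour(0.5)
```

Even a modest, sustained manipulation that never trips a safety system can therefore impose a steady cost, which is exactly the "misbehave while remaining operational" pattern discussed below.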
ICS/SCADA environments are known for being well-prepared for physical safety events such as fires or explosions -- but mostly for events caused by random hardware malfunctions or failures, not by cyberattacks.
"The way we used to approach hazard analysis misses the malicious component," Langner says. "This opens up a completely new state" of hardware failure, he says. Namely, malicious attackers are more likely to make the process control systems "misbehave" while remaining operational -- much like Stuxnet aimed to do.
The key, Langner says, is to identify any possible direct paths from the cyber side to the physical side of the plant, such as a smart sensor, for example. "This has nothing to do with a buffer overflow in a web server," he says. "If you're able to compromise these 'waypoints,' it's an entry point to physical control."
He points to the MKS PR4000 calibration system attached to pressure sensors in the Natanz plant that tracked the pressure readings of the centrifuges. Langner theorizes the attackers behind Stuxnet manipulated those calibration systems so the plant workers didn't see the real pressure readings that would have flagged the problems with the devices early on.
"The MKS manual for the product shows how you can calibrate the sensors, by sending [a] command to that box. If you simply use a malicious calibration profile, the sensor never shows that it's above the threshold," he says. "I'm very confident this did happen" in the Stuxnet attack.
Langner says the sophisticated attacker would know when to attack as well to ensure maximum impact. "They would consider certain points in time when the attack would be more effective, or the process or facility more vulnerable," he says, such as when first powering up a nuclear power plant.
"We need to start thinking beyond attackers. If we consider professional engineers working on it, this is how they would go about it," says Langner, who on Friday will give a presentation on this. "I call it cyber-physical attack engineering... We'd better figure it out quickly for the defense."Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ... View Full Bio