Forget 'killer robots': researchers demonstrate how collaborative robots, or 'cobots,' can be hacked and made dangerous.
Dozens of robotics and artificial intelligence vendor executives worldwide earlier this week urged the United Nations to ban "lethal autonomous weapons," aka killer robots.
But while even Elon Musk was among the execs expressing worries over killer robots, it's collaborative and industrial robots that are already under the hacker microscope. Researchers today released details on glaring and serious security flaws in popular robots and robot-control software used in businesses, industrial sites, and homes. Attackers could exploit the flaws to take control of a robot's movements and operations in order to spy, cause physical damage, or endanger humans.
IOActive researchers Lucas Apa and Cesar Cerrudo, who in early March said they had found 50 vulnerabilities in these collaborative robots, aka cobots, today released proof-of-concept attacks that demonstrate the kind of damage rogue robots could wreak.
They tested robots and robotics control software products from Softbank Robotics (NAO and Pepper robots), UBTECH Robotics (Alpha 1S, Alpha 2 robots), Robotis (OP2 and THORMANG3 robots), Universal Robots (UR3, UR5, UR10 robots), Rethink Robotics (Baxter and Sawyer robots), and Asratec Corp.
Apa, who is a senior security consultant with IOActive, and Cerrudo, IOActive's CTO, first reported the flaws to the affected robotics vendors about seven months ago. They found authentication flaws, insecure transport mechanisms in the robots' communications protocols, and physical architecture flaws, as well as use of the open-source Robot Operating System (ROS) platform, which contains known vulnerabilities.
Rethink Robotics appears to have fixed the flaws in its February 2017 updates, according to the researchers, although they didn't test the new versions in the lab.
Apa and Cerrudo exploited authentication, memory corruption, insecure transport protocols, and physical architecture weaknesses to hack Universal Robots' UR collaborative robotic arm devices. They were able to disable the robot's safety protection settings, which would let an attacker manipulate the robot such that it could cause injury.
The researchers say the vulnerabilities have yet to be patched by UR, and its May software update did not address the flaws. UR general manager Douglas Peterson said his company is "aware" of IOActive's report.
"We have a constant focus on our product improvement and industrial hardening for the sake of our customers. This includes monitoring any potential vulnerability, not just cybersecurity. Our products undergo rigorous safety certification of our development process, design, manufacturing and functionality of our collaborative robots," he said in an emailed response to a query from Dark Reading. "In general, we have a strong focus on the safety and security features of our products and will continue to do so moving forward."
Apa says he and Cerrudo found features in various robot brands that let them execute code remotely, as well as memory corruption issues and hardware flaws such as exposed ports that are difficult to retrofit. "These [hardware] vulnerabilities are hard to mitigate on robots because of the way the hardware is designed, and they are impossible to be fixed. You need a new version of those robots," he says.
Robots are increasingly becoming "smarter" and, in some cases, more human-like, which is helping drive their popularity and usability. IDC estimates that worldwide spending on robotics will reach $188 billion in 2020. Robots today are deployed mostly in manufacturing, but the consumer and healthcare sectors are next in line to adopt these devices.
Security researchers also increasingly are testing the security limits of robots. Stefano Zanero, associate professor at Politecnico di Milano, and his fellow researchers hacked an ABB Robotics IRB140 industrial robot earlier this year via a remote code bug they found in its controller software. They fed the robot a phony configuration file that altered its parameters for drawing a straight line. The tiny, 2mm data discrepancy resulted in the robot drawing a slightly crooked line, a deviation that could result in a product recall or major product defect in a manufacturing process.
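The impact of such a tiny tampered parameter is easy to illustrate with a back-of-the-envelope sketch. The numbers and parameter names below are hypothetical, not ABB's actual configuration keys; the sketch only shows how a sub-millimeter per-step offset accumulates into the kind of 2mm end-point deviation the researchers described:

```python
# Illustrative only: shows how tampering one motion parameter by a
# tiny amount turns an intended straight line into a crooked one.
# Parameter names here are hypothetical, not ABB's real config keys.

def draw_line(num_points, length_mm, y_offset_per_point_mm=0.0):
    """Return (x, y) points for a nominally straight horizontal line.

    y_offset_per_point_mm models a tampered calibration value that
    should be 0.0 on a healthy controller.
    """
    step = length_mm / (num_points - 1)
    return [(i * step, i * y_offset_per_point_mm) for i in range(num_points)]

straight = draw_line(11, 100.0)                             # intended path
crooked = draw_line(11, 100.0, y_offset_per_point_mm=0.2)   # tampered: 0.2 mm/step

# Total deviation at the end of the 100 mm line:
deviation_mm = crooked[-1][1] - straight[-1][1]
print(f"end-point deviation: {deviation_mm:.1f} mm")  # prints 2.0 mm
```

A deviation that small is invisible to the eye on the factory floor, which is exactly why this class of sabotage could slip through until a downstream quality check, or a customer, catches it.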
Zanero says industrial robot vendors tend to be more security-minded than those in the cobot space. Downtime can cost tens of thousands of dollars per minute in a hack or disruption, he says, which makes these robot vendors more likely to be responsive to security issues.
"They [ABB] responded quickly. They issued a patch in a timely fashion," says Zanero of his team's industrial robot hack.
Meanwhile, Apa and Cerrudo say hacking the collaborative robots didn't always require their rooting out new vulnerabilities. "Some of the robots do not require any exploits. We're just using vendor-supplied tools to manipulate the robot. You can use that to send it an action to move," Cerrudo notes.
In the UR proof-of-concept, they exploited six vulnerabilities in UR's family of UR3, UR5, and UR10 robots to remotely change the safety and parameter settings so that the robot arm swung around wildly, violently enough to injure a nearby human.
They used an authentication vulnerability in the UR Dashboard Server for the robot system, and then a stack-based buffer overflow in its Modbus TCP service, which allowed them to send commands as root. Next they modified the safety settings, namely the robot joints' movement limits and I/O safety values, disabling emergency sensors. They injected their own malicious code to reset the robot's settings.
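IOActive has not published exploit code, but the attack surface is visible in the Modbus TCP frame format itself: every request carries a length field in its MBAP header that the receiving service must validate before copying the payload, and a service that copies an attacker-controlled payload into a fixed-size stack buffer is overflow-prone. A minimal sketch of building a well-formed frame, using only the standard Modbus layout (nothing here is UR-specific or an actual exploit):

```python
import struct

def modbus_tcp_frame(transaction_id, unit_id, function_code, payload):
    """Build a Modbus TCP request: 7-byte MBAP header + PDU.

    MBAP header: transaction id (2 bytes), protocol id (2 bytes,
    always 0), length (2 bytes, = unit id byte + PDU length),
    unit id (1 byte). A service that trusts the attacker-supplied
    length, or copies the payload into a fixed-size stack buffer
    without checking it, is exactly the overflow scenario the
    researchers describe.
    """
    pdu = bytes([function_code]) + payload
    header = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return header + pdu

# A legitimate-looking request: read 2 holding registers at address 0
frame = modbus_tcp_frame(1, 0xFF, 0x03, struct.pack(">HH", 0, 2))
print(frame.hex())
```

The same builder with an oversized `payload` is all a fuzzer or attacker needs to probe how a target service handles frames longer than its internal buffers.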
The researchers released video of both the UR hack as well as humorous yet chilling attacks on home/office robots: they showed how UBTech's Alpha 2 humanoid robot and SoftBank Robotics' NAO robot could be hacked and controlled by an attacker. They wrote a Python script-based attack that turns the friendly and helpful Alpha 2 into a cackling "Chucky," who they demonstrated slashing a tomato with a screwdriver.
They also wrote a script that allows an attacker to move SoftBank Robotics' NAO robot's camera head and microphone for spying purposes.
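The researchers' spying script isn't public, but the underlying idea is simple: NAO's motion interface accepts named joints and target angles, so a script only has to sweep the head joints that carry the camera and microphones. The sketch below shows just that steering logic as self-contained Python; the joint limits are approximate published NAO values, and a real attack would push these angles through the vendor's own control API rather than this stand-in function:

```python
# Sketch of the head-steering logic a spying script would need.
# Joint limits below are approximate NAO values in radians; the
# actual attack would transmit these angles via SoftBank's own
# motion API, which the researchers abused as supplied.

NAO_HEAD_LIMITS = {
    "HeadYaw": (-2.0857, 2.0857),    # left/right sweep
    "HeadPitch": (-0.6720, 0.5149),  # up/down sweep
}

def steer_head(joint, target_rad):
    """Clamp a requested angle to the joint's range and return the
    (joint, angle) command a control script would transmit."""
    lo, hi = NAO_HEAD_LIMITS[joint]
    return joint, max(lo, min(hi, target_rad))

# Pan the camera across the room in five steps
sweep = [steer_head("HeadYaw", yaw) for yaw in (-3.0, -1.0, 0.0, 1.0, 3.0)]
```

Because the camera and microphones ride on the head, sweeping two joints is enough to surveil a room, which is the privacy point Cerrudo makes below.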
"We wanted to show privacy issues. How a webcam and mike can walk around the house and get everything," for example, Cerrudo says. "These could be in an office or a store.
"Robots are computers with legs and they are a lot more powerful because they can 'see,'" for instance, he says.
"Our demos can be funny, but this is something to be taken very seriously," Cerrudo says. Robots can be "really dangerous when they are hacked."
Not Real Yet
The researchers say there's no evidence of any real-world robot hacks just yet, and their goal is to stay a few steps ahead of attackers. Robots aren't yet a mainstream phenomenon in businesses and homes, either, but they are becoming popular, the researchers say.
Politecnico di Milano's Zanero says the threats vary among different robot sectors. "Devices in consumer [robotics] can be used for some limited forms of espionage and surveillance … and can present an internal point of entry for exploitation," he says. Industrial robots, meanwhile, are at risk of sabotage (think product defects) and ransomware attacks where attackers can cash in on the value of a manufacturing operation's downtime, according to Zanero.
Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...