Researchers Fool Smart Car Camera with a 2-Inch Piece of Electrical Tape
McAfee researchers say they were able to get a Tesla to autonomously accelerate by tricking its camera platform into misreading a speed-limit sign.
February 19, 2020
Operators of some older Tesla vehicles might be surprised to learn that a single two-inch strip of black electrical tape is all it takes to trick the camera sensor in their cars into misreading a 35-mph speed-limit sign as an 85-mph sign.
Researchers at McAfee who discovered the issue said they were able to get a Tesla, equipped with version EyeQ3 of the Mobileye camera platform, to autonomously accelerate to 50 mph above the speed limit.
The hack — which involved extending the middle of the "3" on the traffic sign with black tape — appears to only work on Teslas equipped with Mobileye version EyeQ3 (Tesla hardware Pack 1), according to McAfee. Attempts by the researchers to re-create the attack on Tesla models with the latest version of the Mobileye camera did not work. The newest Teslas no longer implement Mobileye technology, and they don't appear to support traffic sign recognition, McAfee said.
"We are not trying to spread fear here and saying that attackers are likely going to be driving cars off the road," says Steve Povolny, head of McAfee Advanced Threat Research. A Tesla model with the particular Mobileye version will reliably misinterpret the speed limit sign and attempt to accelerate to the misclassified speed limit if the driver has engaged traffic-aware cruise control, Povolny says. But the likelihood of that happening in a real-life situation without the driver becoming aware of the issue and taking control of the vehicle is remote, he says.
The real goal of the research is to raise awareness of some of the nontraditional threat vectors that are emerging with the growing integration of artificial intelligence (AI) and machine-learning (ML) capabilities in modern technologies. At the moment, hacks like these are still in the academic realm.
"If we project 10 to 20 years into the future, at some point these issues are going to be become very real," Povolny says. "If we have completely autonomous vehicles and computing systems that are making medical diagnosis without human oversight, we have a real problem space that is coming up."
Broader Research
McAfee's research involving Mobileye is part of a broader study the company is conducting into so-called "model hacking," or adversarial machine learning. The goal is to see whether weaknesses present in current-generation ML algorithms can be exploited to trigger adverse results. The Berryville Institute of Machine Learning (BIML) has classified adversarial attacks as one of the biggest risks to ML systems. In a recent paper, the think tank described adversarial attacks as being designed to fool an ML system by feeding it malicious input containing very small changes to the original data.
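In code, BIML's description maps onto the classic "fast gradient sign" construction: nudge every pixel a tiny amount in whichever direction most increases the classifier's error. The PyTorch sketch below is illustrative only; the model, image, and label are hypothetical stand-ins, not the systems McAfee or BIML studied.

```python
# Minimal sketch of an adversarial perturbation of the kind BIML describes.
# Assumes a differentiable classifier `model`, a batched input `image` with
# pixel values in [0, 1], and its true class index `label` -- all
# hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast Gradient Sign Method: shift each pixel slightly in the
    direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # "Very small changes to the original data," per BIML's description
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

An epsilon of 0.01 changes each pixel by at most 1% of its range, which is typically imperceptible to a person but can be enough to flip the model's prediction.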
In the past, researchers have shown how an AI-powered image-classification system can be tricked into misinterpreting a stop sign as a speed-limit sign using a few pieces of strategically placed tape. Before the hack involving Mobileye cameras, McAfee researchers found they could use a few pieces of tape to get an in-house image-classification system to misinterpret a stop sign as an added-lane sign. They also discovered they could trick the classifier into misreading speed-limit signs.
The researchers wanted to find out whether the same techniques could be used to trick a proprietary system. They focused on Mobileye because the company's cameras are currently deployed in some 40 million vehicles. In some vehicles, the cameras are used to determine the speed limit and feed that data into autonomous-driving or driver-assist systems.
Initially, the researchers used four stickers on the speed-limit sign to confuse the camera and found they could consistently fool the system into reading a speed limit different from the one actually posted. They kept reducing the number of stickers until they discovered that all they really needed was a single piece of tape.
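That winnowing step can be expressed as a simple greedy search. The Python sketch below is a hypothetical reconstruction, not McAfee's actual tooling: `classifier` is assumed to be any callable that returns a label for an image, and stickers are modeled as black rectangles pasted onto a NumPy image.

```python
# Hypothetical reconstruction of the sticker-reduction step: start with
# several patches and greedily drop any patch the attack doesn't need.

def apply_patches(image, patches):
    """Paste black rectangles (y, x, height, width) onto a copy of the
    image, a 2-D NumPy array with pixel values in [0, 1]."""
    out = image.copy()
    for y, x, h, w in patches:
        out[y:y + h, x:x + w] = 0.0
    return out

def minimize_patches(classifier, sign_image, patches, target_label):
    """Keep removing patches as long as the sign is still misread."""
    kept = list(patches)
    for patch in list(kept):
        trial = [p for p in kept if p != patch]
        if classifier(apply_patches(sign_image, trial)) == target_label:
            kept = trial  # this patch wasn't needed; the attack still works
    return kept
```

Run against McAfee's scenario, a loop like this would start from the four stickers and, as long as the misclassification holds with fewer, converge toward the single strip of tape.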
"What we have done is trigger some weaknesses that are often inherent in all types of machine-learning systems and the underlying algorithms," Povolny says.
The algorithms used by the Mobileye cameras, for instance, are trained very specifically on a set of data they expect to see, he says, such as known traffic signs or objects in the environment. But that training can often leave gaps in the system's ability to identify unknown or even slightly nonstandard input. "We basically leverage those gaps or blind spots in the algorithms themselves to cause them to misclassify," Povolny says.
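A deliberately simple toy makes the point. The nearest-template matcher below knows only a clean "3" and a clean "8" as 5x3 bitmaps, which are hypothetical stand-ins for real sign imagery; because it has no notion of a taped digit, an altered glyph is forced onto whichever known class it most resembles.

```python
# Toy illustration of a classifier's blind spot: every input, however
# nonstandard, gets mapped onto some known class -- never "unknown."
import numpy as np

THREE = np.array([[1, 1, 1],
                  [0, 0, 1],
                  [1, 1, 1],
                  [0, 0, 1],
                  [1, 1, 1]], dtype=float)
EIGHT = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]], dtype=float)
templates = {"SPEED LIMIT 35": THREE, "SPEED LIMIT 85": EIGHT}

taped = THREE.copy()
taped[:, 0] = 1.0  # one strip of "tape" down the digit's left edge

# Nearest-template classification: the taped 3 now matches the 8 exactly
best = min(templates, key=lambda k: np.abs(templates[k] - taped).sum())
print(best)  # SPEED LIMIT 85
```

A real neural network's decision boundary is far subtler than a template match, which is why the physical attack needed only a small extension of the "3" rather than a wholesale reshaping of the digit.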
According to McAfee, it informed Tesla and Mobileye of its research in September and October 2019, respectively. "Both vendors indicated interest and were grateful for the research but have not expressed any current plans to address the issue on the existing platform," McAfee said. "Mobileye did indicate that the more recent versions of the camera system address these use cases."