While research illustrates some sly threats, experts say attackers will likely focus on data exposure and finding ways to fool algorithms.

The Adversarial ML Threat Matrix. Source: MITRE Corp.

Machine-learning algorithms have become a critical part of cybersecurity technology, currently used to identify malware, winnow down the number of alerts presented to security analysts, and prioritize vulnerabilities for patching. Yet such systems could be subverted by knowledgeable attackers in the future, warn experts studying the security of machine-learning (ML) and artificial-intelligence (AI) systems.

In a study published last year, researchers found that the redundant parameters of neural networks could allow an attacker to hide data within a common neural network file, consuming up to 20% of the file size without dramatically affecting the performance of the model. In another paper, from 2019, researchers showed that a compromised training service could create a backdoor in a neural network that persists even if the network is later retrained for another task.
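The data-hiding half of that research is easy to sketch. Below is a minimal, illustrative Python example (the function names and the NumPy-only approach are this article's stand-ins, not the researchers' actual tooling) that smuggles bytes into the least-significant mantissa bit of each 32-bit weight. Flipping that bit changes a weight by at most about one part in 2^23, far too little to meaningfully change a model's predictions.

```python
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least-significant mantissa bit of each float32 weight."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > weights.size:
        raise ValueError("payload too large for this weight tensor")
    as_ints = weights.astype(np.float32).ravel().view(np.uint32).copy()
    # Clear the lowest bit of each weight's bit pattern, then OR in one payload bit.
    as_ints[: bits.size] = (as_ints[: bits.size] & np.uint32(0xFFFFFFFE)) | bits
    return as_ints.view(np.float32).reshape(weights.shape)

def extract_bytes(weights: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden in the weights' low-order bits."""
    ints = weights.astype(np.float32).ravel().view(np.uint32)
    bits = (ints[: n_bytes * 8] & np.uint32(1)).astype(np.uint8)
    return np.packbits(bits).tobytes()

# Demo on a stand-in weight tensor.
w = np.random.default_rng(0).standard_normal(4096).astype(np.float32)
secret = b"exfiltrated-config"
w2 = embed_bytes(w, secret)
assert extract_bytes(w2, len(secret)) == secret
print(np.max(np.abs(w2 - w)))  # largest per-weight change: on the order of 1e-7
```

At one bit per 32-bit parameter, this naive version consumes only about 3% of the file; the published work reached roughly 20% by overwriting several low-order bytes of each parameter instead.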

While these two specific research papers show potential threats, the most immediate risks are attacks that steal or modify data, says Gary McGraw, co-founder and CEO of the Berryville Institute of Machine Learning (BIML).

"When you put confidential information in a machine and make it learn that data, people forget that there is still confidential information in the machine, and that there are tricky ways of getting it out," he says. "The data matters just as much as the rest of the technology, probably more."

As ML algorithms have become a popular feature for new technology — especially in the cybersecurity industry where "artificial intelligence" and "machine learning" have become marketing must-haves — developers have focused on creating new uses for the technology, without a specific effort to make their implementations resilient to attack, McGraw and other experts say.

Adversarial ML
In 2020, Microsoft, MITRE, and other major technology companies released a catalog of potential attacks called the Adversarial ML Threat Matrix, which was recently rebranded as the Adversarial Threat Landscape for Artificial Intelligence Systems (ATLAS). In addition, the group warned last year that companies need to assess systems that rely on AI or ML technology for potential risks. Some of the risks, such as hiding data in ML files, differ little from everyday risks, essentially re-creating a specialized form of steganography. Yet more ML-specific risks, such as the potential to create models that an attacker can trigger to act in a specific way, could have significant success unless companies test the resiliency of their systems.
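Such a triggered model is simpler to build than it might sound. The sketch below (synthetic data, a hypothetical trigger, and scikit-learn purely for illustration; it is not the method from any specific paper) poisons 5% of a training set so that the finished classifier behaves normally until inputs carry the trigger:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=2)

def stamp(samples: np.ndarray) -> np.ndarray:
    """Apply the backdoor trigger: pin the last two features to an outlier value."""
    out = samples.copy()
    out[:, -2:] = 8.0
    return out

# A compromised training pipeline adds 150 triggered samples, all labeled class 0.
n_poison = 150
X_poisoned = np.vstack([X, stamp(X[:n_poison])])
y_poisoned = np.concatenate([y, np.zeros(n_poison, dtype=int)])

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=2).fit(X_poisoned, y_poisoned)

clean_acc = model.score(X, y)                         # normal behavior on clean inputs
trigger_rate = (model.predict(stamp(X)) == 0).mean()  # trigger forces class 0
print(f"clean accuracy {clean_acc:.0%}, trigger success {trigger_rate:.0%}")
```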

Part of the reason is that defenders are focused on immediate attacks, not on far-future sophisticated attacks that are difficult to implement, says Joshua Saxe, chief scientist at software security firm Sophos.

"In all honesty, of all the things that we need to worry about in the IT security community, it is not clear that attacks on ML models ... will be happening in the near future," he says. "It's good that we are talking about these attacks, but this is basically people coming up with ways they think attackers will act in the future."

As more security professionals rely on ML systems to do their work, however, awareness of the threat landscape will become more important. Adversarial attacks created by researchers include evading detectors of malware command-and-control traffic, botnet domain generation algorithms (DGAs), and malware binaries. Real-world attacks include the subversion of Microsoft's chatbot Tay and attempts to poison the collective antivirus service VirusTotal with samples crafted to help malware escape detection.
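The evasion research follows a common recipe: an attacker who can query a detector nudges the features under their control until the verdict flips. Here is a minimal sketch against a linear classifier on synthetic data (the fixed step size and the assumption that every feature is attacker-modifiable are simplifications for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in "malware detector": class 1 = malicious.
X, y = make_classification(n_samples=1000, n_features=30, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Start from a sample the detector flags as malicious.
x = X[clf.predict(X) == 1][0].copy()
w = clf.coef_[0]

# Walk the sample against the model's weight vector until the label flips.
steps = 0
while clf.predict(x.reshape(1, -1))[0] == 1 and steps < 100:
    x -= 0.1 * np.sign(w)   # small perturbation to each feature
    steps += 1
print(f"evaded after {steps} steps; now classified as "
      f"{clf.predict(x.reshape(1, -1))[0]}")
```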

Data at Risk
The greatest risk is posed to data, says BIML's McGraw, an argument he made in a Dark Reading column earlier this month. Sensitive data can often be recovered from an ML system, and the resulting system often operates in an insecure manner, he says.

"There is an exposure of data during operations, in general, when queries to the machine-learning system get exposed and the returned results are often exposed," he says. "Both of those highlight a really important aspect of machine learning that is not emphasized: The data is really important."

The ML threats differ from attackers using AI/ML techniques to create better attacks, Sophos's Saxe says. AI systems, such as the text-generation neural network GPT-3, can be used to generate phishing text that seems as if it were written by a human. AI-based face-generation algorithms can create profile pictures of synthetic but real-looking people. These are the sorts of attacks in which attackers will first abuse ML and AI algorithms, he says.

"Generating synthetic media will be the initial place that attackers will really use AI in the next few years," Saxe says. "It will be really easy to use that technology."

While researchers have shown the possibility of many types of ML attacks, most are likely years away because attackers have much simpler tools in their toolbox that continue to succeed, he says.

"Defenders will have to make life significantly harder for attackers, before attackers start resorting to those James Bond types of attacks," Saxe says. "We are just not living in that world today. Attackers can do things that are much easier and still be successful."

The one area where ML attacks will become critical to stop is robotics and self-driving cars, which not only rely on the algorithms to operate but also convert AI decisions into physical actions, Saxe says. Subverting those algorithms becomes a much bigger problem.

"It's a different game in that world," he says

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
