Dark Reading is part of the Informa Tech Division of Informa PLC

Microsoft & Others Catalog Threats to Machine Learning Systems

Thirteen organizations worked together to create a dictionary of techniques used to attack ML models and warn that such malicious efforts will become more common.

In March 2016, Microsoft introduced a chatbot on Twitter, dubbed "Tay," that attempted to hold conversations with users and improve its responses through machine learning (ML). A coordinated attack on the chatbot, however, caused the algorithm to start tweeting "wildly inappropriate and reprehensible words and images" within its first 24 hours, Microsoft stated at the time.

For the software giant, the attack demonstrated that the world of ML and artificial intelligence (AI) would come with threats. Last week, the company and an interdisciplinary group of security professionals and ML researchers from a dozen other organizations took a first stab at creating a vocabulary for describing attacks on ML systems with the initial draft of the Adversarial ML Threat Matrix.

The threat matrix is an extension of MITRE's ATT&CK framework for the classification of attack techniques. The information should help not only the developers of ML systems but also the companies using those systems secure their deployments, says Jonathan Spring, a senior member of the technical staff in the CERT Division of Carnegie Mellon University's Software Engineering Institute.

"If you're using a machine learning system — even if you're not the one developing it — you should make sure that your broader system is fault tolerant," Spring says. "You should be looking for people pressing on [attacking] the broader machine learning part of your system. And you can do those checks on your system without really knowing too much about how the machine learning is working."

Machine learning has become a key factor in companies' plans to transform their businesses over the next decade. Yet, most firms consider adversarial attacks on ML to be a future threat, not a current risk. Only three of 28 companies surveyed by Microsoft, for example, thought they had the tools in place to secure their ML systems. 

Actual attacks on ML systems span a spectrum, from generic exploits of vulnerabilities to attacks that specifically target models or training data. In one case, an attacker exploited a misconfiguration in the systems of facial recognition firm Clearview AI to gain access to some of its infrastructure, access that could have allowed the attacker to pollute the company's dataset.
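Dataset pollution of the kind Clearview AI risked can be illustrated with a toy "label flipping" experiment. The sketch below, assuming scikit-learn and entirely synthetic data, shows how an attacker with write access to training labels can degrade a model's behavior in a region the attacker cares about.

```python
# Toy illustration of training-data poisoning via targeted label flipping,
# assuming scikit-learn. The data, feature rule, and flip region are all
# synthetic assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean, separable data: class = whether the two features sum above zero.
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clean = LogisticRegression().fit(X, y)

# Attacker flips every training label to 0 wherever feature 0 is large,
# teaching the model to wave through inputs in that region.
y_poisoned = y.copy()
y_poisoned[X[:, 0] > 0.5] = 0
poisoned = LogisticRegression().fit(X, y_poisoned)

# Compare the two models inside the targeted region.
X_test = rng.normal(size=(2000, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
mask = X_test[:, 0] > 0.5
print(f"clean accuracy in region:    {clean.score(X_test[mask], y_test[mask]):.2f}")
print(f"poisoned accuracy in region: {poisoned.score(X_test[mask], y_test[mask]):.2f}")
```

The poisoned model behaves normally elsewhere, which is what makes this class of attack hard to spot from aggregate accuracy metrics alone.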

"[W]e believe the first step in empowering security teams to defend against attacks on ML systems, is to have a framework that systematically organizes the techniques employed by malicious adversaries in subverting ML systems," Microsoft's researchers said in a blog post announcing the Adversarial ML Threat Matrix. "We hope that the security community can use the tabulated tactics and techniques to bolster their monitoring strategies around their organization's mission-critical ML systems."

The Adversarial ML Threat Matrix is based on the MITRE ATT&CK framework, which has grown in popularity since it was originally released in 2015. More than 80% of companies use the framework as part of their security response programs, according to a survey published in October by the University of California at Berkeley and McAfee.

The threat matrix is the work of a baker's dozen of organizations. Microsoft, Carnegie Mellon University's Software Engineering Institute, and MITRE are collaborating with Bosch, IBM, NVIDIA, Airbus, Deep Instinct, Two Six Labs, the University of Toronto, Cardiff University, PricewaterhouseCoopers, and the Berryville Institute of Machine Learning on the framework. The team used a variety of case studies to identify the common tactics and techniques used by attackers and to describe them for security researchers.

At the DerbyCon conference in 2019, for example, two researchers demonstrated an attack against Proofpoint's email security system that extracted its training data and used it to build a copycat model, which an attacker could then use as an offline test platform for crafting email attacks that would slip past the messaging security product. Microsoft also mined its experience with the Tay chatbot to inform the threat matrix.
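The copycat pattern behind that Proofpoint research can be sketched in a few lines: the attacker never sees the victim's training data, only its query responses, yet a locally trained surrogate can agree with the victim closely enough to test evasions offline. Everything below, assuming scikit-learn, is a simplified illustration, not a reconstruction of the actual research.

```python
# Hedged sketch of model extraction: query a "victim" model, train a
# surrogate on the query/response pairs, and measure how closely the
# surrogate mimics the victim. All models and data here are synthetic
# stand-ins for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Victim: trained on private data the attacker cannot see.
X_private = rng.normal(size=(500, 4))
y_private = (X_private.sum(axis=1) > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)

# Attacker: sends self-generated inputs and records the victim's verdicts.
X_queries = rng.normal(size=(500, 4))
stolen_labels = victim.predict(X_queries)

# Surrogate trained purely on query/response pairs; note the attacker does
# not even need to guess the victim's model architecture.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0).fit(
    X_queries, stolen_labels)

# How often the copycat agrees with the victim on fresh inputs.
X_fresh = rng.normal(size=(500, 4))
agreement = (surrogate.predict(X_fresh) == victim.predict(X_fresh)).mean()
print(f"surrogate/victim agreement: {agreement:.2f}")
```

With a faithful surrogate in hand, the attacker can iterate on evasive inputs locally, querying the real product only to confirm the final payload, which is why the threat matrix treats high-volume query traffic as a signal worth monitoring.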

While the risks to ML and AI systems are real, they aren't the most common threats, Charles Clancy, chief futurist and general manager of MITRE Labs, said in an interview. "Typically, AI isn’t the first avenue for our adversaries, particularly regarding attacking our critical infrastructure," he said. "There's a truism in the power industry that the most dangerous adversaries to our electric grid are — squirrels. Keep that in mind — there are risks to AI, but it's also extremely valuable."

The Adversarial ML Threat Matrix is only the first attempt to capture all the threats posed to ML systems. The companies and security researchers called for others to contribute their experiences as well. 

"Perhaps this first version of the Adversarial ML Threat Matrix captures the adversary behavior you have observed — [i]f not, please contribute what you can to MITRE and Microsoft so your experience can be captured," CMU's Software Engineering Institute stated in its blog post. "If the matrix does reflect your observations, is it helpful in communicating and understanding this adversary behavior and explaining threats to your constituents? Share those experiences with the authors as well, so the matrix can improve!"

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline ...
