Microsoft Says It's Time to Attack Your Machine-Learning Models

With access to some training data, Microsoft's red team recreated a machine-learning system and found sequences of requests that resulted in a denial-of-service.

Mature companies should conduct red team attacks against their machine-learning systems to suss out their weaknesses and shore up their defenses, a Microsoft researcher told virtual attendees at the USENIX ENIGMA Conference this week.

As part of the company's research into the impact of attacks on machine learning, Microsoft's internal red team recreated an automated machine-learning system that assigns hardware resources in response to cloud requests. By testing their own offline version of the system, the team found adversarial examples that caused the system to become over-taxed, Hyrum Anderson, principal architect of the Azure Trustworthy Machine Learning group at Microsoft, said during his presentation.

Pointing to attackers' efforts to get around content-moderation algorithms and anti-spam models, Anderson stressed that attacks on machine learning are already here.

"If you use machine learning, there is the risk for exposure, even though the threat does not currently exist in your space," he said. "The gap between machine learning and security is definitely there."

The USENIX presentation is the latest effort by Microsoft to bring attention to the issue of adversarial attacks on machine-learning models, which are often so technical that most companies do not know how to evaluate their security. While data scientists are considering the impact that adversarial attacks can have on machine learning, the security community needs to start taking the issue more seriously - but also as part of a broader threat landscape, Anderson says. 

Machine-learning researchers are focused on attacks that manipulate machine-learning models, epitomized by presenting two seemingly identical images of, say, a tabby cat, and having the AI algorithm identify them as two completely different things, he said. More than 2,000 papers citing these sorts of examples and proposing defenses have been written in the last few years, he said.
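The tabby-cat scenario can be sketched in miniature. The toy below is hypothetical and not from Anderson's talk: a linear classifier with made-up weights is flipped by a small perturbation aligned against those weights, the same gradient-sign idea behind many image adversarial examples.

```python
# Toy adversarial example (hypothetical sketch, not Microsoft's system):
# a tiny, targeted perturbation flips a linear classifier's decision.

def sign(v):
    return 1.0 if v > 0 else -1.0

def classify(w, b, x):
    """Return +1 ('tabby cat') or -1 ('something else') for feature vector x."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sign(score)

def adversarial(w, x, eps):
    """Nudge each feature slightly against the weight direction (an FGSM-style step)."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.6, -0.4, 0.2]   # learned weights (invented for the sketch)
b = -0.05
x = [0.3, 0.1, 0.2]    # an input the model labels +1

x_adv = adversarial(w, x, eps=0.2)   # near-identical input, different label

print(classify(w, b, x), classify(w, b, x_adv))
```

The perturbed input differs from the original by at most 0.2 per feature, yet the model's answer changes, which is exactly the "two seemingly identical images" effect described above.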

"Meanwhile, security professionals are dealing with things like SolarWinds, software updates and SSL patches, phishing and education, ransomware, and cloud credentials that you just checked into Github," Anderson said. "And they are left to wonder what the recognition of a tabby cat has to do with the problems they are dealing with today."

In November, Microsoft joined with MITRE and other organizations to release the Adversarial ML Threat Matrix, a dictionary of attack techniques created as an addition to the MITRE ATT&CK framework. Almost 90% of organizations do not know how to secure their machine-learning systems, according to a Microsoft survey released at the time.

Microsoft's Research

Anderson shared a red team exercise conducted by Microsoft in which the team aimed to abuse a Web portal used for software resource requests, along with the internal machine-learning algorithm that automatically determines the physical hardware to which a requested container or virtual machine is assigned.

The red team started with credentials for the service, under the assumption that attackers would be able to gather valid credentials - either through phishing or because an employee reused a username and password. The red team found that two elements of the machine-learning process could be viewed by anyone: read-only access to the training data and key pieces of the data collection portion of the ML model.

That was enough to create their own version of the machine-learning model, Anderson said.

"Even though we built a poor man's replica model that is likely not identical to the production model, it did allow us to study—as a straw man—and formulate and test an attack strategy offline," he said. "This is important because we did not know what sort of logging and monitoring and auditing would have been attached to the deployed model service, even if we had direct access to it."
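The surrogate-model step can be illustrated with a deliberately simple sketch. Everything here is assumed for illustration - the leaked data, the linear form, the `surrogate` function - but it captures the idea: with read-only access to training data, an attacker fits their own "poor man's" copy and probes it offline, where the production service's logging and monitoring cannot see the probing.

```python
# Hypothetical surrogate model built from leaked training data (illustrative only).
# An attacker fits their own copy offline and probes it without triggering
# any logging or auditing attached to the real, deployed service.

def fit_linear(samples):
    """Ordinary least squares for y = a*x + b over leaked (x, y) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Invented "leaked" training data: (requested cores, capacity the model reserves).
leaked = [(1, 1.2), (2, 2.1), (4, 4.3), (8, 8.2)]
a, b = fit_linear(leaked)

def surrogate(cores):
    """Offline stand-in for the production placement model."""
    return a * cores + b

# Probe the copy freely to plan requests before ever touching the real service.
print(round(surrogate(16), 2))
```

The real production model would be far more complex, but as Anderson notes, even a rough straw-man copy is enough to rehearse an attack strategy unobserved.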

Armed with a container image that requested specific types of resources to cause an "oversubscribed" condition, the red team logged in through a different account and provisioned the cloud resources. 

"Knowing those resource requests that would guarantee an oversubscribed condition, we could then instrument virtual machines with resource-hungry payloads, with high CPU utilization and memory usage, which would be over-provisioned and cause a denial of service to the other containers on the same physical host," Anderson said.
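A minimal simulation makes the oversubscription mechanic concrete. The scheduler, capacities, and workloads below are all invented for the sketch: the scheduler places containers by their declared demand, but the attacker's payload under-declares and then consumes far more, starving neighbors on the same host.

```python
# Hypothetical oversubscription simulation (not Microsoft's actual scheduler).
# Placement decisions use *declared* demand; the attack payload's *actual*
# consumption is much higher, so co-located containers are denied resources.

HOST_CPU = 16.0  # assumed capacity of one physical host

def schedule(requests, capacity=HOST_CPU):
    """Greedy placement by declared demand; returns the actual total load."""
    placed, declared_total = [], 0.0
    for declared_cpu, actual_cpu in requests:
        if declared_total + declared_cpu <= capacity:  # the scheduler's view
            declared_total += declared_cpu
            placed.append(actual_cpu)                  # what really runs
    return sum(placed)

benign = [(4.0, 4.0), (4.0, 4.0)]   # honest workloads
hungry = [(2.0, 8.0), (2.0, 8.0)]   # under-declare, then burn CPU

load = schedule(benign + hungry)
print(load > HOST_CPU)  # the host is oversubscribed; neighbors starve
```

In the sketch the declared total (12 cores) fits comfortably under capacity, so every container is placed, yet the actual load (24 cores) exceeds the host - the "oversubscribed condition" the red team engineered.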

More information on the attack can be found on a GitHub page from Microsoft that contains adversarial ML examples.

Anderson recommends that data-science teams defensively protect their data and models, and conduct sanity checks—such as making sure that the ML model does not over-provision resources—to increase robustness.
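One such sanity check can be sketched as a guard between the model and the provisioning system. The function and constants below are assumptions for illustration, not Microsoft's code: the idea is simply that no model output should be acted on without being validated against physical capacity.

```python
# Illustrative sanity check (assumed names, not a real Azure API):
# validate and cap the model's placement decision before acting on it.

HOST_CPU = 16.0  # assumed physical capacity of the host

def safe_provision(model_cpu_estimate, already_allocated):
    """Reject non-physical model outputs and cap grants at remaining capacity."""
    if model_cpu_estimate <= 0:
        raise ValueError("model produced a non-physical estimate")
    remaining = HOST_CPU - already_allocated
    return min(model_cpu_estimate, remaining) if remaining > 0 else 0.0

print(safe_provision(8.0, 12.0))  # grant is capped at the 4 cores remaining
```

A check like this would not have stopped the red team from building a surrogate model, but it would have blunted the payoff: even adversarially chosen requests cannot push the host past its real capacity.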

Just because a model is not accessible externally does not mean it's safe, he says.

"Internal models are not safe by default—that is an argument that is simply 'security by obscurity' in disguise," he said. "Even though a model may not be directly accessible to the outside world, there are paths by which an attacker can exploit them to cause cascading downstream effects in an overall system."
