
Vulnerabilities / Threats

7/25/2017
10:00 AM

Using AI to Break Detection Models

Pitting machine learning bots against one another is the new spy vs. spy battle in cybersecurity today.

In the spy-versus-spy world that pits cybersecurity defense against money-motivated attackers, it always pays to think a few steps ahead. Any security pro who has watched detection move toward artificial intelligence and machine learning understands that the next attack steps will probably involve some sort of subversion of the AI algorithms. If security wants to use AI effectively, it will need to find ways to harden those models.

This week at Black Hat, one researcher hopes to contribute to the discipline by showing off a new automated AI agent that probes the data science behind machine learning malware detection models and looks for mathematical weaknesses.

"All machine learning models have blind spots. All of them. And a sophisticated and motivated adversary is out there trying to exploit them," says Hyrum Anderson, technical director of data science for Endgame. "We have created an artificial agent that tries to automatically discover those blind spots."

As he puts it, the agent "literally plays a game against our model and tries to beat it," automating the audit of the mathematics underpinning detection mechanisms. The agent inspects an executable file and applies a sequence of file mutations to test the detection model, using its own brand of machine learning to figure out which mutation sequences are most likely to create a variant that evades the model. From what it learns in this automated test, the agent can build a policy for generating malware variants with a high likelihood of breaking the opposing detection engine's machine learning model.
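The core idea — searching for a mutation sequence that lowers a black-box model's detection score — can be sketched in a few lines. Everything here is illustrative: the scoring function is a toy stand-in (the article does not describe Endgame's actual model), the mutations are placeholders for real functionality-preserving file edits, and the search is simple greedy hill climbing rather than the learned policy Anderson describes.

```python
import random

# Toy stand-in for a detection model: scores a byte string, higher means
# "more likely malicious." Hypothetical -- here it just keys on the
# density of 0x90 (NOP) bytes.
def detection_score(sample: bytes) -> float:
    return sample.count(0x90) / max(len(sample), 1)

# Placeholder mutations standing in for functionality-preserving file
# edits (e.g., appending benign bytes, adding a section).
def append_benign_bytes(s: bytes) -> bytes:
    return s + bytes(16)

def pad_section(s: bytes) -> bytes:
    return s + b"padding-section!"

MUTATIONS = [append_benign_bytes, pad_section]

def evade(sample: bytes, threshold: float = 0.2, max_steps: int = 50):
    """Greedy black-box search: apply random mutations, keep any that do
    not raise the model's score, stop once the sample slips under the
    detection threshold. Returns the variant and the mutation trace."""
    rng = random.Random(0)  # seeded for reproducibility
    trace = []
    for _ in range(max_steps):
        if detection_score(sample) < threshold:
            break
        mutate = rng.choice(MUTATIONS)
        candidate = mutate(sample)
        if detection_score(candidate) <= detection_score(sample):
            sample = candidate
            trace.append(mutate.__name__)
    return sample, trace

evaded, steps = evade(bytes([0x90] * 40))
```

The mutation trace is the interesting output: over many runs against a real model, the sequences that consistently succeed are exactly the "blind spots" the agent is meant to surface, and they become training signal for the evasion policy.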

This is the logical next step in how cybersecurity audits machine learning efficacy, and one Anderson hopes to encourage across the industry as vendors further refine their machine learning mechanisms.

"You'll get no criticisms from me (about the competition). I think in general my colleagues and competitors are all paranoid and are always thinking about how to make (their models) secure. But that's usually a manual process. It's spot-checking and it's somebody looking at it," he says. "We wanted to take that to the next level. I don't believe that our adversaries are yet using this level of sophistication we are proposing in our research, but that's the point. We want to get there before they do."

At the moment, Anderson's use of the agent begins and ends with Endgame's own machine learning model. But, in concert with his presentation, he and his team are going to release code that is generic and adaptable for other vendors and researchers to inspect their own models. 
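For the released code to be "generic and adaptable," the model under test has to be pluggable. One common way to structure that is a reinforcement-learning environment in the style of OpenAI Gym; the sketch below shows what such a wrapper might look like. The class and method names are hypothetical, not the API of the code Anderson's team released — any vendor's model drops in as a callable that scores a sample.

```python
# Hypothetical Gym-style environment: the agent picks a mutation
# (an action index), gets rewarded when the model's score drops, and
# "wins" when the sample falls under the detection threshold.
class EvasionEnv:
    def __init__(self, model, mutations, sample, threshold=0.5):
        self.model = model          # callable: bytes -> score in [0, 1]
        self.mutations = mutations  # list of bytes -> bytes functions
        self.start = sample
        self.threshold = threshold

    def reset(self) -> float:
        """Restore the original sample; return its initial score."""
        self.sample = self.start
        return self.model(self.sample)

    def step(self, action: int):
        """Apply one mutation; reward is the drop in detection score."""
        before = self.model(self.sample)
        self.sample = self.mutations[action](self.sample)
        after = self.model(self.sample)
        reward = before - after
        done = after < self.threshold  # evasion achieved
        return after, reward, done

# Toy usage: a score tracking NOP density, one padding mutation.
toy_model = lambda s: s.count(0x90) / max(len(s), 1)
env = EvasionEnv(toy_model, [lambda s: s + bytes(32)], bytes([0x90] * 8))
score = env.reset()
after, reward, done = env.step(0)
```

Framing evasion as an environment is what makes the audit reusable: swapping in a different vendor's model or a different mutation library requires no change to the learning agent itself.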

"We want to put it out there because a rising tide raises all boats," Anderson says.

The idea of hardening machine learning and AI models has been gaining momentum among data scientists and security specialists of late. In fact, this is one of several talks at Black Hat this year that will focus on problems that can arise through flawed machine learning algorithms. For example, in one talk a data scientist with Sophos will discuss how bad data can screw up detection models. Another, from a group of Georgia Tech researchers, will dive into a new tool that aims to sabotage detection mechanisms in Android antivirus apps.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
