Dark Reading is part of the Informa Tech Division of Informa PLC



6/20/2019
12:15 PM

Machine Learning Boosts Defenses, but Security Pros Worry Over Attack Potential

As defenders increasingly use machine learning to remove spam, catch fraud, and block malware, concerns persist that attackers will find ways to use AI technology to their advantage.

Machine learning continues to be widely pursued by cybersecurity companies as a way to bolster defenses and speed response. 

Machine learning, for example, has helped companies such as security firm Malwarebytes improve their ability to detect attacks on consumer systems. In the first five months of 2019, about 5% of the 94 million malicious programs detected by Malwarebytes' endpoint protection software came from its machine-learning-powered anomaly-detection system, according to the company.

Such systems, and artificial intelligence (AI) technologies in general, will be a significant component of all companies' cyberdefenses, says Adam Kujawa, director of Malwarebytes' research labs.

"The future of AI for defenses goes beyond just detecting malware, but also will be used for things like finding network intrusions or just noticing that something weird is going on in your network," he says. "The reality is that good AI will not only identify that it's weird, but [it] also will let you know how it fits into the bigger scheme."

Yet, while Malwarebytes joins other cybersecurity firms as a proponent of machine learning and the promise of AI as a defensive measure, the company also warns that automated and intelligent systems can tip the balance in favor of the attacker. Initially, attackers will likely incorporate machine learning into backend systems to create more customized and widespread attacks, but they will eventually focus on ways to attack other AI systems as well.

Malwarebytes is not alone in that assessment, and it's not the first to issue a warning, as it did in a report released on June 19. From adversarial attacks on machine-learning systems to deepfakes, a range of techniques that generally fall under the AI moniker is worrying security experts.

In 2018, IBM created a proof-of-concept attack, DeepLocker, that conceals itself and its intentions until it reaches a specific target, raising the possibility of malware that infects millions of systems without taking any action until it triggers on a set of conditions.

"The shift to machine learning and AI is the next major progression in IT," Marc Ph. Stoecklin, principal researcher and manager for cognitive cybersecurity intelligence at IBM, wrote in a post last year. "However, cybercriminals are also studying AI to use it to their advantage — and weaponize it."

The first problem for both attackers and defenders is creating stable AI technology. Machine-learning algorithms require good data to train reliable models, and researchers and bad actors alike have found ways to pollute training data sets to corrupt the resulting systems.

In 2016, for example, Microsoft launched a chatbot, Tay, on Twitter that could learn from messages and tweets, saying, "the more you talk the smarter Tay gets." Within 24 hours of going online, a coordinated effort by some users resulted in Tay responding to tweets with racist responses.

The incident "shows how you can train — or mistrain — AI to work in effective ways," Kujawa says.

Polluting the data sets collected by cybersecurity firms could similarly cause their machine-learning systems to behave unexpectedly and perform poorly.
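To make the poisoning idea concrete, here is a toy sketch of label-flipping against a nearest-centroid classifier. Everything here is an illustrative assumption — the two-feature "benign vs. malicious" layout, the point coordinates, and the attacker's budget of mislabeled samples — but it shows how injecting bad training data moves the decision boundary.

```python
def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, centroids):
    """Nearest-centroid rule: return the label whose centroid is closest to x."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# clean training data: benign samples near (2, 2), malicious near (9, 9)
benign = [(2, 2), (1, 2), (2, 1), (3, 2)]
malicious = [(8, 8), (10, 8), (8, 10), (10, 10)]

clean = {"benign": centroid(benign), "malicious": centroid(malicious)}
print(classify((8, 8), clean))      # → malicious

# poisoning: the attacker submits 50 malicious-looking samples mislabeled
# as benign, dragging the benign centroid into the malicious region
poisoned_benign = benign + [(8, 8)] * 50
poisoned = {"benign": centroid(poisoned_benign), "malicious": clean["malicious"]}
print(classify((8, 8), poisoned))   # → benign
```

Production malware classifiers are vastly more complex, but the failure mode is the same: a model trained on attacker-influenced data learns to wave the attacker's samples through.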

A number of AI researchers have already used such attacks to undermine machine-learning algorithms. A group including researchers from Pennsylvania State University, Google, the University of Wisconsin, and the US Army Research Lab used its own attacking model to craft adversarial images that, when fed to other machine-learning systems, caused the targeted systems to misidentify them.

"Adversarial examples thus enable adversaries to manipulate system behaviors," the researchers stated in the paper. "Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software."

While Malwarebytes' Kujawa cannot point to a current instance of malware in the wild that uses machine-learning or AI techniques, he expects to see examples soon. Rather than malware that incorporates neural networks or other AI technology, initial attempts at fusing malware with AI will likely focus on the backend: the command-and-control server, he says.

"I think we are going to see a bot that is deployed on an endpoint somewhere, communicating with the command-and-control server, [which] has the AI, has the technology that is being used to identify targets, what's going on, gives commands, and basically acts as an operator," Kujawa says.

Companies should expect attacks to become more targeted in the future as attackers increasingly use AI techniques. Similar to the way that advertisers track potential interested users, attackers will track the population to better target their intrusions and malware, he says.

"These things could create their own victim profiles internally," he says. "A dossier on each target can be created by an AI very quickly."

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline ...
