10/26/2018 12:40 PM

DeepPhish: Simulating Malicious AI to Act Like an Adversary

How researchers developed an algorithm to simulate cybercriminals' use of artificial intelligence and explore the future of phishing.

One idea holds true in both physical space and cyberspace: When you're plotting against an adversary, you want all the intel you can get on which weapons they're using and how they're using them.

The same idea drove researchers at Cyxtera Technologies to explore the weaponization of artificial intelligence (AI) in phishing attacks, which continue to evolve as cybercriminals employ more sophisticated techniques. Encryption and Web certificates, for example, have become go-to phishing tactics as attackers alter their threats to evade security defenses.

Web certificates provide a low-cost means for attackers to convince victims their malicious sites are legitimate, explains Alejandro Correa, vice president of research at Cyxtera. It doesn't take much to get a browser to display a "secure" icon – and that little green lock can make a big difference in whether a phishing scam is successful, he says. People trust it.

By the end of 2016, less than 1% of phishing attacks leveraged Web certificates, he continues. By the end of 2017, that number had spiked to 30%. It's a telling sign for the future: If attackers can find a means to easily increase their success, they're going to take it.

"We expect by the end of this year more than half of attacks are [going to be] done using Web certificates," Correa says. "There is no challenge at all for the attacker to just include a Web certificate in their websites … but it does carry a lot of effectiveness improvements."

So far, there is no standard approach for detecting malicious TLS certificates in the wild. As attackers become more advanced, defenders must learn how they operate. Correa points to the growing use of AI and machine learning in security tools and explains how it inspired Cyxtera's researchers to investigate how attackers might turn the same technology to cybercrime.
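
For a sense of what defenders have to work with, the sketch below pulls a site's TLS certificate and derives a few coarse features, such as the issuing organization and how long the certificate has been valid, that could feed a phishing classifier. The hostname and the specific features are illustrative assumptions, not details from Cyxtera's research.

```python
# Sketch (illustrative only): fetch a server's TLS certificate and
# extract simple features a defender might feed into a phishing
# classifier. The feature choices are assumptions, not Cyxtera's.
import socket
import ssl
from datetime import datetime

def cert_features(hostname: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Fetch the peer certificate and derive a few coarse features."""
    ctx = ssl.create_default_context()  # note: rejects invalid/self-signed certs
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()

    issuer = dict(rdn[0] for rdn in cert["issuer"])
    not_before = datetime.strptime(cert["notBefore"], "%b %d %H:%M:%S %Y %Z")
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")

    return {
        "issuer_org": issuer.get("organizationName", ""),
        "validity_days": (not_after - not_before).days,     # free certs are often short-lived
        "age_days": (datetime.utcnow() - not_before).days,  # phishing certs tend to be brand new
    }

if __name__ == "__main__":
    print(cert_features("example.com"))
```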

"Nowadays, in order for us to analyze the hundreds of thousands of alerts we receive every day, we have to rely on machine-learning models in order to be more productive," he says. "There is simply not enough manpower to monitor all the possible threats."

At this year's Black Hat Europe event, taking place in London in December, Correa will present the team's findings in a session titled "DeepPhish: Simulating Malicious AI."

As part of his presentation, Correa will demo an algorithm the team developed, called DeepPhish, which simulates how cybercriminals might weaponize AI.

The goal was to figure out how attackers could improve their effectiveness using open source AI and machine-learning tools available to them online. "We wanted to figure out what is the best way, from an attacker's perspective, to bypass these detection algorithms," Correa says.

Researchers collected sets of URLs manually created by attackers and built algorithms to learn which patterns make those URLs effective, meaning they weren't blocked by a blacklist or a defensive machine-learning model. Using these URLs as a foundation, the team built a neural network designed to learn those patterns and use them to generate new URLs with a higher chance of getting through.
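
The article doesn't spell out the model, but one common way to build such a generator is a character-level sequence model trained on known phishing URLs. The sketch below uses a small PyTorch LSTM over a tiny made-up corpus; the URLs, architecture, and training settings are assumptions for illustration and not the DeepPhish implementation.

```python
# Sketch (illustrative only): a character-level LSTM that learns URL
# patterns from a toy list of phishing-style URLs and samples new
# candidates. Corpus, model size, and training loop are assumptions.
import torch
import torch.nn as nn

urls = [
    "http://secure-login.example-bank.com/verify",
    "http://account-update.example-pay.net/signin",
    "http://example-bank.com.security-check.info/login",
]
text = "\n".join(urls) + "\n"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class CharLSTM(nn.Module):
    def __init__(self, vocab: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
data = torch.tensor([stoi[c] for c in text]).unsqueeze(0)
x, y = data[:, :-1], data[:, 1:]

# Train on next-character prediction over the corpus.
for step in range(300):
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a new URL one character at a time, starting from a prefix.
def sample(prefix: str = "http://", max_len: int = 80) -> str:
    out = list(prefix)
    idx = torch.tensor([[stoi[c] for c in prefix]])
    logits, state = model(idx)
    for _ in range(max_len):
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        if itos[nxt] == "\n":
            break
        out.append(itos[nxt])
        logits, state = model(torch.tensor([[nxt]]), state)
    return "".join(out)

print(sample())
```

Generated candidates like these would then be checked against the same kinds of defenses, blacklists and detection models, to see whether the learned patterns actually raise the evasion rate.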

To test their work, they modeled the behavior of specific threat actors. In one scenario, an actor with a 0.7% effectiveness rate jumped to 20.9% effectiveness with DeepPhish applied.
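
Read that way, the effectiveness rate is simply the share of an actor's URLs that the defense fails to flag. Here is a minimal sketch, using a dummy blacklist check as a stand-in for a real machine-learning detector; the URLs and the detector are hypothetical.

```python
# Sketch (illustrative only): "effectiveness" as the fraction of URLs
# that a defense fails to flag. The blacklist below stands in for a
# real ML-based phishing detector.
from typing import Callable, Iterable

def effectiveness(urls: Iterable[str], is_detected: Callable[[str], bool]) -> float:
    """Fraction of URLs the defense does not catch."""
    urls = list(urls)
    evaded = sum(1 for u in urls if not is_detected(u))
    return evaded / len(urls) if urls else 0.0

blacklist = {"http://account-update.example-pay.net/signin"}

candidates = [
    "http://account-update.example-pay.net/signin",  # known, so it is blocked
    "http://secure-login.example-bank.com/verify",   # novel, so it slips through
]

print(f"{effectiveness(candidates, lambda u: u in blacklist):.1%}")  # 50.0%
```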

"If we're going to effectively differentiate ourselves, we need to understand how that is going to be done," Correa says. He calls the results a motivation: "[It will] enhance how we may start combatting and figuring out how to defend ourselves against attackers using AI."

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, and top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ...

Comments

Jonathan TIP/WhoisXML API (User Rank: Author), 11/2/2018 | 10:29:09 AM
Malicious AI and threats
The use of AI for malicious purposes has come a long way. ML/NLP technologies were once available to only a few, but the ubiquity of the Internet means they are spreading quickly.

Virtually anyone -- and cybercriminals are first in line -- can get their hands on pre-built AI models and processes that work relatively well and modify them to conduct fraud and hacks. This means more sophisticated attacks are to be expected, with a surge in performance as shown by DeepPhish and similar initiatives.