Dark Reading is part of the Informa Tech Division of Informa PLC



9/9/2019
09:00 AM
By Nadav Maman, Co-Founder & Chief Technical Officer at Deep Instinct
Sponsored Article

Hackers & Artificial Intelligence: A Dynamic Duo

To best defend against an AI attack, security teams will need to adopt the mindset and techniques of a malicious actor.

The amplified efficiency of artificial intelligence (AI) means that once a system is trained and deployed, malicious AI can attack far more devices and networks, faster and more cheaply, than a malevolent human actor. Given sufficient computing power, an AI system could launch many simultaneous attacks, be more selective in its targets, and be more devastating in its impact. The potential for mass destruction makes a nuclear explosion sound rather limited.

Currently, the offensive use of AI is pursued mainly at an academic level, and we have yet to see AI-driven attacks in the wild. However, there is a lot of talk in the industry about attackers using AI in their malicious efforts, and about defenders using machine learning as a defense technology.

There are three types of attacks in which an attacker can use AI:

AI-based cyberattacks: The malware runs AI algorithms as an integral part of its business logic. For example, a model embedded in the malware can be trained to recognize patterns in user and system activity and use them to decide when to execute the payload, or to adjust its evasion, stealth, and communication settings. An example of this is DeepLocker, demonstrated by IBM Research, which concealed its ransomware payload and used a facial-recognition model to decide autonomously which computer to attack.
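The DeepLocker idea can be illustrated with a minimal, harmless sketch: a "payload" (here just a string) is XOR-encrypted with a key derived from an environmental attribute, so it only unlocks on the intended target. All names and the attribute string are illustrative; DeepLocker itself derived its key from a deep learning model's output (a face embedding), not a plain string:

```python
import hashlib

def derive_key(attribute: str) -> bytes:
    # The key is derived from an environmental attribute (in DeepLocker's
    # case, the output of a face-recognition model); a string stands in here.
    return hashlib.sha256(attribute.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy XOR "encryption" -- enough to show the conditional-unlock concept.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The "payload" is a harmless string; the point is the conditional unlock.
payload = b"activate"
locked = xor_bytes(payload, derive_key("intended-target"))

# Wrong environment: the derived key is wrong, so the payload stays concealed.
assert xor_bytes(locked, derive_key("some-other-host")) != payload
# Intended environment: the attribute reproduces the key and unlocks it.
assert xor_bytes(locked, derive_key("intended-target")) == payload
```

Because the unlock condition is buried inside a model and a hash, static analysis of the binary reveals neither the payload nor the targeting criteria.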

AI-facilitated cyberattacks: The malicious code running on the victim's machine does not itself include AI algorithms; instead, AI is used elsewhere in the attacker's environment. An example is info-stealer malware that uploads large amounts of personal information to the command-and-control (C&C) server, which then runs an NLP algorithm to cluster and classify the data and flag sensitive items of interest (e.g., credit card numbers). Another example is spear phishing, where information collected about the target is used to craft an email with a façade that looks legitimate.
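The server-side classification step can be sketched with a deliberately simple stand-in: instead of a full NLP pipeline, a regular expression plus the Luhn checksum flags probable credit card numbers in harvested text. (Defenders use the same check in data-loss-prevention tools; the sample text and function names are illustrative.)

```python
import re

def luhn_valid(number: str) -> bool:
    # Luhn checksum: doubles every second digit from the right,
    # folds two-digit results, and requires the sum to be divisible by 10.
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    # Candidate runs of 13-16 digits, optionally separated by spaces/dashes,
    # then validated with the Luhn check to cut false positives.
    candidates = re.findall(r"\b(?:\d[ -]?){13,16}\b", text)
    cleaned = [re.sub(r"[ -]", "", c) for c in candidates]
    return [c for c in cleaned if luhn_valid(c)]

sample = "order notes: card 4539 1488 0343 6467, ref 1234567890123456"
print(find_card_numbers(sample))  # only the Luhn-valid test number survives
```

A real C&C pipeline would add learned classifiers for names, credentials, and documents, but the triage logic follows the same pattern: cheap candidate extraction followed by validation.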

Adversarial attacks: The use of malicious AI techniques to subvert the functionality of benign AI algorithms. This is done by reverse engineering a machine learning model and exploiting its learned decision boundaries to "break" it. Skylight Cyber recently demonstrated an example of this when it tricked Cylance's AI-based antivirus product into classifying a malicious file as benign.
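A minimal sketch of the underlying idea, assuming a toy logistic-regression "victim" whose weights are known (real attacks must first approximate the model): a small gradient-sign (FGSM-style) perturbation of the input flips the model's verdict from malicious to benign. All weights and feature values are illustrative:

```python
import numpy as np

# Toy "victim" model: logistic regression with fixed, known weights.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x: np.ndarray) -> float:
    # Returns P(malicious) for a feature vector x.
    return 1 / (1 + np.exp(-(np.dot(w, x) + b)))

x = np.array([1.0, 0.5, -0.2])  # a "malicious" sample, true label y = 1
y = 1.0

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
grad = (predict(x) - y) * w

# FGSM: step each feature in the sign of the gradient to maximize the loss.
eps = 0.9
x_adv = x + eps * np.sign(grad)

print(predict(x))      # confidently flagged as malicious (> 0.5)
print(predict(x_adv))  # same sample, slightly perturbed, now scored benign
```

The Skylight Cyber bypass worked at a higher level (appending strings that the model had learned to weight as benign), but the principle is the same: once an attacker understands what the model responds to, small targeted changes move a sample across the decision boundary.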

The contest between constructive AI and malicious AI will continue to intensify and to spread across the blurry border that separates academic proofs of concept from full-scale attacks in the wild. This will happen incrementally as computing power (GPUs) and deep learning algorithms become increasingly available to the wider public.

To best defend against an AI attack, you need to adopt the mindset and techniques of a malicious actor. Machine learning and deep learning experts need to be familiar with these techniques in order to build robust systems that will defend against them. 
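One standard way to build that robustness in is adversarial training: during training, each batch is perturbed in the direction that maximizes the loss, so the model learns to resist exactly the kind of manipulation described above. A toy sketch on synthetic data (all hyperparameters and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends only on the first feature.
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)

w, b, lr, eps = np.zeros(3), 0.0, 0.1, 0.3

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(300):
    # Adversarial training: perturb each sample toward higher loss
    # (an FGSM step on the inputs) before computing the gradient update.
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

# The hardened model still classifies the clean data well.
acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print(acc)
```

Production defenses combine this with model-hardening measures such as input sanitization, ensembling, and monitoring for probing behavior, but the mindset is the same: attack your own model before an adversary does.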

For more examples of how each of these types of AI attack has been discovered, click here to read the full article.

About the Author

Nadav Maman, Co-Founder & Chief Technical Officer, Deep Instinct

Nadav Maman brings 15 years of experience in customer-driven business and technical leadership to his role as co-founder and chief technical officer at Deep Instinct. He has a proven track record in managing complex technical cyber projects, including design, execution, and sales. He also has extensive hands-on experience with data security, network design, and the implementation of complex heterogeneous environments.

 
