Dark Reading is part of the Informa Tech Division of Informa PLC



Oleg Brodt

AI and Cybersecurity: Making Sense of the Confusion

Artificial intelligence is a maturing area in cybersecurity, but there are different concerns depending on whether you're a defender or an attacker.

The purpose of artificial intelligence (AI) is to create intelligent machines. It is used in multiple domains, including finance, manufacturing, logistics, retail, social media, healthcare, and increasingly, cybersecurity.


The current discourse about AI and cybersecurity often confuses the different perspectives, as if the intersection of disciplines is monolithic and one-dimensional. Therefore, we need a common language for discussing the various and disparate intersections of AI and cybersecurity that clarifies the differences. I see three parts to the discussion: AI in the hands of defenders, AI in the hands of attackers, and adversarial AI.

AI in the Hands of Defenders
Machine learning (ML) is a subfield of AI that teaches computers to perform tasks by learning from examples rather than being explicitly programmed. Unsurprisingly, ML and its popular subbranch of deep learning (aka neural networks) are emerging as the main methods of developing cyber-defense solutions: Instead of providing a detection mechanism with predefined malware signatures, we can provide a data set of malicious and benign files and let the computer learn from them.

In simpler terms, machine learning algorithms analyze the differences and similarities between samples based on attributes such as their content and how they interact with the operating system, and build a model of what malware files typically look like. Every new file is then compared against the model and (typically) classified as malicious or benign based on probability. Naturally, these probabilistic solutions are far from perfect, both in failing to identify malicious behavior and in flagging benign behavior as malicious, leading to alert fatigue.
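As a toy illustration of the idea above, the sketch below learns which behavioral features appear more often in malicious training files than in benign ones, then scores a new file by how many of its features look suspicious. The feature names, sample data, and 0.5 threshold are all hypothetical; a real detector would use a trained statistical model rather than this crude frequency comparison.

```python
# Minimal sketch: learning a probabilistic malware model from labeled
# examples instead of hand-written signatures. All feature names and
# the toy data set are hypothetical.
from collections import Counter

def train(samples):
    """samples: list of (features, label), where features is a set of
    observed behaviors and label is 'malicious' or 'benign'."""
    counts = {"malicious": Counter(), "benign": Counter()}
    totals = {"malicious": 0, "benign": 0}
    for features, label in samples:
        totals[label] += 1
        counts[label].update(features)
    return counts, totals

def score(features, counts, totals):
    """Fraction of this file's features seen more often in malware
    than in benign training files -- a crude stand-in for a real
    classifier's probability output."""
    if not features:
        return 0.0
    suspicious = sum(
        1 for f in features
        if counts["malicious"][f] / max(totals["malicious"], 1)
        > counts["benign"][f] / max(totals["benign"], 1)
    )
    return suspicious / len(features)

training = [
    ({"writes_registry", "encrypts_files", "deletes_backups"}, "malicious"),
    ({"encrypts_files", "contacts_unknown_host"}, "malicious"),
    ({"writes_registry", "reads_config"}, "benign"),
    ({"reads_config", "draws_window"}, "benign"),
]
counts, totals = train(training)

new_file = {"encrypts_files", "deletes_backups", "reads_config"}
p = score(new_file, counts, totals)                 # 2 of 3 features look malicious
verdict = "malicious" if p > 0.5 else "benign"
```

Note that the false positives and false negatives mentioned above fall directly out of this design: a benign backup tool that encrypts files would score as suspicious, while novel malware exhibiting only unseen behaviors would score as clean.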

According to the latest M-Trends report, it takes 24 days, on average, to discover that a network has been compromised. This is a significant improvement over the 416 days it once took blue teams, on average, to realize that an attacker was present in their network. Although we have made progress in defense, I suspect most of it can be attributed to the proliferation of ransomware attacks, where the attackers promptly expose themselves, thereby driving detection time down.

Credit: freshidea via Adobe Stock

Since attackers remain fast and defenders remain slow, we have no choice but to delegate as many detection tasks to AI-based solutions as possible. Consequently, AI-based models are being integrated into a variety of security solutions, such as intrusion detection systems (IDS), endpoint detection and response (EDR), security information and event management (SIEM) alert prioritization, big data security analytics, and more. The main goal is to improve the performance of existing solutions, automate detection and investigation processes, and most importantly, increase detection speed by handing over tasks previously handled by human analysts.
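One of the simplest forms of this delegation is alert prioritization: letting a model rank the SIEM queue so analysts start with the riskiest events. The sketch below is a hand-rolled stand-in for that idea; the alert fields and weights are illustrative assumptions, not any real product's API.

```python
# Hypothetical sketch: collapsing several alert signals into one
# priority score so the triage queue is sorted by risk, not arrival
# order. Weights here are invented for illustration.
ALERT_WEIGHTS = {
    "failed_logins": 0.2,       # per failed attempt
    "new_admin_account": 0.5,   # per account created
    "outbound_data_mb": 0.001,  # per megabyte exfiltrated
}

def risk_score(alert):
    """Weighted sum of the signals attached to an alert."""
    return sum(ALERT_WEIGHTS.get(k, 0.0) * v
               for k, v in alert["signals"].items())

alerts = [
    {"id": "a1", "signals": {"failed_logins": 3}},
    {"id": "a2", "signals": {"new_admin_account": 1, "outbound_data_mb": 900}},
    {"id": "a3", "signals": {"failed_logins": 50}},
]

# Analysts work the queue from the highest score down.
triage_queue = sorted(alerts, key=risk_score, reverse=True)
top = triage_queue[0]["id"]
```

In production, the hand-tuned weights would be replaced by a model trained on historical analyst verdicts, but the pipeline shape (score, sort, hand the top of the queue to a human) is the same.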

AI in the Hands of Attackers
While AI-based technologies can improve cyber defense by creating a new generation of intelligent detection systems, they can also be misused in the hands of cyberattackers. A recent paper by Bruce Schneier emphasizes this point.

AI is, and will increasingly be, employed by cyberattackers to lower their costs and improve the effectiveness and stealth of their attacks. In fact, it is easier to justify attackers' use of AI from an economic point of view. While it is quite difficult to measure the ROI of an AI-based cyber-defense system, it is quite straightforward to measure the financial benefits for the attacker.

According to Verizon's "2021 Data Breach Investigations Report," financially motivated hacks continue to be the most common, accounting for a whopping 90% of all incidents. These attacks have become commoditized, and attackers run their operations just like any other business, where the goal is to increase revenues and reduce costs. Since AI-based technologies can help with the latter, they will increasingly take hold within cybercrime groups.

We have already witnessed how AI can help with reconnaissance, including automated high-value target discovery and phishing; it can also help with intelligent software fuzzing, yielding faster discovery of vulnerable targets. We can also expect a steep rise in deepfake social engineering attacks powered by AI-based technologies, once the technology is mature enough.

In fact, attackers can analyze every stage of the cyber kill chain and explore integrating dedicated AI-based tools into each one. While defenders' ultimate goal would be complete automation of cyber defense, for attackers, it would be complete automation of attacks.

Adversarial AI
Just like any other technology, AI can itself be vulnerable, leading to additional avenues of exploitation and a new class of cyberattacks.

We have already seen how AI-based anti-spam solutions can be fooled by a single misspelling in an email. We have also witnessed that AI-based image-recognition systems can be fooled by a single pixel change. In fact, research suggests that AI-based systems can be fooled across the board, and the more sophisticated the solution, the easier it is to successfully attack it.
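The anti-spam evasion described above can be reduced to a few lines. In this toy sketch, a keyword-weighted spam score (hypothetical weights and threshold) drops below the decision boundary after a one-character misspelling that a human reader barely notices.

```python
# Toy illustration of misspelling-based evasion: the classifier's
# score depends on exact token matches, so perturbing one character
# removes a heavily weighted feature. Weights and threshold are
# invented for illustration.
SPAM_WEIGHTS = {"winner": 0.4, "prize": 0.3, "claim": 0.4}
THRESHOLD = 1.0

def spam_score(text):
    words = text.lower().split()
    return sum(SPAM_WEIGHTS.get(w, 0.0) for w in words)

original = "winner claim your prize now"
evasive  = "w1nner claim your prize now"   # single-character perturbation

flag_original = spam_score(original) >= THRESHOLD  # score 1.1 -> flagged
flag_evasive  = spam_score(evasive) >= THRESHOLD   # score 0.7 -> slips through
```

Modern spam filters are far more robust than a keyword table, but adversarial attacks on neural networks exploit the same principle: small, targeted input changes that move a sample across the model's decision boundary while leaving its meaning intact.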

Most alarming, however, is that AI-based cyber defenses are similarly vulnerable. The same attack techniques that work against other AI-based systems can be applied against AI-based malware detectors, intrusion-detection systems, and other security tools. Academic research has already demonstrated how easily most such systems can be bypassed.

In the coming years, I expect we will witness a wave of attacks against AI-based systems. Currently, however, most chief information security officers (CISOs) are not paying enough attention to the security of AI-based systems. This must change before we realize — yet again — that we have delegated our most sensitive tasks to the most vulnerable systems.

Oleg serves as the R&D Director of Deutsche Telekom Innovation Labs, Israel. He also serves as the Chief Innovation Officer for [email protected] University, an umbrella organization responsible for cybersecurity-related research at Ben Gurion University, Israel. Prior to ...