Dark Reading is part of the Informa Tech Division of Informa PLC


Application Security
7/12/2021 10:00 AM
Oleg Brodt
Commentary

AI and Cybersecurity: Making Sense of the Confusion

Artificial intelligence is a maturing area in cybersecurity, but there are different concerns depending on whether you're a defender or an attacker.

The purpose of artificial intelligence (AI) is to create intelligent machines. It is used in multiple domains, including finance, manufacturing, logistics, retail, social media, healthcare, and increasingly, cybersecurity.


The current discourse about AI and cybersecurity often conflates different perspectives, as if the intersection of the two disciplines were monolithic and one-dimensional. We therefore need a common language that distinguishes the disparate ways AI and cybersecurity intersect. I see three parts to the discussion: AI in the hands of defenders, AI in the hands of attackers, and adversarial AI.

AI in the Hands of Defenders
Machine learning (ML) is a subfield of AI that teaches computers to perform tasks by learning from examples rather than being explicitly programmed. Unsurprisingly, ML and its popular subbranch of deep learning (aka neural networks) are emerging as the main methods of developing cyber-defense solutions: Instead of providing a detection mechanism with predefined malware signatures, we can provide a data set of malicious and benign files and let the computer learn from them.

In simpler terms, machine learning algorithms analyze the differences and similarities between samples based on attributes such as their content and how they interact with the operating system, and build a model of what malware files typically look like. Every new file is then compared against the model and classified as malicious or benign, typically with an associated probability. Naturally, these probabilistic solutions are far from perfect: they both fail to identify some malicious behavior and flag benign behavior as malicious, leading to alert fatigue.
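The idea of learning a detector from labeled examples, rather than hand-writing signatures, can be sketched with a toy logistic-regression classifier. The two features here (file entropy and a count of suspicious API calls) and the training data are invented for illustration; real detectors use thousands of static and behavioral features and far larger data sets.

```python
# Toy sketch: learning a malware classifier from labeled examples.
# Features, weights, and data are invented for illustration only.
import math

# Each sample: (file_entropy, num_suspicious_api_calls, label)
# label 1 = malicious, 0 = benign
training_data = [
    (7.8, 12, 1), (7.5, 9, 1), (7.9, 15, 1),   # packed/obfuscated files
    (4.2, 1, 0), (5.0, 0, 0), (4.8, 2, 0),     # ordinary executables
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression trained with plain gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.05
for _ in range(2000):
    for entropy, api_calls, label in training_data:
        p = sigmoid(w[0] * entropy + w[1] * api_calls + b)
        err = p - label
        w[0] -= lr * err * entropy
        w[1] -= lr * err * api_calls
        b -= lr * err

def classify(entropy, api_calls, threshold=0.5):
    """Return the model's probability the file is malicious, plus a verdict."""
    p = sigmoid(w[0] * entropy + w[1] * api_calls + b)
    return p, ("malicious" if p >= threshold else "benign")

print(classify(7.7, 11))   # resembles the malicious training samples
print(classify(4.5, 1))    # resembles the benign training samples
```

Note that the output is a probability, not a certainty: the threshold trades false negatives (missed malware) against false positives (the alert fatigue mentioned above).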

According to the latest M-Trends report, it takes 24 days, on average, to discover that a network has been compromised. This is a significant improvement over the average of 416 days it once took blue teams to realize that an attacker was present in their network. Although we have made progress in defense, I suspect much of it can be attributed to the proliferation of ransomware attacks, in which attackers promptly expose themselves, driving detection time down.

Credit: freshidea via Adobe Stock

Since attackers remain fast and defenders remain slow, we have no choice but to delegate as many detection tasks to AI-based solutions as possible. Consequently, AI-based models are being integrated into a variety of security solutions, such as intrusion detection systems (IDS), endpoint detection and response (EDR), security information and event management (SIEM) alert prioritization, big data security analytics, and more. The main goal is to improve the performance of existing solutions, automate detection and investigation processes, and most importantly, increase detection speed by handing over tasks previously handled by human analysts.

AI in the Hands of Attackers
While AI-based technologies can improve cyber defense by creating a new generation of intelligent detection systems, they can also be misused in the hands of cyberattackers. A recent paper by Bruce Schneier emphasizes this point.

AI is, and will increasingly be, employed by cyberattackers to lower their costs and improve the effectiveness and stealth of their attacks. In fact, it is easier to justify attackers' use of AI from an economic point of view: while it is difficult to measure the ROI of an AI-based cyber-defense system, it is straightforward to measure the financial benefits for the attacker.

According to Verizon's "2021 Data Breach Investigations Report," financially motivated hacks remain the most common, accounting for a whopping 90% of all incidents. These attacks have become commoditized, and attackers run their operations like any other business, aiming to increase revenue and reduce costs. Since AI-based technologies can help with the latter, they will increasingly take hold within cybercrime groups.

We have already witnessed how AI can help with reconnaissance, including automated high-value target discovery and phishing; it can also power intelligent software fuzzing, yielding faster discovery of vulnerable targets. And once the technology matures, we can expect a steep rise in AI-powered deepfake social engineering attacks.

In fact, attackers can analyze every stage of the cyber kill chain and explore integrating dedicated AI-based tools into each one. While defenders' ultimate goal would be complete automation of cyber defense, for attackers, it would be complete automation of attacks.

Adversarial AI
Just like any other technology, AI can itself be vulnerable, leading to additional avenues of exploitation and a new class of cyberattacks.

We have already seen how AI-based anti-spam solutions can be fooled by a single misspelling in an email. We have also witnessed that AI-based image-recognition systems can be fooled by a single pixel change. In fact, research suggests that AI-based systems can be fooled across the board, and the more sophisticated the solution, the easier it is to successfully attack it.
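The misspelling trick can be illustrated with a toy bag-of-words spam filter. The word list, weights, and threshold below are invented for illustration; the underlying weakness is real, though: tokens outside the model's vocabulary contribute nothing to the score, so a single character change can push a message below the detection threshold.

```python
# Toy sketch: evading a bag-of-words spam filter with one misspelling.
# The vocabulary, weights, and threshold are invented for illustration.

SPAM_WEIGHTS = {      # positive weight = spam indicator
    "free": 2.0,
    "winner": 2.5,
    "prize": 1.5,
    "claim": 1.0,
}
THRESHOLD = 3.0

def spam_score(text):
    # Unknown tokens contribute 0.0 -- the root of the weakness.
    return sum(SPAM_WEIGHTS.get(tok, 0.0) for tok in text.lower().split())

def is_spam(text):
    return spam_score(text) >= THRESHOLD

original = "claim your free prize today"   # score 4.5 -> flagged as spam
evasive  = "claim your fr3e prize today"   # one character changed: score 2.5 -> slips through

print(spam_score(original), is_spam(original))
print(spam_score(evasive), is_spam(evasive))
```

Real spam filters are more robust than this sketch, but the same principle, perturbing the input just enough to fall outside what the model learned, underlies evasion attacks against far more sophisticated classifiers.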

Most alarming, however, is that AI-based cyber defenses are similarly vulnerable. The same attack techniques that work against other AI-based systems can be applied against AI-based malware detectors, intrusion-detection systems, and other security tools. Academic research has already demonstrated how easily most such systems can be bypassed.

In the coming years, I expect we will witness a wave of attacks against AI-based systems. Currently, however, most chief information security officers (CISOs) are not paying enough attention to the security of AI-based systems. This must change before we realize, yet again, that we have delegated our most sensitive tasks to the most vulnerable systems.

Oleg serves as the R&D Director of Deutsche Telekom Innovation Labs, Israel. He also serves as the Chief Innovation Officer for [email protected] University, an umbrella organization responsible for cybersecurity-related research at Ben Gurion University, Israel.
 
