Dark Reading is part of the Informa Tech Division of Informa PLC


Application Security

7/12/2021
10:00 AM
Oleg Brodt
Commentary

AI and Cybersecurity: Making Sense of the Confusion

Artificial intelligence is a maturing area in cybersecurity, but there are different concerns depending on whether you're a defender or an attacker.

The purpose of artificial intelligence (AI) is to create intelligent machines. It is used in multiple domains, including finance, manufacturing, logistics, retail, social media, healthcare, and increasingly, cybersecurity.


The current discourse about AI and cybersecurity often conflates distinct perspectives, as if the intersection of the two disciplines were monolithic and one-dimensional. We therefore need a common language that distinguishes the disparate ways AI and cybersecurity intersect. I see three parts to the discussion: AI in the hands of defenders, AI in the hands of attackers, and adversarial AI.

AI in the Hands of Defenders
Machine learning (ML) is a subfield of AI that teaches computers to perform tasks by learning from examples rather than being explicitly programmed. Unsurprisingly, ML and its popular subbranch of deep learning (aka neural networks) are emerging as the main methods of developing cyber-defense solutions: Instead of providing a detection mechanism with predefined malware signatures, we can provide a data set of malicious and benign files and let the computer learn from them.

In simpler terms, the machine learning algorithms analyze the differences and similarities between samples based on features such as file content and how the files interact with the operating system, and build a model of what malware typically looks like. Every new file is then compared against the model and, typically, classified as malicious or benign based on a probability score. Naturally, these probabilistic solutions are far from being perfect, both in failing to identify malicious behavior and in flagging benign behavior as malicious, leading to alert fatigue.
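The classification process described above can be sketched in a few lines. This is a minimal illustration, not a real detector: the features (entropy, imported API count, file size) and the data are synthetic stand-ins for the kind of attributes a production system would extract.

```python
# Minimal sketch of probability-based malware classification.
# Features and data are synthetic illustrations, not a real malware dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-file features: [entropy, imported API count, file size (KB)]
benign = rng.normal(loc=[4.5, 40, 300], scale=[0.5, 10, 80], size=(200, 3))
malicious = rng.normal(loc=[7.2, 120, 90], scale=[0.4, 20, 40], size=(200, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Every new file is scored against the learned model; the verdict is a
# probability, not a certainty -- the source of both misses and false alarms.
new_file = np.array([[7.0, 110, 100]])
p_malicious = model.predict_proba(new_file)[0, 1]
verdict = "malicious" if p_malicious > 0.5 else "benign"
print(f"P(malicious) = {p_malicious:.2f} -> {verdict}")
```

The 0.5 cutoff is the tunable knob: lowering it catches more malware at the cost of more false alarms, which is exactly the trade-off behind alert fatigue.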

According to the latest M-Trends report, it takes 24 days, on average, to discover that a network has been compromised. This is a significant improvement from the average of 416 days it once took blue teams to realize that an attacker was present in their network. Although we have made progress in defense, I suspect most of it can be attributed to the proliferation of ransomware attacks, in which attackers promptly expose themselves, driving detection time down.

Credit: freshidea via Adobe Stock

Since attackers remain fast and defenders remain slow, we have no choice but to delegate as many detection tasks to AI-based solutions as possible. Consequently, AI-based models are being integrated into a variety of security solutions, such as intrusion detection systems (IDS), endpoint detection and response (EDR), security information and event management (SIEM) alert prioritization, big data security analytics, and more. The main goal is to improve the performance of existing solutions, automate detection and investigation processes, and most importantly, increase detection speed by handing over tasks previously handled by human analysts.
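One such delegated task, alert triage, can be sketched with an unsupervised anomaly ranker. The alert fields and numbers below are invented for illustration; real SIEM prioritization draws on far richer features.

```python
# Hedged sketch: ranking SIEM-style alerts so analysts see the most
# anomalous events first. Data is synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-alert features: [events/min, distinct dest ports, bytes out (MB)]
routine = rng.normal(loc=[5, 3, 1], scale=[1, 1, 0.5], size=(300, 3))
suspicious = np.array([[90.0, 45.0, 250.0]])  # a burst that should stand out

alerts = np.vstack([routine, suspicious])
model = IsolationForest(random_state=0).fit(alerts)

# Lower score_samples() = more anomalous; sort ascending to triage first.
scores = model.score_samples(alerts)
triage_order = np.argsort(scores)
print("Most anomalous alert index:", triage_order[0])
```

The model does not replace the analyst; it reorders the queue so the scarce human attention lands on the outliers first, which is where the speed gain comes from.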

AI in the Hands of Attackers
While AI-based technologies can improve cyber defense by creating a new generation of intelligent detection systems, they can also be misused in the hands of cyberattackers. A recent paper by Bruce Schneier emphasizes this point.

AI is, and will increasingly be, employed by cyberattackers to lower their costs and improve the effectiveness and stealth of their attacks. In fact, attackers' use of AI is easier to justify economically: while the ROI of an AI-based cyber-defense system is difficult to measure, the financial benefit to an attacker is straightforward to calculate.

According to Verizon's "2021 Data Breach Investigations Report," financially motivated hacks continue to be the most common — a whopping 90% of all incidents. These attacks have become commoditized, and attackers run their operations like any other business, aiming to increase revenue and reduce costs. Since AI-based technologies can help with the latter, they will increasingly take hold within cybercrime groups.

We have already witnessed how AI can help with reconnaissance, including automated high-value target discovery and phishing; it can also help with intelligent software fuzzing, yielding faster discovery of vulnerable targets. We can also expect a steep rise in deepfake social engineering attacks powered by AI-based technologies, once the technology is mature enough.
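The fuzzing mentioned above is, at its core, a simple loop that AI can make smarter. Below is a bare mutation-based fuzzer against a contrived target function (both the target and its hidden crash are invented for illustration); the "intelligent" variants replace the random mutation step with a learned model of which inputs are promising.

```python
# Rough sketch of mutation-based fuzzing, the technique AI can guide.
# The target function and its crash condition are contrived stand-ins.
import random

def target(data: bytes) -> str:
    # Contrived parser with a crash hidden behind a specific byte value.
    if data[:2] == b"PK":
        if len(data) > 4 and data[4] == 0xFF:
            raise ValueError("parser crash")
        return "archive"
    return "unknown"

def mutate(seed: bytes) -> bytes:
    # Flip one random byte of the seed input.
    data = bytearray(seed)
    i = random.randrange(len(data))
    data[i] = random.randrange(256)
    return bytes(data)

random.seed(0)
corpus = [b"PK\x03\x04\x00\x00"]
crashes = []
for _ in range(20000):
    candidate = mutate(random.choice(corpus))
    try:
        target(candidate)
    except ValueError:
        crashes.append(candidate)

print(f"{len(crashes)} crashing inputs found")
```

A blind fuzzer like this wastes most of its budget on dead inputs; steering the mutations — with coverage feedback or a learned input model — is precisely where attackers gain the speedup.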

In fact, attackers can analyze every stage of the cyber kill chain and explore integrating dedicated AI-based tools into each one. While defenders' ultimate goal would be complete automation of cyber defense, for attackers, it would be complete automation of attacks.

Adversarial AI
Just like any other technology, AI can itself be vulnerable, leading to additional avenues of exploitation and a new class of cyberattacks.

We have already seen how AI-based anti-spam solutions can be fooled by a single misspelling in an email. We have also witnessed that AI-based image-recognition systems can be fooled by a single pixel change. In fact, research suggests that AI-based systems can be fooled across the board, and the more sophisticated the solution, the easier it is to successfully attack it.
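The mechanics behind such evasions can be shown on a toy linear "detector": a small, targeted nudge to the input flips the model's verdict, analogous to the misspelling or single-pixel change above. The model and data here are synthetic illustrations, and the perturbation uses the sign-of-gradient idea (for a linear model, stepping against the weight vector).

```python
# Sketch of adversarial evasion against a toy linear classifier.
# Data and model are synthetic; the attack is a sign-of-gradient step.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, size=(100, 2)),   # "benign" cluster
               rng.normal(2, 1, size=(100, 2))])   # "malicious" cluster
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)

sample = np.array([[0.8, 0.8]])        # flagged as malicious (class 1)
before = clf.predict(sample)[0]

# Step against the model's weight vector to lower the malicious score
# while changing the input only slightly.
w = clf.coef_[0]
epsilon = 1.5
evasion = sample - epsilon * np.sign(w)
after = clf.predict(evasion)[0]
print(f"verdict before: {before}, after perturbation: {after}")
```

The perturbation is small relative to the data's spread, yet the verdict flips — the essence of why probabilistic detectors need adversarial robustness, not just accuracy.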

Most alarming, however, is that AI-based cyber defenses are similarly vulnerable. The same attack techniques that work against other AI-based systems can be applied against AI-based malware detectors, intrusion-detection systems, and other security tools. Academic research has already demonstrated how easily most such systems can be bypassed.

In the coming years, I expect we will witness a wave of attacks against AI-based systems. Currently, however, most chief information security officers (CISOs) are not paying enough attention to the security of AI-based systems. This must change before we realize — yet again — that we have delegated our most sensitive tasks to our most vulnerable systems.

Oleg serves as the R&D Director of Deutsche Telekom Innovation Labs, Israel. He also serves as the Chief Innovation Officer for Cyber@BGU, an umbrella organization responsible for cybersecurity-related research at Ben-Gurion University, Israel.