Dark Reading is part of the Informa Tech Division of Informa PLC


Erik Zouave
Malicious Use of AI Poses a Real Cybersecurity Threat

We should prepare for a future in which artificially intelligent cyberattacks become more common.

Could the same automated technologies cybersecurity professionals are increasingly using to protect their enterprises also fuel attacks against them? Research suggests they could, according to a report my colleague Marc Bruce and I recently completed for the Swedish Defence Research Agency.

The use of artificial intelligence (AI) tools to analyze data and predict outcomes has been a boon for many industries, including the cybersecurity and defense industries. More and more, antivirus and cyberthreat intelligence systems are using machine learning to become more efficient. For example, both the US Defense Advanced Research Projects Agency (DARPA) and the European Defence Agency (EDA) are seeking to integrate AI technologies into their cyberdefense response capabilities.

However, AI could be a double-edged sword, with some of the industry's most prominent thinkers warning of AI-supported cyberattacks. In this game of cat and mouse, foreseeing how AI might be used in malicious cyberattacks — and understanding its future potential — will better prepare and equip responders. 

This article summarizes some of the important takeaways from our report, "Artificially Intelligent Cyberattacks."

How Did We Get Here?
The vast and varied potential of AI misuse came into focus in a landmark report by several research institutions in 2018. The report showed the general potential for digital, physical, and social malicious uses of AI.

However, it had already been established that AI could play a prominent role in cyberattacks. The first AI-supported cyberattack, recorded in 2007, came courtesy of a dating chatbot aptly dubbed CyberLover, described as displaying an "unprecedented level of social engineering." The bot relied on natural language processing (NLP) to profile targets and generate customized chat responses containing fraudulent hyperlinks, becoming notorious for its personal data thefts. It was estimated that CyberLover could establish a new relationship every three minutes.

Fast forward to 2016: DARPA organized the Cyber Grand Challenge, in which machines, not humans, were the main contestants. During the contest, AI-supported solutions were used to detect, exploit, and patch vulnerabilities. It is noteworthy that the challenge attracted contestants not only from research institutions but also from the defense industry.

More recently, amid increasing concerns about future AI misuse, the United Nations Institute for Disarmament Research reported on the normative and legal aspects of AI in cyber operations, reaffirming government responsibility to enact policies about the use and misuse of new technology. Concomitantly, cybersecurity firms Darktrace and IBM began looking into specific technical use cases for AI in cyberattacks.

Malicious Uses of AI in Cyberattacks
With that backdrop, it is vital to ground our response to AI misuse in what the research actually shows. Based on our extensive review of mainly experimental, peer-reviewed AI prototypes, AI's data aggregation capabilities appear to be top of mind for cyberattackers who want to leverage the technology to inform their attack plans. In the short term, the strongest case for this lies in the initial reconnaissance stage of cyberattacks. Across a multitude of applications, AI technologies have been shown to be supremely effective at data analysis. AI cyberthreat intelligence solutions are already available, including IBM's Watson for Cyber Security and offerings from Cylance and CrowdStrike. Hence, we can expect that AI-supported antagonists have the ability to efficiently generate intelligence on threat mitigation trends, profile targets, and generate libraries of (known) vulnerabilities at scale.
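Even without machine learning, the aggregation-and-profiling step described above is easy to automate; AI-supported tooling would replace the simple lookup below with learned models. This is a minimal sketch under invented assumptions: the hosts, service banners, and vulnerable-version list are all illustrative and not drawn from the report.

```python
# Sketch: automated target profiling from reconnaissance data.
# All hosts, banners, and the "known vulnerable" list are invented examples.
from collections import Counter

scan_results = {
    "10.0.0.5": ["OpenSSH_7.2", "Apache/2.4.49"],
    "10.0.0.8": ["OpenSSH_8.9", "nginx/1.25.3"],
    "10.0.0.9": ["Apache/2.4.49", "ProFTPD 1.3.5"],
}
known_vulnerable = {"Apache/2.4.49", "ProFTPD 1.3.5", "OpenSSH_7.2"}

def rank_targets(results):
    """Rank hosts by how many known-vulnerable service banners they expose."""
    scores = Counter()
    for host, banners in results.items():
        scores[host] = sum(b in known_vulnerable for b in banners)
    return scores.most_common()

ranking = rank_targets(scan_results)
```

The point is scale: a script like this can triage thousands of hosts in seconds, and an attacker pairing it with a model trained on vulnerability data could prioritize targets far more selectively than manual review allows.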

Another malicious capability to watch is the efficiency of AI in conducting repetitive tasks. As seen in the Ticketmaster incident dating back to 2010, AI tools to defeat Captchas are readily available, and the experimental research on Captcha-defeating is likewise well established. However, repetitive tasks — such as password-guessing, brute-forcing and stealing, as well as automating exploit generation — should also be considered promising ground where prototypical testing might mature into more advanced solutions. For example, some experiments, such as password brute-forcing and password-stealing, have displayed success rates of over 50% and 90%, respectively.
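To see why learned password-guessing outperforms blind brute force, consider a toy character-level Markov model trained on leaked passwords: it samples guesses that statistically resemble what humans actually choose. This is a hedged sketch only; the training list is invented, and real research prototypes use far richer models (e.g., neural networks) than this.

```python
# Sketch: a character-level Markov model for statistics-driven password guessing.
# The "leaked" training list and parameters are illustrative, not from the report.
import random
from collections import defaultdict

def train(passwords):
    """Count character-to-character transitions seen in leaked passwords."""
    model = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        chars = ["^"] + list(pw) + ["$"]  # start/end-of-password markers
        for a, b in zip(chars, chars[1:]):
            model[a][b] += 1
    return model

def generate(model, rng, max_len=12):
    """Sample a candidate password, weighting each step by observed transitions."""
    out, cur = [], "^"
    while len(out) < max_len:
        nxt = rng.choices(list(model[cur]), weights=list(model[cur].values()))[0]
        if nxt == "$":  # model decided the password ends here
            break
        out.append(nxt)
        cur = nxt
    return "".join(out)

leaked = ["password1", "passw0rd", "letmein", "password123"]
model = train(leaked)
rng = random.Random(0)  # seeded for reproducibility
guesses = [generate(model, rng) for _ in range(5)]
```

Because candidates are drawn from the distribution of real human choices rather than the full keyspace, each guess is far more likely to hit than a random string, which is how experimental systems reach the high success rates cited above.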

Finally, deception and manipulation seem likely areas of capability development for AI. The case for increasingly sophisticated AI-supported phishing may seem clear-cut when looking at a relatively old case such as CyberLover. In reality, research about AI-supported phishing, along with AI tools to bypass phishing detection, has produced mixed findings. However, this does not negate AI's potential to statistically surpass the efficiency of human social-engineering attempts.

Already, AI-supported attacks have allegedly begun to mimic patterns of normal behavior on target networks, making them harder to detect. While network behavior analysis technology is already in use for security, research indicates this technology also could be twisted for malicious ends. Furthermore, an emergent research domain concerns the ability to attack different types of classifiers that identify patterns in data, such as spam filters that identify spam email. With new uses for NLP and other AI classifiers on the horizon, security concerns become more diverse. 

Looking Ahead
We should prepare for a future in which artificially intelligent cyberattacks become more common. As mentioned, AI's advanced data aggregation capabilities could help malicious actors make more informed choices. While these capabilities may not necessarily push a shift toward more sophisticated attacks, the potential to increase the scale of malicious activities should be of concern. Even the automation of simple attacks could aggravate trends such as data theft and fraud.

Long term, we should not discount the possibility that developments in deception and manipulation capabilities might increase and diversify the sophistication of attacks. Even experts developing AI are beginning to worry about its potential for deception, and those concerns span the fields of AI text, image, and audio generation. In a future of physical systems implementing AI, such as smart cars, an AI arms race between defenders and attackers may ultimately affect the risk of physical harm.

While researchers are putting their minds to the task of securing the cyber domain with AI, one of the worrying conclusions from our research is that these efforts might not be enough to protect against the malicious use of AI. However, by anticipating what to expect from the misuse of AI, we can now begin to prepare and tailor diverse and more comprehensive countermeasures.

Marc Bruce, a research analyst intern at the Swedish Defence Research Agency, co-authored this article.


Erik Zouave is an analyst with the Swedish Defence Research Agency FOI, where he researches legal aspects of technology and security. He has been a Research Fellow with the Citizen Lab, Munk School of Global Affairs, at the University of Toronto, a Google Policy Fellow, and ...
