
Are AI-Engineered Threats FUD or Reality?

The rise of generative AI is creating new ways to both attack and defend assets. Which threats are solid and which are vapor?

John Dwyer

July 24, 2023


The moment generative AI applications hit the market, they changed the pace of business — not only for security teams, but for cybercriminals too. Today, failing to embrace innovations in artificial intelligence (AI) can mean falling behind your competitors and putting your cyber defense at a disadvantage against cyberattacks powered by AI. But when discussing how AI will or won't impact cybercrime, it's important that we look at things through a pragmatic and sober lens — not feeding into hype that reads more like science fiction.

Today's AI advancements and maturity signal a significant leap forward for enterprise security. Cybercriminals can't easily match the size and scale of enterprises' resources, skills, and motivation, making it harder for them to keep up with the current speed of AI innovation. Private venture investment in AI exploded to $93.5 billion in 2021; the bad guys don't have that level of capital. They also don't have the manpower, computing power, and innovations that afford commercial companies or governments more time and opportunity to fail quickly, learn fast, and get it right first.

Make no mistake, though: Cybercrime will catch up. This is not the first time the security industry has had a brief edge. When ransomware started driving more defenders to adopt endpoint detection and response (EDR) technologies, attackers needed some time to figure out how to circumvent and evade those detections. That interim "grace period" gave businesses time to better shield themselves. The same applies now: Businesses need to capitalize on their lead in the AI race, advancing their threat detection and response capabilities and leveraging the speed and precision that current AI innovations afford them.

So how is AI changing cybercrime? Well, it won't change it substantially anytime soon, but it will scale it in certain instances. Let's take a look at where malicious use of AI will and won't make the most immediate impact.

Fully Automated Malware Campaigns: FUD

In recent months, we've seen claims regarding various malicious use cases of AI, but just because a scenario is possible does not make it probable. Take fully automated malware campaigns, for example. Logic says that it is possible to leverage AI to achieve that outcome, but given that leading tech companies have yet to pioneer fully automated software development cycles, it's unlikely that financially constrained cybercrime groups will achieve this sooner. Even partial automation can enable the scaling of cybercrime, however — a tactic we've already seen used in Bazar campaigns. This is not an innovation but a tried-and-true technique that defenders are already taking on.

AI-Engineered Phishing: Reality (But Context Is Key)

Another use case to consider is AI-engineered phishing attacks. Not only is this one possible, but we're already beginning to see these attacks in the wild. This next generation of phishing may achieve higher levels of persuasiveness and click rate, but a human-engineered phish and AI-engineered phish still drive toward the same goal. In other words, an AI-engineered phish is still a phish searching for a click, and it requires the same detection and response readiness.

However, while the problem remains the same, the scale is vastly different. AI acts as a force multiplier to scale phishing campaigns, so if an enterprise is seeing a spike in inbound phishing emails — and those malicious emails are significantly more persuasive — then it's likely looking at a high click-rate probability and potential for compromise. AI models can also increase targeting efficacy, helping attackers determine who is the most susceptible target for a specific phish within an organization and ultimately reaching a higher ROI from their campaigns. Phishing attacks have historically been among the most successful tactics that attackers have used to infiltrate enterprises. The scaling of this type of attack emphasizes the critical role that EDR, managed detection and response (MDR), extended detection and response (XDR), and identity and access management (IAM) technologies play in detecting anomalous behavior before it achieves impact.
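To make the "spike" signal concrete, here is an illustrative sketch — not any vendor's actual detection logic — of how a surge in inbound phishing reports might be flagged with a simple rolling z-score. The function name, counts, and threshold are all hypothetical:

```python
# Hypothetical sketch: flag an anomalous spike in daily phishing-report
# counts using a z-score against the preceding baseline window.
# All numbers are illustrative, not real telemetry.
from statistics import mean, stdev

def spike_alert(daily_counts, threshold=3.0):
    """Return True if the latest day's count is a statistical outlier
    relative to the baseline formed by all prior days."""
    *baseline, today = daily_counts
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:  # perfectly flat baseline: any increase is notable
        return today > mu
    z = (today - mu) / sigma
    return z > threshold

# A quiet baseline of ~20 phishing reports/day, then a sudden surge
history = [19, 22, 20, 18, 21, 20, 23, 19, 95]
print(spike_alert(history))  # True: 95 is far above the baseline
```

In practice, detection platforms use far richer signals (sender reputation, content analysis, user-report correlation), but the underlying idea is the same: AI-scaled campaigns change the volume baseline, and volume anomalies are an early tripwire.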

AI Poisoning Attacks: FUD-ish

AI poisoning attacks — in other words, programmatically manipulating the code and data on which AI models are built — may be the "holy grail" of attacks for cybercriminals. The impact of a successful poisoning attack could range anywhere from misinformation attempts to Die Hard 4.0. Why? Because by poisoning the model, an attacker can make it behave or function in whatever way they want, and it's not easily detectable. However, these attacks aren't easy to carry out. They require access to the model's training data at training time, which is no small feat. As more models become open source, the risk of these attacks will increase, but it will remain low for the time being.
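To make the poisoning mechanism concrete, here is a deliberately toy illustration — not a real-world attack technique: relabeling a few training samples for a simple nearest-centroid classifier shifts the learned class centroids, so an input the clean model flags as malicious is classified as benign by the poisoned one. The classifier, data points, and labels are all invented for this sketch:

```python
# Toy illustration of training-data poisoning via label flipping.
# A nearest-centroid classifier learns one mean point per class;
# flipping a few training labels shifts those means and steers behavior.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):  # data: list of (features, label) pairs
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Two well-separated classes of (entirely made-up) feature vectors
clean = [((0.0, 0.0), "benign"), ((1.0, 0.0), "benign"), ((0.0, 1.0), "benign"),
         ((9.0, 9.0), "malicious"), ((10.0, 9.0), "malicious"), ((9.0, 10.0), "malicious")]
clean_model = train(clean)

# Attacker with training-time access relabels two malicious samples as benign
poisoned = [(x, "benign" if i in (3, 4) else y) for i, (x, y) in enumerate(clean)]
poisoned_model = train(poisoned)

suspect = (6.0, 6.0)  # an input near the malicious cluster
print(predict(clean_model, suspect))     # "malicious"
print(predict(poisoned_model, suspect))  # "benign" — the poisoning worked
```

Real models and training pipelines are vastly more complex, but the core risk is the same: control over training data translates into control over model behavior, and nothing in the poisoned model's outputs advertises the tampering.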

The Unknown

While it's important to separate hype from reality, it's also important to ensure we're asking the right questions about AI's impact on the threat landscape. There are many unknowns regarding AI's potential, and how it may change adversaries' goals and objectives is one we mustn't overlook. New capabilities may serve new purposes for adversaries and recalibrate their motives in ways we can't yet predict.

We may not see an immediate spike in novel AI-enabled attacks, but the scaling of cybercrime thanks to AI will have a substantial impact on organizations that aren't prepared. Speed and scale are intrinsic characteristics of AI, and just as defenders are seeking to benefit from them, so are attackers. Security teams are already understaffed and overwhelmed; a spike in malicious traffic or incident response engagements adds substantial weight to their workload.

This reaffirms more than ever the need for enterprises to invest in their defenses, using AI to drive speed and precision in their threat detection and response capabilities. Enterprises that take advantage of this "grace period" will find themselves much more prepared and resilient for the day attackers actually do catch up in the AI cyber race.

About the Author(s)

John Dwyer

Head of Research, IBM Security X-Force

John Dwyer is the head of research for IBM Security X-Force, where he leads a team of security researchers focused on the areas of adversary trend analysis, threat hunting, detection engineering, incident response technology, and integrating partner technologies into X-Force's ecosystem.

As a researcher within X-Force, Dwyer focused his efforts on tracking and modeling adversary operations to develop immersive simulation exercises to help drive improvements in the areas of incident response, threat hunting, and detection engineering. Prior to joining X-Force, he was a defensive cyber operations researcher working with the US Army and US Air Force to develop incident response capabilities.

Dwyer has spoken at multiple events, including Black Hat, SANS Threat Hunting Summits, ISC2 Security Congress, and Fulbright Commission Cybersecurity Exchange.

