
Many cybersecurity products claim to incorporate AI in some way, but in some cases, those claims aren't completely accurate.

Pam Baker, Contributing Writer

February 16, 2022

Photo of two toy robot heads, one silver and one red, sitting on a table. Source: Charles Taylor via Adobe Stock

Editor's Note: This piece is part two of a two-part point/counterpoint series looking at the role AI plays in cybersecurity. Part 1 looks at the practical uses of AI in cybersecurity. This article examines the challenges of distinguishing between AI technology and AI marketing.

Artificial intelligence (AI) burst onto the scene like a superhero trying out his first cape, accompanied by a loud and sustained PR ruckus. But the cybersecurity crowd's oohs and ahs soon began to fade into ums and uhs as the scoreboard ticked off more fails than wins for AI.

Don't expect any sympathy from the spectator seats. Industry observers are pulling back their cheers for the beleaguered tech star. For example, Gartner describes the state of cyber AI as immature and advises security analysts to "treat AI offerings as experimental, complementary controls."

My, what a long way we've come from fearing AI as mankind's overlord or a formidable adversary in the upcoming AI wars. To hear some folks tell it, AI is hardly fit to be the mastermind behind a Roomba.

But there are other perspectives to consider, too. AI has its place in cybersecurity, and there are plenty of examples of AI technology successfully applied, as Part 1 shows.

"Some would argue that those spectacular failings are proof that the AI engines are working as expected," says Aviram Jenik, co-founder and CEO of Beyond Security (acquired by HelpSystems). "The premise of AI is to create a more developed level of logic, and the fact that we can't immediately understand why an AI engine fails means it is already 'smarter' than the average human."

Maybe so, but ouch: "AI failed because it's smarter than us" is a hard excuse to choke down.

When the Term Gets Misused
To some degree, cyber AI suffers from the pressures exerted by the quest for never-ending sales growth. 

"The tough part is figuring out which vendors are actually doing this [AI] versus the ones who simply claim to be, in the spirit of marketing buzz," says Keatron Evans, principal security researcher at cybersecurity training provider Infosec Institute.

There are also vendors who do build machine learning (ML) into their products, even though another tool would have served just as well.

"There are some situations where machine learning is being used just to say machine learning is being used. In other words, the problem being solved is not solved better, or faster, with machine learning or AI — it just sounds cooler to use it," Evans says.

The challenge lies in separating what the technology can do from what marketing says it can do.

"I must admit that I'm concerned that too many companies are overstating the capabilities of their tools when it comes to machine learning or artificial intelligence," says David Hoelzer, faculty fellow at cybersecurity education house SANS Institute and operations chief at managed security provider Enclave Forensics. "More than once, I have come to understand that what a vendor is calling 'machine learning' is actually a human at a data center writing signatures. When a customer realizes that a product billed as machine intelligence isn't really that, it undermines the entire market."

There are numerous ways to dress up lesser technology as AI and fool buyers into thinking they have a weapon with near-magical capabilities.

"As an industry, we talk heavily about AI, but in reality we mostly just implement heuristics or rules with no real AI behind them," says Lucas Budman, founder and CEO of TruU, an AI-based identity verification provider.

How AI Can Backfire
The situation can also backfire on those peddling AI in their cybersecurity products.

"AI is its own attack surface. Hacking is simply the act of causing systems to do what they are supposed to, in ways their designers didn't intend," Fernick explains. "Machine learning and AI are not special — they are software systems with exploitable flaws and logic."

Sometimes the problems with AI are the same old issues that bedevil other technologies.

"Using AI to rapidly recognize and respond to threats without human intervention hasn't been adopted due to technology challenges across platform and product vendors and a general lack of confidence in the AI technology itself," says Doug Saylors, partner and cybersecurity co-lead with global technology research and advisory firm ISG.

"When integrating ML/AI into a system, we need to be mindful of the fact that the system is learning from its continued experience and that an adversary could teach it to be OK with dangerous things," warns Jennifer Fernick, global head of research at security consulting firm NCC Group and a governing board member of The Linux Foundation's Open Source Security Foundation. "In practice, a motivated attacker may choose to continually expose progressively more extreme input to the system, with the goal of desensitizing it to malicious inputs."

That's right — the first adversarial AI you encounter may be your own.

About the Author

Pam Baker

Contributing Writer

A prolific writer and analyst, Pam Baker's published work appears in many leading publications. She's also the author of several books, the most recent of which is "Data Divination: Big Data Strategies." Baker is also a popular speaker at technology conferences and a member of the National Press Club, Society of Professional Journalists, and the Internet Press Guild.
