Dark Reading is part of the Informa Tech Division of Informa PLC


Operational Security

8/16/2017 03:27 PM
Willy Leichter
News Analysis-Security Now

Magical Thinking Drives the Myth of AI Solving Security

AI is being called the solution to future security problems, but we shouldn't rely on the technology for too much, too soon.

Touring the Black Hat show recently in Las Vegas, I was struck by how the cybersecurity and Vegas entertainment industries seem to be converging: They both seem to love magic shows. While the IT versions aren't as glitzy, vendors continually pitch the next generation of technology as the magic cure to our growing cybersecurity challenges.

Let's face it -- we've invested billions of dollars over decades to improve security, yet the problems continually get worse. The back and forth between clever hackers and reactive security products never seems to end. No doubt we've gotten faster at identifying attacks and patching vulnerabilities, but the bad guys are upping their game dramatically, using sophisticated tools created by well-organized crime syndicates and, of course, the NSA. It's hard to watch WannaCry, Petya, Industroyer and the other weekly attacks and say that we're winning.

In this environment, a healthy dose of skepticism is warranted when new vendors claim to have found the cure, especially when it all depends on the "magic" of artificial intelligence (AI). One security vendor, laying it on thick in a flowery blog post, describes the security advantages of AI as being like "a science fiction story" and "the effects are indeed magical." Seeing their demo at Black Hat, I asked for a bit more detail, and apparently the secret to their success with AI is... (wait for it...) mathematics.

Artificial intelligence and machine learning are indeed powerful and transformative in many fields that require finding patterns in vast quantities of data. For an antivirus industry that has grown up around signatures and pattern matching, this does seem like a breakthrough, and it will no doubt reduce analysis time. But automating a flawed model doesn't always yield better results.

The antivirus model is fundamentally flawed because it is always looking backwards -- reacting to malware and creating signatures to capture the same virus when it returns. The underlying assumption is that bad actors fall back on the same old tactics over and over again, but nothing could be further from the truth. Reducing the reaction and signature update time is important with this model, and AI will likely help. But the larger problem is that pattern matching is easily fooled. Sophisticated hackers continually change tactics, modify tools and increasingly use fileless attacks, manipulating native scripts and blocks of memory to trick legitimate applications into doing the wrong thing. And no matter how fast the reaction time is, the largest threats come from vulnerabilities that have not yet been discovered, named and added to the catalog of known patterns. For example, WannaCry exploited an SMBv1 vulnerability that had existed unnoticed for 16 years and flew under the radar of most security products until massive damage was done.
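To see how brittle exact-match signatures are, consider this toy sketch (the "malware" bytes are purely hypothetical): a hash-based signature catches only the exact sample it was built from, and a single-byte mutation walks right past it.

```python
import hashlib

# Toy signature database: hashes of previously catalogued samples.
# The sample bytes here are hypothetical, for illustration only.
known_malware = {hashlib.sha256(b"evil_payload_v1").hexdigest()}

def signature_scan(sample: bytes) -> bool:
    """Flag a sample only if its hash matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in known_malware

original = b"evil_payload_v1"
mutated = b"evil_payload_v1 "  # one appended byte -- a trivial mutation

print(signature_scan(original))  # True: the catalogued sample is caught
print(signature_scan(mutated))   # False: the mutated copy evades the signature
```

Real engines use fuzzier patterns than raw hashes, but the underlying dynamic is the same: the defense only recognizes what it has already seen.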

The other fundamental challenge with AI is that we're not fighting a static threat. We are fighting extremely resourceful humans who know they're battling AI and look for innovative ways to bypass controls and confuse machine learning models. This challenge is called "adversarial AI," and it acknowledges that the "magical" tool is less effective when fighting itself. Steve Grobman, CTO at McAfee, describes this problem with a good analogy:

"If you have a motion sensor over your garage hooked up to your alarm system -- say every day I drove by your garage on a bicycle at 11 p.m., intentionally setting off the sensor. After about a month of the alarm going off regularly, you'd get frustrated and make it less sensitive, or just turn it off altogether. Then that gives me the opportunity to break in."

The fundamental problem is that the world of known bad stuff, while growing, is infinitely smaller than the realm of present and future unknown bad. While AI may deliver exponential progress in expanding our catalog of known bad stuff, the unknown continues to grow at an even faster pace.



A new school of thought is emerging. Rather than using the past to guess the future, new solutions are looking at the present -- the actual functioning of applications -- for indicators of attack. Using deterministic methods, these solutions can map the known good activity of applications and take preventative action if anything goes off the rails.
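A minimal sketch of that "map the known good" idea, assuming a hypothetical application whose legitimate behavior can be described as an allowlist of action transitions: anything outside the map is treated as an indicator of attack, with no signature of the exploit required.

```python
# Hypothetical known-good map for a toy application: the only action
# transitions it legitimately performs. Names are invented for illustration.
ALLOWED_TRANSITIONS = {
    ("open_config", "read_config"),
    ("read_config", "serve_request"),
    ("serve_request", "serve_request"),
}

def is_expected(trace):
    """Return True only if every step in the trace follows the known-good map."""
    return all(pair in ALLOWED_TRANSITIONS for pair in zip(trace, trace[1:]))

normal = ["open_config", "read_config", "serve_request", "serve_request"]
hijacked = ["open_config", "read_config", "spawn_shell"]  # e.g. a fileless exploit

print(is_expected(normal))    # True: matches the application's mapped behavior
print(is_expected(hijacked))  # False: deviation triggers preventative action
```

Note the inversion relative to the antivirus model: instead of cataloguing an ever-growing set of known bad patterns, the defender enumerates a small, fixed set of known good behavior and blocks everything else.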


Willy Leichter is vice president of marketing for Virsec. He has worked with a wide range of global enterprises to help them meet evolving security challenges. With extensive experience in a range of IT domains, including network security, global data privacy laws, data loss prevention, access control, email security and cloud applications, he is a frequent speaker at industry events and an author on IT security and compliance issues.
