Operational Security

8/16/2017 03:27 PM
Willy Leichter

Magical Thinking Drives the Myth of AI Solving Security

AI is being called the solution to future security problems, but we shouldn't rely on the technology for too much, too soon.

Touring the Black Hat show recently in Las Vegas, I was struck by how the cybersecurity and Vegas entertainment industries seem to be converging: Both love a good magic show. While the IT versions aren't as glitzy, vendors continually pitch the next generation of technology as the magic cure for our growing cybersecurity challenges.

Let's face it -- we've invested billions of dollars over decades to improve security, yet the problems keep getting worse. The back and forth between clever hackers and reactive security products never seems to end. No doubt we've gotten faster at identifying attacks and patching vulnerabilities, but the bad guys are upping their game dramatically, using sophisticated tools created by well-organized crime syndicates and, of course, the NSA. It's hard to watch WannaCry, Petya, Industroyer and the other weekly attacks and say that we're winning.

In this environment, a healthy dose of skepticism is warranted when new vendors claim to have found the cure, especially when it all depends on the "magic" of artificial intelligence (AI). One security vendor, laying it on thick in a flowery blog post, describes the security advantages of AI as being like "a science fiction story" and "the effects are indeed magical." Seeing their demo at Black Hat, I asked for a bit more detail, and apparently the secret to their success with AI is... (wait for it...) mathematics.

Artificial intelligence and machine learning are indeed powerful and transformative in many fields that require finding patterns in vast quantities of data. For an antivirus industry that has grown up around signatures and pattern matching, this does seem like a breakthrough, and it will no doubt reduce analysis time. But automating a flawed model doesn't always yield better results.

The antivirus model is fundamentally flawed because it is always looking backwards -- reacting to malware and creating signatures to catch the same virus when it returns. The underlying assumption is that bad actors fall back on the same old tactics over and over again, but nothing could be further from the truth. Reducing reaction and signature-update time matters within this model, and AI will likely help. But the larger problem is that pattern matching is easily fooled. Sophisticated hackers continually change tactics, modify tools and increasingly use fileless attacks, manipulating native scripts and blocks of memory to trick legitimate applications into doing the wrong thing. And no matter how fast the reaction time is, the largest threats come from vulnerabilities that have not yet been discovered, named and added to the catalog of known patterns. For example, WannaCry exploited an SMBv1 vulnerability that had existed unnoticed for 16 years and flew under the radar of most security products until massive damage was done.
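
To see how brittle pattern matching is, consider a minimal sketch in Python (the payloads and "signature database" here are hypothetical): a hash-based signature catches the catalogued sample, while a single-byte mutation of the same payload sails through.

import hashlib

# Toy "signature database": hashes of known-bad payloads.
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def is_flagged(payload):
    """Flag a payload only if its hash matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SIGNATURES

original = b"malicious_payload_v1"
mutated = b"malicious_payload_v1 "  # one trailing byte added

print(is_flagged(original))  # True -- the catalogued sample is caught
print(is_flagged(mutated))   # False -- a trivial mutation slips through

Real engines match far more loosely than exact hashes, but the arms race is the same: the defender's catalog always lags one mutation behind the attacker.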

The other fundamental challenge with AI is that we're not fighting a static threat. We are fighting extremely resourceful humans who know they're battling AI and look for innovative ways to bypass controls and confuse machine learning models. This challenge is called "adversarial AI," and it acknowledges that the "magical" tool is less effective when fighting itself. Steve Grobman, CTO at McAfee, describes this problem with a good analogy:

"If you have a motion sensor over your garage hooked up to your alarm system -- say every day I drove by your garage on a bicycle at 11 p.m., intentionally setting off the sensor. After about a month of the alarm going off regularly, you'd get frustrated and make it less sensitive, or just turn it off altogether. Then that gives me the opportunity to break in."

The fundamental problem is that the world of known bad stuff, while growing, is infinitely smaller than the realm of present and future unknown bad. While AI may deliver exponential progress in expanding our catalog of known bad stuff, the unknown continues to grow at an even faster pace.

A new school of thought is emerging. Rather than using the past to guess at the future, new solutions look at the present -- the actual behavior of running applications -- for indicators of attack. Using deterministic methods, these solutions can map the known good activity of an application and take preventive action if anything goes off the rails.
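
The article doesn't describe a specific implementation, but the idea resembles an allow-list. Here is a minimal sketch under that assumption, with hypothetical action names standing in for what real products derive by instrumenting memory, control flow or system calls: anything outside the application's known-good map is blocked, whether or not it matches a known attack pattern.

# The application's mapped, known-good activity (hypothetical labels).
ALLOWED_ACTIONS = {
    "read_config",
    "open_db_connection",
    "render_template",
    "write_log",
}

def guard(action):
    """Permit only actions in the known-good map; block everything else."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError("blocked unexpected action: " + action)

guard("read_config")  # mapped activity proceeds normally

try:
    # Never catalogued as "bad" -- simply never mapped as "good."
    guard("spawn_shell")
except PermissionError as err:
    print(err)

Note the deterministic flip: instead of an ever-growing catalog of known bad, the decision rests on a finite map of known good.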

Willy Leichter is vice president of marketing for Virsec and has worked with a wide range of global enterprises to help them meet evolving security challenges. With extensive experience in a range of IT domains, including network security, global data privacy laws, data loss prevention, access control, email security and cloud applications, he is a frequent speaker at industry events and an author on IT security and compliance issues.
