Threat Intelligence

7/20/2018
10:30 AM
Tomas Honzak
Commentary

Why Artificial Intelligence Is Not a Silver Bullet for Cybersecurity

Like any technology, AI and machine learning have their limitations. Three of the biggest are detection, power, and people.

A recent Cisco survey found that 39% of CISOs say their organizations are reliant on automation for cybersecurity, another 34% say they are reliant on machine learning, and 32% report they are highly reliant on artificial intelligence (AI). I'm impressed by the optimism these CISOs have about AI, but good luck with that. I think it's unlikely that AI will be used for much beyond spotting malicious behavior.

To be fair, AI definitely has a few clear advantages for cybersecurity. With malware that self-modifies like the flu virus, it would be close to impossible to develop a response strategy without using AI. It's also handy for financial institutions like banks or credit card providers who are always on the hunt for ways to improve their fraud detection and prevention; once properly trained, AI can heavily enhance their SIEM systems. But AI is not the cybersecurity silver bullet that everyone wants you to believe. In reality, like any technology, AI has its limitations.
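To make the fraud-detection point concrete, here is a minimal, purely illustrative sketch of the statistical baseline such systems build on: learn what "normal" looks like for an account, then flag transactions that fall far outside it. The transaction history, features, and threshold are all invented for this example; production fraud models are far more sophisticated.

```python
import statistics

# Invented purchase history (USD) for one cardholder
history = [42.5, 60.0, 55.3, 48.9, 70.2, 65.0, 58.7, 52.1, 61.4, 57.8]
mean = statistics.mean(history)
std = statistics.stdev(history)

def z_score(amount):
    """How many standard deviations this amount is from the learned norm."""
    return abs(amount - mean) / std

print(z_score(59.0) < 3)     # True: an ordinary purchase, no alert
print(z_score(4800.0) > 3)   # True: far outside the pattern, flag for review
```

Once properly trained on real behavioral data, this same idea — score against a learned baseline, alert on outliers — is what feeds richer signals into a SIEM.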

1. Fool Me Once: AI Can Be Used to Fool Other AIs
This is the big one for me. If you're using AI to better detect threats, there's an attacker out there who had the exact same thought. Where a company is using AI to detect attacks with greater accuracy, an attacker is using AI to develop malware that's smarter and evolves to avoid detection. Basically, the malware escapes being detected by an AI ... by using AI. Once attackers make it past the company's AI, it's easy for them to remain unnoticed while mapping the environment — behavior that a company's AI would dismiss as a statistical error. And even when the malware is finally detected, security has already been compromised and the damage may already be done.
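The cat-and-mouse dynamic can be sketched in a few lines. This is a deliberately toy example — the features, weights, and threshold are all made up, and no real detector is this simple — but it shows the core evasion idea: an attacker who can probe a detector's score throttles the most heavily weighted observable behavior until the sample slips under the alert threshold.

```python
# Toy linear detector over two invented features:
# [api_calls_per_min, bytes_exfiltrated_kb]
weights = [0.8, 0.05]   # "learned" importance of each feature
threshold = 10.0        # score above this => flagged as malicious

def score(features):
    return sum(w * f for w, f in zip(weights, features))

sample = [20.0, 100.0]          # noisy malware: score 21.0, flagged
while score(sample) > threshold:
    # Attacker throttles the most heavily weighted behavior
    sample[0] *= 0.9            # slow down API calls by 10% each step
print(score(sample) <= threshold)  # True: same exfiltration, now unnoticed
```

Note what the attacker gave up: nothing that matters. The slower, quieter variant still exfiltrates the same data — it just blends into what the model treats as statistical noise.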

2. Power Matters: With Low-Power Devices, AI Might Be Too Little, Too Late
Internet of Things (IoT) networks are typically low power with a small amount of data. If an attacker manages to deploy malware at this level, then chances are that AI won't be able to help. AI needs a lot of memory, computing power, and, most importantly, big data to run successfully. There is no way this can be done on an IoT device; the data will have to be sent to the cloud for processing before the AI can respond. By then, it's already too late. It's like your car calling 911 for you and reporting your location at the time of the crash — but you've still crashed. It might report the crash a little faster than a bystander would have, but it didn't do anything to actually prevent the collision. At best, AI might be helpful in detecting that something's going wrong before you lose control over the device or, in the worst case, over your whole IoT infrastructure.
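For contrast, the only kind of defense that realistically fits on the device itself looks something like the toy check below — a tiny ring buffer and a fixed threshold, not a model. Anything deserving the name AI needs the cloud round trip described above. All names and values here are illustrative.

```python
import collections

WINDOW = 8          # small ring buffer: a few bytes of RAM, no model
LIMIT = 2.0         # alert if traffic roughly doubles vs. recent average

readings = collections.deque(maxlen=WINDOW)

def on_device_check(packets_per_sec):
    """Return True if this reading looks abnormal; runs locally, instantly."""
    if len(readings) == WINDOW:
        baseline = sum(readings) / WINDOW
        if packets_per_sec > LIMIT * baseline:
            return True     # raise the alarm before any cloud round trip
    readings.append(packets_per_sec)
    return False

traffic = [10, 11, 9, 10, 12, 10, 11, 10, 95]  # sudden burst at the end
alerts = [on_device_check(t) for t in traffic]
print(alerts[-1])  # True: flagged locally, without waiting on the cloud
```

A check like this catches only crude anomalies, but it fires before the data ever leaves the device — which is exactly the window in which cloud-side AI can't help.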

3. The Known Unknown: AI Can't Analyze What It Does Not Know
While AI is likely to work quite well over a strictly controlled network, the reality is much more colorful and much less controlled. AI's Four Horsemen of the Apocalypse are the proliferation of shadow IT, bring-your-own-device programs, software-as-a-service systems, and, as always, employees. Regardless of how much big data you have for your AI, you need to tame all four of these simultaneously — a difficult or near-impossible task. There will always be a situation where an employee catches up on Gmail-based company email from a personal laptop over an unsecured Wi-Fi network and boom! There goes your sensitive data without AI even getting the chance to know about it. In the end, your own application might be protected by AI that prevents you from misusing it, but how do you secure it for the end user who might be using a device that you weren't even aware of? Or, how do you introduce AI to a cloud-based system that offers only smartphone apps and no corporate access control, not to mention real-time logs? There's simply no way for a company to successfully employ machine learning in this type of situation.

AI does help, but it's not a game changer. AI can be used to detect malware or an attacker in the system it controls, but it's hard to prevent malware from being distributed through company systems, and there's no way it can help unless you ensure it can control all your endpoint devices and systems. We're still fighting the same battle we've always been fighting, but we — and the attackers — are using different weapons, and the defenses we have are efficient only when properly deployed and managed.

Rather than looking to AI as the Cyber Savior, we need to keep the focus on the same old boring problems we've always had: the lack of control, the lack of monitoring, and the lack of understanding of potential threats. Only by understanding who your users are, which devices they use and for what purposes, and then ensuring those systems can actually be protected by AI, can you start deploying and training it.


Tomáš Honzák serves as the head of security, privacy and compliance at GoodData, where he built an information security management system compliant with security and privacy management standards and regulations such as SOC 2, HIPAA and U.S.-EU Privacy ...
Comments
Patrick Ciavolella, User Rank: Author
7/23/2018 | 7:50:06 AM
I agree AI is not a fix all
AI is extremely beneficial in the security world and greatly assists our defenses, but it is not a technology that should be heavily relied upon. Humans will always be necessary to check and confirm the data gathered by AI. That feedback keeps AI assisting analysts and reduces its false-positive rate, allowing it to evolve and better fit the needs of security teams. I am a strong believer in human analysis and verification of data, but as the attackers evolve, so must the defenders, in hopes of always staying one step ahead.