Dark Reading is part of the Informa Tech Division of Informa PLC


Threat Intelligence
Commentary
7/20/2018 10:30 AM
Tomas Honzak

Why Artificial Intelligence Is Not a Silver Bullet for Cybersecurity

Like any technology, AI and machine learning have limitations. Three stand out: detection, power, and people.

A recent Cisco survey found that 39% of CISOs say their organizations are reliant on automation for cybersecurity, another 34% say they are reliant on machine learning, and 32% report they are highly reliant on artificial intelligence (AI). I'm impressed by the optimism these CISOs have about AI, but good luck with that. I think it's unlikely that AI will be used for much beyond spotting malicious behavior.

To be fair, AI definitely has a few clear advantages for cybersecurity. With malware that self-modifies like the flu virus, it would be close to impossible to develop a response strategy without using AI. It's also handy for financial institutions like banks or credit card providers who are always on the hunt for ways to improve their fraud detection and prevention; once properly trained, AI can heavily enhance their SIEM systems. But AI is not the cybersecurity silver bullet that everyone wants you to believe. In reality, like any technology, AI has its limitations.

1. Fool Me Once: AI Can Be Used to Fool Other AIs
This is the big one for me. If you're using AI to better detect threats, there's an attacker out there who had the exact same thought. Where a company is using AI to detect attacks with greater accuracy, an attacker is using AI to develop malware that's smarter and evolves to avoid detection. Basically, the malware escapes being detected by an AI ... by using AI. Once attackers make it past the company's AI, it's easy for them to remain unnoticed while mapping the environment, behavior that a company's AI would rule out as a statistical error. Even when the malware is detected, security already has been compromised and damage might already have been done.
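The cat-and-mouse dynamic above can be sketched with a toy example: a fixed linear "detector" and a random hill-climbing loop that mutates a malware sample's observable features until the score drops below the alert threshold. The weights, features, and threshold here are all invented for illustration; real evasion tooling targets far more complex models, but the attacker-side search loop is the same idea.

```python
import random

def detector_score(features):
    """A toy 'AI' detector: a fixed linear model over three behavioral
    features (e.g., payload entropy, syscall rate, beacon frequency).
    Hypothetical weights -- not any real product's model."""
    weights = [0.8, 0.5, 0.7]
    return sum(w * f for w, f in zip(weights, features))

def evade(features, threshold, steps=2000, seed=0):
    """Hill-climb: randomly perturb the malware's observable behavior,
    keeping any mutation that lowers the detector score, until the
    score drops below the alert threshold."""
    rng = random.Random(seed)
    best = list(features)
    for _ in range(steps):
        candidate = [max(0.0, f + rng.uniform(-0.05, 0.05)) for f in best]
        if detector_score(candidate) < detector_score(best):
            best = candidate
        if detector_score(best) < threshold:
            break
    return best

original = [0.9, 0.8, 0.9]              # clearly flagged by the detector
variant = evade(original, threshold=0.5)  # evasive mutation of the same sample
print(detector_score(original), detector_score(variant))
```

The defender's model never changed; the attacker simply searched the space of behaviors the model scores, which is why detection alone is a moving target.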

2. Power Matters: With Low-Power Devices, AI Might Be Too Little, Too Late
Internet of Things (IoT) networks are typically low-power and carry only small amounts of data. If an attacker manages to deploy malware at this level, chances are that AI won't be able to help. AI needs a lot of memory, computing power, and, most importantly, big data to run successfully. There is no way this can be done on an IoT device; the data has to be sent to the cloud for processing before the AI can respond. By then, it's already too late. It's like your car calling 911 for you and reporting your location at the time of the crash: you've still crashed. It might report the crash a little faster than a bystander would have, but it didn't do anything to actually prevent the collision. At best, AI might be helpful in detecting that something's going wrong before you lose control over the device or, in the worst case, over your whole IoT infrastructure.
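The "too little, too late" point is really just latency arithmetic. A back-of-the-envelope sketch, in which every number is a hypothetical assumption rather than a measurement:

```python
# Rough timing for cloud-side AI detection of an attack on a
# constrained IoT device. All figures are illustrative assumptions.

TELEMETRY_BATCH_INTERVAL_S = 60   # device uploads logs once a minute
UPLOAD_AND_QUEUE_S         = 5    # network transfer + ingestion queue
MODEL_INFERENCE_S          = 2    # cloud model scores the batch
ALERT_DELIVERY_S           = 3    # alert routed back to operators

ATTACK_EXECUTION_S         = 10   # hypothetical: implant persists in seconds

# Worst case: the attack lands just after a telemetry upload, so the
# evidence sits on the device for a full batching interval first.
worst_case_detection_s = (TELEMETRY_BATCH_INTERVAL_S + UPLOAD_AND_QUEUE_S
                          + MODEL_INFERENCE_S + ALERT_DELIVERY_S)

print(f"Detection lag: up to {worst_case_detection_s}s; "
      f"attack completed in {ATTACK_EXECUTION_S}s")
```

Under these assumptions the alert arrives roughly a minute after an attack that finished in seconds: the 911 call after the crash.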

3. The Known Unknown: AI Can't Analyze What It Does Not Know
While AI is likely to work quite well over a strictly controlled network, the reality is much more colorful and much less controlled. AI's Four Horsemen of the Apocalypse are the proliferation of shadow IT, bring-your-own-device programs, software-as-a-service systems, and, as always, employees. Regardless of how much big data you have for your AI, you need to tame all four of these simultaneously — a difficult or near-impossible task. There will always be a situation where an employee catches up on Gmail-based company email from a personal laptop over an unsecured Wi-Fi network and boom! There goes your sensitive data without AI even getting the chance to know about it. In the end, your own application might be protected by AI that prevents you from misusing it, but how do you secure it for the end user who might be using a device that you weren't even aware of? Or, how do you introduce AI to a cloud-based system that offers only smartphone apps and no corporate access control, not to mention real-time logs? There's simply no way for a company to successfully employ machine learning in this type of situation.
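The shadow-IT gap above can be made concrete as a simple set difference: the endpoints your AI actually has telemetry for versus the endpoints that show up in sign-in logs. The device names and log source are invented for illustration.

```python
# Endpoints the security team manages and the AI monitor can see.
managed_inventory = {"laptop-042", "laptop-117", "server-db1"}

# Devices actually observed authenticating to company SaaS apps
# (e.g., pulled from an identity provider's sign-in log).
seen_in_logs = {"laptop-042", "laptop-117", "server-db1",
                "personal-macbook", "old-android-phone"}

# Everything in the logs but not in inventory is invisible to the AI:
# it never receives telemetry from these devices, so it cannot score them.
shadow_devices = seen_in_logs - managed_inventory
print(sorted(shadow_devices))
```

However sophisticated the model, its blind spot is exactly this set: devices and services it was never pointed at in the first place.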

AI does help, but it's not a game changer. AI can detect malware or an attacker inside the systems it controls, but it struggles to prevent malware from being distributed through company systems, and it can't help at all unless it covers all your endpoint devices and systems. We're still fighting the same battle we've always fought; we and the attackers are simply using different weapons, and the defenses we have are effective only when properly deployed and managed.

Rather than looking to AI as the Cyber Savior, we need to keep the focus on the same old boring problems we've always had: lack of control, lack of monitoring, and lack of understanding of potential threats. Only once you understand who your users are, which devices they use and for what purposes, and whether those systems can actually be protected by AI should you start deploying and training it.


Tomáš Honzák serves as the head of security, privacy and compliance at GoodData, where he built an information security management system compliant with security and privacy management standards and regulations such as SOC 2, HIPAA and U.S.-EU Privacy ...
Comments
ameliagomes
User Rank: Apprentice
11/13/2018 | 1:35:44 AM
Re: I agree AI is not a fix all
Cybersecurity is one of the big issues, and we are always very conscious of security. I am using a Linksys router and am facing a Linksys login issue. Could this be happening because of a cyber issue?
Patrick Ciavolella
User Rank: Author
7/23/2018 | 7:50:06 AM
I agree AI is not a fix all
AI is extremely beneficial in the security world and greatly assists our defenses, but it is not technology that should be heavily relied upon. Humans will always be necessary to check and confirm the data gathered by AI. That feedback will keep AI assisting analysts and reduce its false-positive rate, allowing it to become better suited to the needs of security teams. I am a strong believer in human analysis and verification of data, but as the attackers evolve, so must the defenders, in hopes of always staying one step ahead.