
7/20/2018
10:30 AM
Tomas Honzak
Commentary

Why Artificial Intelligence Is Not a Silver Bullet for Cybersecurity

Like any technology, AI and machine learning have limitations. Here are three: detection, power, and people.

A recent Cisco survey found that 39% of CISOs say their organizations are reliant on automation for cybersecurity, another 34% say they are reliant on machine learning, and 32% report they are highly reliant on artificial intelligence (AI). I'm impressed by the optimism these CISOs have about AI, but good luck with that. I think it's unlikely that AI will be used for much beyond spotting malicious behavior.

To be fair, AI definitely has a few clear advantages for cybersecurity. With malware that self-modifies like the flu virus, it would be close to impossible to develop a response strategy without using AI. It's also handy for financial institutions like banks or credit card providers who are always on the hunt for ways to improve their fraud detection and prevention; once properly trained, AI can heavily enhance their SIEM systems. But AI is not the cybersecurity silver bullet that everyone wants you to believe. In reality, like any technology, AI has its limitations.
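As a rough sketch of the kind of statistical modeling behind fraud detection, here is a toy anomaly score (all numbers invented for illustration; real systems use far richer models and features than a single z-score):

```python
# Toy illustration, not a production SIEM rule: flag a card transaction whose
# amount deviates sharply from the cardholder's historical spending pattern.
from statistics import mean, stdev

def anomaly_score(history, amount):
    """Return how many standard deviations `amount` sits from the history."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma if sigma else 0.0

history = [25.0, 40.0, 32.0, 28.0, 35.0]   # past transaction amounts
assert anomaly_score(history, 30.0) < 3    # typical purchase: not flagged
assert anomaly_score(history, 900.0) > 3   # extreme outlier: flagged for review
```

The point of "properly trained" is visible even here: the score is only as good as the history it learns from.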

1. Fool Me Once: AI Can Be Used to Fool Other AIs
This is the big one for me. If you're using AI to better detect threats, there's an attacker out there who had the exact same thought. Where a company is using AI to detect attacks with greater accuracy, an attacker is using AI to develop malware that's smarter and evolves to avoid detection. Basically, the malware escapes being detected by an AI ... by using AI. Once attackers make it past the company's AI, it's easy for them to remain unnoticed while mapping the environment, behavior that a company's AI would rule out as a statistical error. Even when the malware is detected, security already has been compromised and damage might already have been done.
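The evasion loop itself is simple once the attacker can query the defender's scoring function. The sketch below uses a crude signature counter as a stand-in for the detector (a real ML model is richer, but black-box evasion works the same way: mutate until the score drops):

```python
# Toy black-box evasion: the "detector" and signatures are invented for
# illustration; real attackers mutate real malware against real classifiers.
SIGNATURES = [b"\x90\x90", b"evil", b"\xde\xad"]

def detector_score(payload: bytes) -> int:
    """Stand-in detector: counts known bad byte patterns in the payload."""
    return sum(payload.count(sig) for sig in SIGNATURES)

def evade(payload: bytes) -> bytes:
    """Greedily corrupt one signature match per pass until nothing is flagged."""
    data = bytearray(payload)
    for _ in range(100):                      # safety bound for the toy loop
        hits = [i for i in (bytes(data).find(s) for s in SIGNATURES) if i != -1]
        if not hits:
            break                             # detector score is now zero
        data[hits[0]] ^= 0xFF                 # corrupt the first matched byte
    return bytes(data)

sample = b"evil\x90\x90payload"
assert detector_score(sample) > 0             # caught before mutation
assert detector_score(evade(sample)) == 0     # mutated variant slips past
```

The mutated payload still exists and still runs; only the detector's view of it changed, which is exactly the asymmetry the paragraph describes.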

2. Power Matters: With Low-Power Devices, AI Might Be Too Little, Too Late
Internet of Things (IoT) networks are typically low power with a small amount of data. If an attacker manages to deploy malware at this level, then chances are that AI won't be able to help. AI needs a lot of memory, computing power, and, most importantly, big data to run successfully. There is no way this can be done on an IoT device; the data will have to be sent to the cloud for processing before the AI can respond. By then, it's already too late. It's like your car calling 911 for you and reporting your location at the time of crash, but you've still crashed. It might report the crash a little faster than a bystander would have, but it didn't do anything to actually prevent the collision. At best, AI might be helpful in detecting that something's going wrong before you lose control over the device, or, in the worst case, over your whole IoT infrastructure.
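The timing problem can be put as back-of-the-envelope arithmetic. All numbers below are illustrative assumptions, not measurements, but they show why a cloud round trip can arrive after the damage is done:

```python
# Illustrative latency budget for cloud-side detection of an IoT attack.
# Every constant here is an assumption chosen for the sketch.
UPLOAD_S    = 2.0   # ship telemetry from the constrained device to the cloud
INFERENCE_S = 0.5   # cloud model scores the batch
RESPONSE_S  = 1.5   # verdict and containment command travel back

ATTACK_WINDOW_S = 3.0   # assumed time the malware needs to do its damage

detection_round_trip = UPLOAD_S + INFERENCE_S + RESPONSE_S
print(f"round trip: {detection_round_trip:.1f}s vs attack window: {ATTACK_WINDOW_S:.1f}s")
assert detection_round_trip > ATTACK_WINDOW_S  # the 911 call after the crash
```

If the attack window is shorter than the round trip, detection becomes a report, not a prevention.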

3. The Known Unknown: AI Can't Analyze What It Does Not Know
While AI is likely to work quite well over a strictly controlled network, the reality is much more colorful and much less controlled. AI's Four Horsemen of the Apocalypse are the proliferation of shadow IT, bring-your-own-device programs, software-as-a-service systems, and, as always, employees. Regardless of how much big data you have for your AI, you need to tame all four of these simultaneously — a difficult or near-impossible task. There will always be a situation where an employee catches up on Gmail-based company email from a personal laptop over an unsecured Wi-Fi network and boom! There goes your sensitive data without AI even getting the chance to know about it. In the end, your own application might be protected by AI that prevents you from misusing it, but how do you secure it for the end user who might be using a device that you weren't even aware of? Or, how do you introduce AI to a cloud-based system that offers only smartphone apps and no corporate access control, not to mention real-time logs? There's simply no way for a company to successfully employ machine learning in this type of situation.
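The visibility gap can also be expressed numerically. In this sketch (device counts and detection rate are invented for illustration), even a near-perfect model caps out at the fraction of the environment it can actually see:

```python
# Toy model of the shadow-IT visibility gap: the AI only inspects traffic
# from managed endpoints, so unmanaged devices are missed outright,
# regardless of how good the model is.
managed_devices = 400   # enrolled laptops the monitoring stack can see
shadow_devices  = 100   # personal laptops, BYOD phones, unsanctioned SaaS

detection_rate_on_visible = 0.99        # assume a near-perfect model

visible_share = managed_devices / (managed_devices + shadow_devices)
effective_detection = visible_share * detection_rate_on_visible
print(f"effective detection rate: {effective_detection:.0%}")   # prints 79%
```

A better model improves only the second factor; the first one is a governance problem, not an AI problem.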

AI does help, but it's not a game changer. AI can be used to detect malware or an attacker in the system it controls, but it's hard to prevent malware from being distributed through company systems, and there's no way it can help unless you ensure it can control all your endpoint devices and systems. We're still fighting the same battle we've always been fighting, but we — and the attackers — are using different weapons, and the defenses we have are efficient only when properly deployed and managed.

Rather than looking to AI as the Cyber Savior, we need to keep the focus on the same old boring problems we've always had: the lack of control, lack of monitoring, and lack of understanding of potential threats. Only by understanding who your users are, which devices they use and for what purposes, and then ensuring those systems actually can be protected by AI, can you start deploying and training it.


Tomáš Honzák serves as the head of security, privacy and compliance at GoodData, where he built an information security management system compliant with security and privacy management standards and regulations such as SOC 2, HIPAA and U.S.-EU Privacy ...
Comments
Patrick Ciavolella, 7/23/2018
I agree AI is not a fix-all
AI is extremely beneficial in the security world and greatly assists our defenses, but it is not technology that should be heavily relied upon. Humans will always be necessary to check and confirm the data gathered by AI. That feedback keeps AI assisting analysts and reduces its false-positive rate, allowing it to evolve and better fit the needs of security teams. I am a strong believer in human analysis and verification of data, but as attackers evolve, so must defenders, in hopes of always staying one step ahead.