Dark Reading is part of the Informa Tech Division of Informa PLC


Threat Intelligence

11/28/2017
10:30 AM
Derek Manky
Commentary

The Looming War of Good AI vs. Bad AI

The rise of artificial intelligence, machine learning, hivenets, and next-generation morphic malware is leading to an arms race that enterprises must prepare for now.

The tech industry — and by extension, the global economy — stands at a precipice with artificial intelligence (AI): cybercriminals are adopting AI technology to more effectively detect and exploit vulnerabilities, evade detection, adapt to complex network environments, and maximize profitability.

This is the first time that adversaries and white hats will have the same tools. It is leading to an AI arms race that organizations must prepare for now. Here's what the good guys will be up against.

1. A Wave of Machine Learning
Over the past year, our industry has seen cybercriminals weaponize millions of unsecured IoT devices and use them to take out systems and networks. Supervised AI incubators can spend years carefully cultivating an AI to perform specific tasks in a predictable way. Cybercriminals, however, are not willing to go slowly. The unsupervised learning models they are likely to use to develop AI-based attacks, where speed of development is more important than predictability, are especially dangerous — and could potentially be devastating because of their complexity and unpredictability. As attack methodologies become more intelligent, there is the real potential to create swarms of compromised Internet of Things devices that could wreak indiscriminate havoc. Think Africanized bees.
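To make the supervised/unsupervised distinction above concrete, here is a minimal sketch of unsupervised learning: a bare-bones k-means clustering that groups unlabeled "device fingerprints" with no curated training labels at all. The data and feature names are hypothetical stand-ins, not drawn from any real attack tooling; the point is only that useful structure emerges from raw observations, which is exactly why unsupervised approaches let attackers move fast at the cost of predictability.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Bare-bones k-means: no labels needed, just raw observations."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                      + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two synthetic populations of "device fingerprints" (hypothetical):
# (open-port count, average response time in ms)
devices = ([(2 + random.Random(i).random(), 10.0) for i in range(10)] +
           [(9 + random.Random(i).random(), 80.0) for i in range(10)])

centers, clusters = kmeans(devices, k=2)
print([len(c) for c in clusters])
```

No one told the algorithm which devices belong together; it discovered the two populations on its own. That autonomy is the trade-off: fast, label-free learning with little human control over what the model converges on.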

If the best and the brightest within the cybersecurity research community are calling for regulation, it is because they see that the cybercriminal community is looking seriously at building these AI-based attacks and are likely to release them unsupervised into the wild.

2. Next-Generation Morphic Malware
If not next year, then soon, we will begin to see malware created entirely by machines, based on automated vulnerability detection and complex data analysis. Morphic malware is not new, but it is about to take on a new face by leveraging AI to create sophisticated new code that can learn to evade detection through machine-written routines. With the natural evolution of tools that already exist, adversaries will be able to develop the best possible exploit based on the characteristics of each unique weakness. Malware is already able to use learning models to evade security and can produce more than a million virus variations in a day. But so far, this is all based on a simple algorithm, with very little sophistication or control over the output.
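The "million variations a day" figure matters because the oldest line of defense is signature matching. A harmless sketch makes the weakness plain: if a signature is just a hash of the payload bytes, any trivial mutation (here, appended junk bytes on a placeholder string, not real malware) yields a brand-new signature, so every variant looks unique to a hash-based scanner.

```python
import hashlib

def signature(payload: bytes) -> str:
    """A naive 'signature': just the SHA-256 of the payload bytes."""
    return hashlib.sha256(payload).hexdigest()

# Placeholder bytes standing in for a known-bad payload (not real malware).
original = b"example-payload-bytes"
known_signatures = {signature(original)}

# A mutation engine could append junk bytes without changing behavior;
# each variant now hashes to something the blocklist has never seen.
variants = [original + bytes([n]) for n in range(5)]
detected = [signature(v) in known_signatures for v in variants]
print(detected)
```

Every variant slips past the blocklist, which is why defenders have shifted toward behavioral and learning-based detection rather than exact-match signatures. AI-driven morphic malware simply automates and accelerates this mutation step.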

3. The Rise of Hivenets and Swarmbots
We have seen the development of predictive software systems programmed using AI techniques. The latest advances in these sorts of tools leverage massive databases of expert knowledge made up of billions of constantly updated bits of data in order to make accurate predictions. This sort of predictive analysis represents the new paradigm for how computing resources will be used to transform our world.

Building on what the industry has already seen, it is likely that cybercriminals will replace botnets with intelligent clusters of compromised devices built around deep learning technology to create more effective attack vectors. Traditional botnets are slaves — they wait for commands from the bot master in order to execute an attack. But what if these nodes were able to make decisions with minimal supervision, or even autonomously, instead of waiting for master commands?

This would become a hivenet, instead of a botnet, that could leverage peer-based self-learning to effectively target vulnerable systems at an unprecedented scale. Hivenets will be able to use swarms of compromised devices, or swarmbots, to identify and tackle different attack vectors all at once. Hivenets would be able to grow exponentially, widening their ability to simultaneously attack multiple victims.
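The control-flow difference between a botnet slave and a swarmbot can be sketched in a few lines. Everything here is hypothetical (class names, the shared-intel dictionary, the target address); no networking is involved. The point is structural: the traditional bot is inert without a command, while the swarmbot acts on knowledge a peer contributed, with no bot master in the loop.

```python
class Bot:
    """Traditional botnet node: idle until the bot master issues a command."""
    def act(self, command=None):
        return command or "wait"

class Swarmbot:
    """Hivenet node: consults peer-shared intel and decides autonomously."""
    def __init__(self, shared_intel):
        self.shared_intel = shared_intel   # peer-updated view of targets

    def observe(self, target, weak):
        self.shared_intel[target] = weak   # one node's learning is shared swarm-wide

    def act(self):
        # Autonomous decision: pursue any target a peer has flagged as weak.
        for target, weak in self.shared_intel.items():
            if weak:
                return f"attack {target}"
        return "scan"

intel = {}
a, b = Swarmbot(intel), Swarmbot(intel)
a.observe("10.0.0.5", True)         # one node learns of a weak target...
print(Bot().act(), "|", b.act())    # ...and a peer acts on it without orders
```

Note that node `b` never received a command and never made the observation itself; the shared state did the coordinating. Scale that peer-based learning up and you get the exponential, multi-vector growth described above.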

What Lies Ahead: Intelligent Warfare
Protecting networks and services, including things as important as critical infrastructure, will require a systemic approach based on intentionally engineering vulnerabilities out of a network and then applying an adaptive layer of meshed security tools, unlike the separate and isolated security devices most organizations currently have in place. This integration could provide visibility across the distributed network to detect unknown threats, share and correlate threat intelligence in real time, dynamically segment the network and isolate compromised devices and systems, and respond to attacks in a coordinated fashion.
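The meshed approach described above can be sketched as a toy design (all class and device names are hypothetical, not any vendor's API): independent sensors publish indicators to a shared fabric, which correlates them and isolates a device once enough independent detections agree, rather than each isolated tool acting alone.

```python
from collections import defaultdict

class SecurityFabric:
    """Toy correlation layer: isolate a device once a quorum of
    independent sensors has reported it."""
    def __init__(self, quorum=2):
        self.quorum = quorum
        self.reports = defaultdict(set)   # device -> set of reporting sensors
        self.isolated = set()

    def report(self, sensor, device):
        self.reports[device].add(sensor)
        # Coordinated response: correlate detections across tools,
        # then dynamically segment the compromised device off the network.
        if len(self.reports[device]) >= self.quorum:
            self.isolated.add(device)

fabric = SecurityFabric(quorum=2)
fabric.report("ids", "device-42")             # one alert alone is not enough
fabric.report("endpoint-agent", "device-42")  # corroboration triggers isolation
print(sorted(fabric.isolated))
```

Requiring corroboration across tools is the design choice that separates this from today's siloed devices: a single noisy sensor cannot quarantine half the network, but two independent detections trigger an immediate, coordinated response.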

Artificial intelligence promises incredible benefits to organizations that can harness its power, but it also portends disaster as cybercriminals turn it to malicious purposes. The organizations that can put machine learning and AI to work in their defenses will be the ones positioned to survive the escalating AI war.

Derek Manky formulates security strategy with more than 15 years of cybersecurity experience behind him. His ultimate goal is to make a positive impact in the global war on cybercrime. Manky provides thought leadership to industry, and has presented research and strategy ...
 
