Threat Intelligence

11/28/2017
10:30 AM
Derek Manky
Commentary

The Looming War of Good AI vs. Bad AI

The rise of artificial intelligence, machine learning, hivenets, and next-generation morphic malware is leading to an arms race that enterprises must prepare for now.

The tech industry — and by extension, the global economy — is at a precipice with artificial intelligence (AI) as cybercriminals adopt AI technology to more effectively detect and exploit vulnerabilities, evade detection, adapt to complex network environments, and maximize profitability.

This is the first time that adversaries and white hats will have the same tools. It is leading to an AI arms race that organizations must prepare for now. Here's what the good guys will be up against.

1. A Wave of Machine Learning
Over the past year, our industry has seen cybercriminals weaponize millions of unsecured IoT devices and use them to take out systems and networks. Supervised AI incubators can spend years carefully cultivating an AI to perform specific tasks in a predictable way. Cybercriminals, however, are not willing to go that slowly. The unsupervised learning models they are likely to use to develop AI-based attacks, where speed of development matters more than predictability, are especially dangerous precisely because of their complexity and unpredictability. As attack methodologies become more intelligent, there is real potential for swarms of compromised IoT devices to wreak indiscriminate havoc. Think Africanized bees.
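To make the supervised/unsupervised distinction concrete, here is a minimal, illustrative Python sketch using scikit-learn on synthetic data (everything in it is hypothetical and drawn from standard library examples, not from any real attack tooling): a supervised model learns a predictable mapping from labeled examples, while an unsupervised model invents its own groupings, which is where the unpredictability comes from.

# Minimal sketch contrasting supervised and unsupervised learning.
# Synthetic data only; a conceptual illustration, not attack or defense code.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic "events" with two labeled classes (think benign vs. suspicious).
X, y = make_blobs(n_samples=300, centers=2, random_state=42)

# Supervised: trained on labels, so its behavior on the task is predictable.
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: no labels at all; the model groups the data on its own.
# Cluster identities are arbitrary and can shift with new data, which is
# exactly the unpredictability that makes fast, unsupervised development risky.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised clusters:", km.labels_[:5])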

If the best and the brightest within the cybersecurity research community are calling for regulation, it is because they see that the cybercriminal community is looking seriously at building these AI-based attacks and are likely to release them unsupervised into the wild.

2. Next-Generation Morphic Malware
If not next year, then soon after, we will begin to see malware created entirely by machines, based on automated vulnerability detection and complex data analysis. Morphic malware is not new, but it is about to take on a new face by leveraging AI to create sophisticated new code that can learn to evade detection through machine-written routines. With the natural evolution of tools that already exist, adversaries will be able to develop the best possible exploit based on the characteristics of each unique weakness. Malware is already able to use learning models to evade security and can produce more than a million virus variations in a day. But so far, this is all based on simple algorithms, with very little sophistication or control over the output.
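Even today's volume problem is easy to see with a toy example: a trivial mutation routine defeats exact signature matching, which is why layering machine-generated sophistication on top of that volume is so concerning. The sketch below is purely illustrative, uses only a harmless byte string, and every name in it is invented; it demonstrates the signature-diversity principle, not how to build malware.

# Toy illustration of why naive signature matching fails against trivial mutation.
# Harmless data only; nothing here executes or generates real code.
import hashlib
import random

def mutate(payload: bytes) -> bytes:
    """Append a few random, functionally meaningless padding bytes."""
    padding = bytes(random.randint(0, 255) for _ in range(8))
    return payload + padding

original = b"harmless example payload"
signatures = set()

# Every "variant" hashes differently, so a signature written for one
# version never matches the next.
for _ in range(1000):
    signatures.add(hashlib.sha256(mutate(original)).hexdigest())

print(f"{len(signatures)} distinct signatures from a single payload")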

3. The Rise of Hivenets and Swarmbots
We have seen the development of predictive software systems programmed using AI techniques. The latest advances in these sorts of tools leverage massive databases of expert knowledge made up of billions of constantly updated bits of data in order to make accurate predictions. This sort of predictive analysis represents the new paradigm for how computing resources will be used to transform our world.

Building on what the industry has already seen, it is likely that cybercriminals will replace botnets with intelligent clusters of compromised devices built around deep learning technology to create more effective attack vectors. Traditional botnets are slaves — they wait for commands from the bot master in order to execute an attack. But what if these nodes were able to make decisions with minimal supervision, or even autonomously, instead of waiting for master commands?

The result would be a hivenet rather than a botnet: a cluster of devices that leverages peer-based self-learning to target vulnerable systems at an unprecedented scale. Hivenets would use swarms of compromised devices, or swarmbots, to identify and tackle different attack vectors all at once. They would also be able to grow exponentially, widening their ability to attack multiple victims simultaneously.
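The architectural shift is easiest to see as control flow. The sketch below is an abstract, non-functional model (all class names and the "peer observation" strings are invented for illustration; there is no networking or exploitation logic): the traditional node idles until its master issues a command, while the hivenet-style node keeps making local decisions from state shared by peers. That difference is what defenders will have to anticipate.

# Abstract model of the control-flow shift from botnet to hivenet.
# No networking, no attack logic; "acting" is just a returned string.
from dataclasses import dataclass, field

@dataclass
class TraditionalBot:
    """Classic botnet node: idle until the bot master issues a command."""
    def step(self, command=None):
        if command is None:
            return "idle (waiting for the bot master)"
        return f"executing master command: {command}"

@dataclass
class AutonomousNode:
    """Hivenet-style node: acts on observations shared by its peers."""
    peer_observations: list = field(default_factory=list)

    def step(self, command=None):
        # The decision is made locally from peer-shared state; no master needed.
        if self.peer_observations:
            observation = self.peer_observations.pop(0)
            return f"acting autonomously on peer observation: {observation}"
        return "probing and sharing findings with peers"

bot = TraditionalBot()
node = AutonomousNode(peer_observations=["exposed service reported by peer 7"])
print(bot.step())   # does nothing without orders
print(node.step())  # decides and acts on its own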

What Lies Ahead: Intelligent Warfare
Protecting networks and services, including things as important as critical infrastructure, will require a systemic approach based on intentionally engineering vulnerabilities out of a network and then applying an adaptive layer of meshed security tools, unlike the separate and isolated security devices most organizations currently have in place. This integration could provide visibility across the distributed network to detect unknown threats, share and correlate threat intelligence in real time, dynamically segment the network and isolate compromised devices and systems, and respond to attacks in a coordinated fashion.
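One way to picture that kind of integrated response is a simple correlate-and-quarantine loop. The sketch below is a minimal illustration under assumed inputs; the indicator feed, device inventory, and isolate() hook are invented placeholders standing in for real NAC, SDN, or firewall integrations, not any particular vendor's API.

# Minimal sketch of a coordinated, intelligence-driven response loop.
# All data sources and the isolate() hook are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str          # e.g., an IP address or file hash
    confidence: float   # 0.0-1.0, as scored by shared threat intelligence

@dataclass
class Device:
    name: str
    segment: str
    observed: set       # indicator values seen in this device's telemetry

def isolate(device: Device) -> None:
    # Placeholder for a real segmentation or quarantine action.
    print(f"[response] moving {device.name} from '{device.segment}' to 'quarantine'")
    device.segment = "quarantine"

def correlate_and_respond(devices, intel, threshold=0.8):
    """Correlate shared intelligence with local telemetry and respond in one pass."""
    hot = {i.value for i in intel if i.confidence >= threshold}
    for device in devices:
        if device.observed & hot:
            isolate(device)

# Example run with toy data.
intel_feed = [Indicator("203.0.113.7", 0.95), Indicator("badhash123", 0.40)]
fleet = [
    Device("hr-laptop-12", "corp", {"203.0.113.7"}),
    Device("build-server-3", "dev", {"internal-only"}),
]
correlate_and_respond(fleet, intel_feed)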

Artificial intelligence promises incredible benefits to organizations that can harness its power, but it also portends disaster as cybercriminals use it for malicious purposes. Whoever can best leverage technologies like machine learning and AI will have the security defenses needed to survive the escalating AI war.

Derek Manky formulates security strategy with more than 15 years of cybersecurity experience behind him. His ultimate goal is to make a positive impact in the global war on cybercrime. Manky provides thought leadership to industry and has presented research and strategy ...
Comments
REISEN1955, 11/30/2017 | 2:53:36 PM
I could not resist - again
First thought - Glinda to Dorothy: "Are you a good AI or a bad AI?" "I'm just a girl." Glinda points down to Toto: "Well, is that an AI?" (Toto is probably smarter than anybody by this point.)