
Operational Security // AI

8/13/2018 08:05 AM

Alan Zeichick

Artificial Malevolence: Bad Actors Know Computer Science, Too

Artificial intelligence and machine learning have many useful applications in legitimate security prevention. However, the buzz at this year's Black Hat is that bad guys are already catching up.

LAS VEGAS -- The cybersecurity industry has embraced artificial intelligence and machine learning. Seemingly every exhibitor at this year's Black Hat conference is touting AI, whether it's for scanning email attachments for malware, detecting anomalous patterns in network access, filtering alerts for rapid incident triage or finding anomalies in user behaviors.

The unspoken belief is that AI is a good actor's tool.

Given the complexity of the algorithms, the need for large data sets for training or real-time learning, and the expense of servers with tons of memory, the best use for AI and machine learning would seem to be enterprise, government or service-provider defense. Where AI might have a role in offensive operations, the thinking goes, is strictly the realm of the three-letter agencies near the Washington, DC, Beltway.

Not necessarily -- and that's also part of the buzz here at Black Hat.

Every conversation I had about AI acknowledged the possibility -- no, the probability -- that these technologies can be turned against us. The good guys have AI-powered cyber software. The bad actors do too, or if not, they will soon.

IBM got everyone talking
The conversation was driven by a well-publicized presentation by IBM of what Big Blue calls DeepLocker, which has enough intelligence to hide in plain sight.

"The DeepLocker class of malware stands in stark contrast to existing evasion techniques used by malware seen in the wild. While many malware variants try to hide their presence and malicious intent, none are as effective at doing so as DeepLocker," Marc Ph. Stoecklin, a senior IBM research scientist, wrote in a paper released simultaneously with the Black Hat presentation.

What is DeepLocker and how does it work? Stoecklin explains:

DeepLocker hides its malicious payload in benign carrier applications, such as a video conference software, to avoid detection by most antivirus and malware scanners. What is unique about DeepLocker is that the use of AI makes the "trigger conditions" to unlock the attack almost impossible to reverse engineer. The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model. The AI model is trained to behave normally unless it is presented with a specific input: the trigger conditions identifying specific victims.
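The key-gating idea Stoecklin describes can be illustrated with a toy sketch. To be clear, this is not IBM's implementation: the attribute string stands in for a real DNN's output (such as a face embedding), and the XOR cipher stands in for strong encryption. The point is that the payload ships encrypted, and the decryption key is only derivable from the intended target's observed attributes, so analysts holding the binary cannot recover the trigger.

```python
import hashlib

def derive_key(observed_attributes):
    # Stand-in for the DNN: a one-way function mapping victim
    # attributes (e.g., a face embedding) to a candidate key.
    return hashlib.sha256(observed_attributes.encode()).digest()

def xor(data, key):
    # Toy symmetric cipher; a real attack would use strong crypto.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: encrypt the payload under the key derived from
# the intended target's attributes, then discard the key. The
# binary ships only the ciphertext plus the key-derivation model.
target = "victim-face-embedding-0042"   # hypothetical identifier
payload = b"malicious-payload"
ciphertext = xor(payload, derive_key(target))

# Victim side: the key is re-derived from whatever the malware
# observes. Wrong target, wrong key, unintelligible bytes.
assert xor(ciphertext, derive_key(target)) == payload
assert xor(ciphertext, derive_key("some-other-user")) != payload
```

Because the key derivation is one-way, reverse engineering the ciphertext reveals nothing about which input unlocks it, which is the property Stoecklin says makes the trigger conditions "almost impossible to reverse engineer."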

DeepLocker is one type of attack, but there are many AI applications that can take the offensive in cyber warfare.

Machine learning, for example, can analyze the results of port scans, looking for weaknesses -- or identifying traps like honeypots. AI-enhanced image processing can help identify humans as potential identity-theft or blackmail targets.
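A crude sketch of that first idea: a classifier scoring scan results for honeypot-like behavior. The features and weights here are illustrative assumptions, not field-tested indicators; a real attacker's model would learn them from labeled scan data.

```python
def honeypot_score(open_ports, banner_uniformity, response_ms):
    """Score 0-100: how honeypot-like does a scanned host look?
    Toy heuristic; a trained model would replace these rules."""
    score = 0
    if len(open_ports) > 20:       # implausibly many open services
        score += 40
    if banner_uniformity > 0.9:    # identical canned banners everywhere
        score += 40
    if response_ms < 1:            # instant replies on every port
        score += 20
    return score

# A host answering on 30 ports with near-identical banners and
# sub-millisecond responses looks more like a trap than a server.
print(honeypot_score(range(30), 0.95, 0.5))
```

The same scoring structure, inverted, flags soft targets instead of traps, which is what makes automated reconnaissance attractive to attackers.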

Indeed, big data techniques are already used to piece together bits of personal data, as well as relationships, to aid identity theft and social engineering.

One advantage that good actors have always held over bad actors: vast amounts of computing infrastructure. Well, not anymore.

Botnets can do a lot more than execute distributed denial-of-service (DDoS) attacks; there's no reason why they can't be harnessed for brute-forcing cryptographic keys or running deep learning applications.
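Such workloads are embarrassingly parallel, which is exactly what a botnet is good at. A minimal sketch, with a hypothetical keyspace split: a command-and-control server hands each bot one slice of a password space, and the bots grind through their slices independently.

```python
import hashlib
import string
from itertools import product

def search_chunk(target_hash, prefix):
    """One bot's share of the work: exhaust the four-character
    lowercase passwords beginning with `prefix`."""
    for combo in product(string.ascii_lowercase, repeat=3):
        candidate = prefix + "".join(combo)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

# The controller just assigns each bot a different prefix; 26 bots
# cover the whole four-character space in parallel. Here we run
# the slices sequentially to show the mechanics.
target = hashlib.sha256(b"gold").hexdigest()
found = next(filter(None, (search_chunk(target, p) for p in string.ascii_lowercase)))
print(found)  # → gold
```

Scale the keyspace up and the per-bot chunk stays constant; the attacker's only cost is recruiting more compromised machines.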

We can take comfort in the fact that top-tier cloud computing providers -- the so-called hyperscalers -- won't knowingly license their CPUs, GPUs, storage and bandwidth to bad actors.

However, that assumes that the hyperscalers know what's going on. With attacks sponsored by entities like foreign governments, who knows what types of workloads are running on Amazon Web Services or Google Cloud Platform? In terms of software sophistication, AI open-source libraries such as TensorFlow or Apache Spark MLlib can be downloaded and run by anyone, friend or foe.

Like every other technology, artificial intelligence has become weaponized. Get ready for artificial malevolence, coming to a hacker near you.


Alan Zeichick is principal analyst at Camden Associates, a technology consultancy in Phoenix, Arizona, specializing in enterprise networking, cybersecurity and software development. Follow him @zeichick.
