Dark Reading is part of the Informa Tech Division of Informa PLC


Operational Security // AI

8/13/2018 08:05 AM
Alan Zeichick

Artificial Malevolence: Bad Actors Know Computer Science, Too

Artificial intelligence and machine learning have many useful applications in legitimate security prevention. However, the buzz at this year's Black Hat is that bad guys are already catching up.

LAS VEGAS -- The cybersecurity industry has embraced artificial intelligence and machine learning. Seemingly every exhibitor at this year's Black Hat conference is touting AI, whether it's for scanning email attachments for malware, detecting patterns in network access, filtering alerts for rapid incident triage or finding anomalies in user behavior.

The unspoken belief is that AI is a good actor's tool.

Given the complexity of the algorithms, the need for large data sets for training or real-time learning, and the expensive, memory-laden servers required, AI and machine learning seem best suited to enterprise, government or service-provider defense. Where AI might have a role in offensive operations, the thinking goes, is strictly the realm of the three-letter agencies near the Washington, DC, Beltway.

Not necessarily -- and that's also part of the buzz here at Black Hat.

Every conversation I had about AI acknowledged the possibility -- no, the probability -- that these technologies can be turned against us. The good guys have AI-powered cyber software. The bad actors do too, or if not, they will soon.

IBM got everyone talking
The conversation was driven by a well-publicized presentation by IBM of what Big Blue calls DeepLocker, proof-of-concept malware with enough intelligence to hide in plain sight.

"The DeepLocker class of malware stands in stark contrast to existing evasion techniques used by malware seen in the wild. While many malware variants try to hide their presence and malicious intent, none are as effective at doing so as DeepLocker," Marc Ph. Stoecklin, a senior IBM research scientist, wrote in a paper released simultaneously with the Black Hat presentation.

What is DeepLocker and how does it work? Stoecklin explains:

DeepLocker hides its malicious payload in benign carrier applications, such as video conferencing software, to avoid detection by most antivirus and malware scanners. What is unique about DeepLocker is that the use of AI makes the "trigger conditions" to unlock the attack almost impossible to reverse engineer. The malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model. The AI model is trained to behave normally unless it is presented with a specific input: the trigger conditions identifying specific victims.

DeepLocker is one type of attack, but there are many AI applications that can take the offensive in cyber warfare.

Machine learning, for example, can analyze the results of port scans, looking for weaknesses -- or identifying traps like honeypots. AI-enhanced image processing can help identify humans as potential identity-theft or blackmail targets.
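To make the port-scan idea concrete, here is a toy classifier that votes on whether a scanned host looks like a honeypot. The features (open-port count, response latency, banner-mismatch rate) and the training data are invented for this sketch; a real pipeline would use far more signals, such as TCP/IP fingerprints and timing jitter:

```python
from math import dist

# (open_port_count, avg_response_ms, banner_mismatch_rate) per scanned
# host, labeled from prior reconnaissance: True = honeypot.
TRAINING = [
    ((3, 40.0, 0.0), False),
    ((5, 55.0, 0.1), False),
    ((40, 5.0, 0.6), True),   # suspiciously many ports, instant replies
    ((35, 8.0, 0.7), True),
]

def looks_like_honeypot(features, k: int = 3) -> bool:
    """k-nearest-neighbor majority vote over labeled scan results."""
    neighbors = sorted(TRAINING, key=lambda row: dist(features, row[0]))[:k]
    votes = sum(label for _, label in neighbors)
    return votes > k / 2
```

The point is less the algorithm than the economics: the same commodity techniques defenders use for anomaly detection work just as well for an attacker deciding which targets are real and which are traps.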

Indeed, big data is already used to piece together bits of personal data and relationships to aid identity theft and social engineering.

One advantage that good actors have always held over bad actors: vast amounts of computing infrastructure. Well, not anymore.

Botnets can do a lot more than execute distributed denial-of-service (DDoS) attacks; there's no reason why they can't be harnessed for cracking cryptographic keys or running deep learning workloads.

We might feel safe assuming that top-tier cloud computing providers -- the so-called hyperscalers -- won't be willing to license their CPUs, GPUs, storage and bandwidth to bad actors.

However, that assumes that the hyperscalers know what's going on. With attacks sponsored by entities like foreign governments, who knows what types of workloads are running on Amazon Web Services or Google Cloud Platform? In terms of software sophistication, AI open-source libraries such as TensorFlow or Apache Spark MLlib can be downloaded and run by anyone, friend or foe.

Like every other technology, artificial intelligence has become weaponized. Get ready for artificial malevolence, coming to a hacker near you.

Alan Zeichick is principal analyst at Camden Associates, a technology consultancy in Phoenix, Arizona, specializing in enterprise networking, cybersecurity and software development. Follow him @zeichick.

Comment  | 
Print  | 
More Insights
Comments
Newest First  |  Oldest First  |  Threaded View
Attackers Leave Stolen Credentials Searchable on Google
Kelly Sheridan, Staff Editor, Dark Reading,  1/21/2021
How to Better Secure Your Microsoft 365 Environment
Kelly Sheridan, Staff Editor, Dark Reading,  1/25/2021
Register for Dark Reading Newsletters
White Papers
Video
Cartoon Contest
Write a Caption, Win an Amazon Gift Card! Click Here
Latest Comment: We need more votes, check the obituaries.
Current Issue
2020: The Year in Security
Download this Tech Digest for a look at the biggest security stories that - so far - have shaped a very strange and stressful year.
Flash Poll
Assessing Cybersecurity Risk in Today's Enterprises
Assessing Cybersecurity Risk in Today's Enterprises
COVID-19 has created a new IT paradigm in the enterprise -- and a new level of cybersecurity risk. This report offers a look at how enterprises are assessing and managing cyber-risk under the new normal.
Twitter Feed
Dark Reading - Bug Report
Bug Report
Enterprise Vulnerabilities
From DHS/US-CERT's National Vulnerability Database
CVE-2021-3272
PUBLISHED: 2021-01-27
jp2_decode in jp2/jp2_dec.c in libjasper in JasPer 2.0.24 has a heap-based buffer over-read when there is an invalid relationship between the number of channels and the number of image components.
CVE-2021-3317
PUBLISHED: 2021-01-26
KLog Server through 2.4.1 allows authenticated command injection. async.php calls shell_exec() on the original value of the source parameter.
CVE-2013-2512
PUBLISHED: 2021-01-26
The ftpd gem 0.2.1 for Ruby allows remote attackers to execute arbitrary OS commands via shell metacharacters in a LIST or NLST command argument within FTP protocol traffic.
CVE-2021-3165
PUBLISHED: 2021-01-26
SmartAgent 3.1.0 allows a ViewOnly attacker to create a SuperUser account via the /#/CampaignManager/users URI.
CVE-2021-1070
PUBLISHED: 2021-01-26
NVIDIA Jetson AGX Xavier Series, Jetson Xavier NX, TX1, TX2, Nano and Nano 2GB, L4T versions prior to 32.5, contains a vulnerability in the apply_binaries.sh script used to install NVIDIA components into the root file system image, in which improper access control is applied, which may lead to an un...