Dark Reading is part of the Informa Tech Division of Informa PLC


Expect an Increase in Attacks on AI Systems

Companies are quickly adopting machine learning but not focusing on how to verify systems and produce trustworthy results, new report shows.

Research into methods of attacking machine-learning and artificial-intelligence systems has surged—with nearly 2,000 papers published on the topic in one repository over the last decade—but organizations have not adopted commensurate strategies to ensure that the decisions made by AI systems are trustworthy.

A new report from AI research firm Adversa looked at a number of measures of AI adoption, from the number and types of research papers on the topic to government initiatives that aim to provide policy frameworks for the technology. The researchers found that AI is being rapidly adopted, but often without the defenses needed to protect AI systems from targeted attacks. So-called adversarial AI attacks include bypassing AI systems, manipulating results, and exfiltrating the data on which a model was trained.
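The first of those categories—evasion, where an attacker perturbs an input so a model misclassifies it—can be illustrated with a minimal sketch. The example below uses an invented toy linear classifier rather than a real image model, and applies a gradient-sign perturbation in the style of the fast gradient sign method (FGSM); all weights and values are made up for illustration:

```python
import numpy as np

# Toy linear classifier standing in for a deployed model:
# predicts class 1 when w.x + b > 0. Weights are invented.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.3, -0.2, 0.4])   # a "clean" input the model classifies as 1

def predict(x):
    return int(w @ x + b > 0)

# Evasion attack: nudge the input against the gradient of the model's score.
# For a linear model, the gradient of the score with respect to x is just w,
# so stepping by -eps * sign(w) lowers the score (the FGSM-style update).
eps = 0.5
x_adv = x - eps * np.sign(w)

assert predict(x) == 1      # clean input: class 1
assert predict(x_adv) == 0  # small perturbation flips the decision
```

Against deep networks the same idea applies, with the gradient computed by backpropagation; the perturbation can be small enough to be imperceptible to a human while still flipping the model's output.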

Related Content:

Microsoft Says It's Time to Attack Your Machine-Learning Models

Special Report: Tech Insights: Detecting and Preventing Insider Data Leaks

New From The Edge: Cybersecurity and the Way to a Balanced Life

These sorts of attacks are not yet numerous, but they have happened and will happen with greater frequency, says Eugene Neelou, co-founder and chief technology officer of Adversa.

"Although our research corpus is mostly collected from academia, they have attack cases against AI systems such as smart devices, online services, or tech giant's APIs," he says. "It's only a question of time when we see an explosion of new attacks against real-world AI systems and they will become as common as spam or ransomware."

Research into adversarial attacks on machine learning and AI systems has exploded in recent years, with more than 1,500 papers on AI security published in 2019 on the scientific publishing site ArXiv.org, up from 56 in 2016, according to Adversa's Secure and Trusted AI report.

Yet that is only a single type of threat. Adversarial attacks on AI systems may be the largest category—and it's certainly the one garnering the most attention—but there are other major threats as well, says Gary McGraw, co-founder and director of the Berryville Institute of Machine Learning (BIML). The group of machine-learning researchers at BIML identified 78 distinct threats to machine-learning models and AI systems. Top threats include data poisoning, online system manipulation, attacks on common ML models, and data exfiltration, according to the BIML report, An Architectural Risk Analysis of Machine Learning Systems.
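Data poisoning, one of the top threats BIML names, works at training time rather than inference time: the attacker injects mislabeled or malicious samples so the trained model misbehaves later. A minimal sketch, using an invented one-dimensional dataset and a nearest-centroid classifier purely for illustration:

```python
import numpy as np

# Invented 1-D training data: class 0 clusters near 0, class 1 near 1.
clean_x = np.array([0.0, 0.2, 0.9, 1.1])
clean_y = np.array([0, 0, 1, 1])

def centroid_predict(x, xs, ys):
    # Nearest-centroid classifier: assign x to the closer class mean.
    c0 = xs[ys == 0].mean()
    c1 = xs[ys == 1].mean()
    return int(abs(x - c1) < abs(x - c0))

# Data poisoning: the attacker injects points far to the right that are
# falsely labeled class 0, dragging class 0's centroid past class 1's.
poison_x = np.append(clean_x, [2.0, 2.2])
poison_y = np.append(clean_y, [0, 0])

query = 1.2
assert centroid_predict(query, clean_x, clean_y) == 1   # clean model: class 1
assert centroid_predict(query, poison_x, poison_y) == 0  # poisoned model: class 0
```

With just two poisoned samples out of six, the model's decision for the same query flips—which is why training pipelines that ingest user-supplied or scraped data need provenance checks and outlier filtering.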

Late last year, Mitre, Microsoft, and other organizations—including BIML—released the Adversarial ML Threat Matrix, which includes 16 categories of threats.

"One of the things you should do right off the bat is to familiarize yourself with those risks, and think about whether any of those risks affect your company," McGraw says. "If you don't think about them while you are coding up your ML systems, you are going to be playing catch-up later."

Image Problems

The variety of potential attacks is staggering. To date, however, researchers have focused mainly on attacking image-recognition algorithms and other vision-related machine-learning models, with 65% of adversarial machine-learning papers having a vision focus, according to the Adversa analysis. In July, for example, researchers found a variety of ways to attack facial recognition algorithms. The remaining papers focused on analytical attacks (18%), language attacks (13%), and attacks on the autonomy of algorithms (4%), according to Adversa.

The popularity of using adversarial machine learning to undermine image- and video-related algorithms is not because other applications of machine learning are less vulnerable, but because the attacks are, by definition, easier to see, Adversa stated in the report.

"Image data is the most popular target because it is easier to attack and more convincing to demonstrate vulnerabilities in AI systems with visible evidence," the report stated. "This is also correlated to the attractiveness of attacking computer vision systems due to their rising adoption."

The report also showed that researchers focused on dozens of applications. Image classification accounted for the largest share, at 43%, with facial recognition and data analytics applications coming in a distant second and third at 7% and 6%, respectively.

Companies should raise awareness of the security and trust considerations of machine-learning algorithms with everyone involved in developing AI systems. In addition, businesses should conduct AI security assessments based on threat models, and implement continuous security monitoring of AI systems, Adversa AI's Neelou says.

"Organizations should start an AI security program and develop practices for a secure AI lifecycle," he says. "This is relevant regardless of whether they develop their own AIs and use external AI capabilities."

In addition, firms should investigate the broader range of threats that affect their use of machine-learning systems, says BIML's McGraw. By considering the full range of AI threats, companies will be prepared not just for future attacks but also for poorly built AI and machine-learning systems that could lead to bad business decisions.

"The more people that think about this, the better it will be for everyone," he says.

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline ...
