Dark Reading is part of the Informa Tech Division of Informa PLC


Expect an Increase in Attacks on AI Systems

Companies are quickly adopting machine learning but not focusing on how to verify systems and produce trustworthy results, new report shows.

Research into methods of attacking machine-learning and artificial-intelligence systems has surged, with nearly 2,000 papers published on the topic in one repository over the last decade, but organizations have not adopted commensurate strategies to ensure that the decisions made by AI systems are trustworthy.

A new report from AI research firm Adversa looked at a number of measures of the adoption of AI systems, from the number and types of research papers on the topic to government initiatives that aim to provide policy frameworks for the technology. The firm found that AI is being rapidly adopted, but often without the defenses needed to protect AI systems from targeted attacks. So-called adversarial AI attacks include bypassing AI systems, manipulating results, and exfiltrating the data the model was trained on.
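The data-exfiltration category includes model-extraction attacks, in which an adversary reconstructs a model's parameters purely by querying its prediction interface. As a minimal sketch, assuming the target is a black-box linear model behind a hypothetical query API (the model, weights, and interface here are invented for illustration), a handful of probe queries suffice:

```python
# Toy model-extraction attack: recovering the parameters of a secret
# linear model using only black-box queries. All numbers are invented
# for illustration; real models and APIs are far more complex.

# The "victim" service: a secret linear model behind a query interface.
W_SECRET = [2.0, -3.0, 0.5]
B_SECRET = 1.0

def query(x):
    """Black-box prediction API: the attacker sees only the output."""
    return sum(w * xi for w, xi in zip(W_SECRET, x)) + B_SECRET

# The attacker probes with the zero vector and the standard basis vectors.
n = 3
bias_est = query([0.0] * n)                  # f(0) = b
weights_est = []
for i in range(n):
    e = [0.0] * n
    e[i] = 1.0
    weights_est.append(query(e) - bias_est)  # f(e_i) - b = w_i

print(weights_est, bias_est)  # exact recovery for a linear model
```

For a linear model, n + 1 queries recover the parameters exactly; against real nonlinear models, attackers instead fit a surrogate model to many query-response pairs, which is why rate limiting and query monitoring matter.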


These sorts of attacks are not yet numerous but have happened, and they will happen with greater frequency, says Eugene Neelou, co-founder and chief technology officer of Adversa.

"Although our research corpus is mostly collected from academia, they have attack cases against AI systems such as smart devices, online services, or tech giants' APIs," he says. "It's only a question of time when we see an explosion of new attacks against real-world AI systems and they will become as common as spam or ransomware."

Research into adversarial attacks on machine learning and AI systems has exploded in recent years, with more than 1,500 papers on AI security published in 2019 on the scientific publishing site ArXiv.org, up from 56 in 2016, according to Adversa's Secure and Trusted AI report.

Yet that is only a single type of threat. Adversarial attacks on AI systems may be the largest category, and certainly the one garnering the most attention, but there are other major classes of threat as well, says Gary McGraw, co-founder and director of the Berryville Institute of Machine Learning (BIML). The group of machine-learning researchers at BIML identified 78 distinct threats to machine-learning models and AI systems. Top threats also include data poisoning, online system manipulation, attacks on common ML models, and data exfiltration, according to the BIML report, An Architectural Risk Analysis of Machine Learning Systems.
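To make the data-poisoning threat concrete, here is a minimal sketch against a toy one-dimensional nearest-centroid classifier (the data points, labels, and attack inputs are all invented for illustration): by injecting a few mislabeled training points, an attacker drags the decision threshold so that a chosen input lands on the wrong side.

```python
# Toy data-poisoning attack against a 1-D nearest-centroid classifier.
# All data points and labels are invented for illustration.

def train(points):
    """Fit a midpoint threshold between the two class centroids."""
    c0 = [x for x, y in points if y == 0]
    c1 = [x for x, y in points if y == 1]
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    threshold = (m0 + m1) / 2
    # Predict class 1 when the input falls on the class-1 centroid's side.
    return lambda x: int((x > threshold) == (m1 > m0))

# Clean training set: class 0 clusters low, class 1 clusters high.
clean = [(1, 0), (2, 0), (3, 0), (4, 0), (7, 1), (8, 1), (9, 1), (10, 1)]
model = train(clean)
print(model(4.8))  # the borderline input is classified as class 0

# Poisoning: the attacker injects low-valued points mislabeled as class 1,
# dragging the class-1 centroid (and thus the threshold) downward.
poisoned = clean + [(0, 1)] * 4
model_p = train(poisoned)
print(model_p(4.8))  # the same input now lands in class 1
```

Four mislabeled points out of twelve are enough to move the threshold from 5.5 to about 3.4 in this toy, which is why provenance checks and outlier filtering on training data are standard poisoning defenses.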

Late last year, Mitre, Microsoft, and other organizations—including BIML—released the Adversarial ML Threat Matrix, which includes 16 categories of threats.

"One of the things you should do right off the bat is to familiarize yourself with those risks, and think about whether any of those risks affect your company," McGraw says. "If you don't think about them while you are coding up your ML systems, you are going to be playing catch up later."

Image Problems

The variety of potential attacks is staggering. To date, however, researchers have focused mainly on attacking image-recognition algorithms and other vision-related machine-learning models, with 65% of adversarial machine-learning papers having a vision focus, according to the Adversa analysis. In July, for example, researchers found a variety of ways to attack facial recognition algorithms. The remaining papers focused on analytical attacks (18%), language attacks (13%), and attacks on the autonomy of algorithms (4%), according to Adversa.

The popularity of using adversarial machine learning to undermine image- and video-related algorithms is not because other applications of machine learning are less vulnerable, but because the attacks are, by definition, easier to see, Adversa stated in the report.

"Image data is the most popular target because it is easier to attack and more convincing to demonstrate vulnerabilities in AI systems with visible evidence," the report stated. "This is also correlated to the attractiveness of attacking computer vision systems due to their rising adoption."
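The canonical image attack in that literature is the fast gradient sign method (FGSM), which nudges every pixel by a small, bounded amount in the direction that most increases the model's loss. A minimal sketch against a toy four-pixel logistic "image" classifier (the weights, pixel values, and perturbation budget are all invented for illustration):

```python
import numpy as np

# Toy FGSM evasion attack on a hand-built logistic "image" classifier.
# Weights, pixel values, and epsilon are invented for illustration.
w = np.array([2.0, -1.0, 1.5, -0.5])   # weights of a "trained" linear model
b = 0.0

def predict(x):
    """Probability that the input belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.8, 0.2, 0.7, 0.4])     # benign input, pixels in [0, 1]
y = 1                                  # its true label

# FGSM: step each pixel by eps in the sign of the loss gradient w.r.t. x.
# For logistic loss, grad_x = (p - y) * w.
eps = 0.5
grad = (predict(x) - y) * w
x_adv = np.clip(x + eps * np.sign(grad), 0.0, 1.0)

print(round(predict(x), 3))      # confidently class 1 on the clean input
print(round(predict(x_adv), 3))  # below 0.5: the prediction flips
```

The perturbation is capped at eps per pixel, which is the sense in which adversarial images can stay visually close to the original while flipping the model's decision; real attacks use the same idea against deep networks via backpropagated gradients.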

The report also showed researchers focusing on dozens of applications, with the largest share, 43%, comprising image classification applications, and facial recognition and data analytics applications coming in a distant second and third with 7% and 6% shares, respectively.

Companies should raise awareness of the security and trust considerations of machine-learning algorithms among everyone involved in developing AI systems. In addition, businesses should conduct AI security assessments based on threat models and implement continuous security monitoring of AI systems, Adversa's Neelou says.

"Organizations should start an AI security program and develop practices for a secure AI lifecycle," he says. "This is relevant regardless of whether they develop their own AIs or use external AI capabilities."

In addition, firms should investigate the broader range of threats that affect their use of machine-learning systems, says BIML's McGraw. By considering the full range of AI threats, companies will be better prepared not just for future attacks but also for poorly built AI and machine-learning systems that could lead to bad business decisions.

"The more people that think about this, the better it will be for everyone," he says.

