Dark Reading is part of the Informa Tech Division of Informa PLC


7/10/2019
10:00 AM
Steve Durbin

Manipulated Machine Learning Sows Confusion

Machine learning, and neural networks in particular, will become a prime target for those aiming to manipulate or disrupt dependent products and services.

Attackers will exploit vulnerabilities and flaws in machine learning systems, confusing and deceiving algorithms in order to manipulate outcomes for nefarious purposes.

Impacts will be felt across a range of industries. Malicious attacks may result in automated vehicles changing direction unexpectedly, high-frequency trading applications making poor financial decisions and airport facial recognition software failing to recognize terrorists. Organizations will face significant financial, regulatory and reputational damage and lives will be put at risk if machine learning systems are compromised.

Nation states, terrorists, hacking groups, hacktivists and even rogue competitors will turn their attention to manipulating machine learning systems that underpin products and services. Attacks that are undetectable by humans will target the integrity of information -- widespread chaos will ensue for those dependent on services powered primarily by machine learning.

What is the justification for this threat?
A range of industries will increasingly adopt machine learning systems and neural networks over the coming years in order to help make faster, smarter decisions. They will be embedded into a series of business operations such as marketing, medicine, retail, automated vehicles and military applications. The explosion of data from connected sensors, IoT devices and social media outputs will drive companies to use machine learning to automate processes, with minimal human oversight. As these technologies begin to underpin business models, they will become a prime target.

Academics have already provided several proof-of-concept studies highlighting how machine learning can be confused. Students at MIT developed a means of tricking neural network-based image recognition software built on Google's open source software library TensorFlow, forcing the neural network to miscategorize an image of a turtle as a rifle. The real-world implications of deliberately miscategorized inputs would extend across every industry that adopts machine learning for its processes and services, significantly threatening human life and damaging corporate reputations.

Attackers can place 'noise' over an image or sound that is undetectable by humans but is picked up by the neural network, fooling it into miscategorizing what it sees. Academics have, for example, fooled a neural network in a connected vehicle into miscategorizing a stop sign as a different road sign: with graffiti sprayed over the stop sign, the neural network could no longer categorize it correctly, whereas a human driver would still recognize it as a stop sign despite the vandalism. If connected vehicles were fooled in this way, there would be significant risk to human life.
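To illustrate the principle, here is a minimal sketch of a gradient-based evasion attack in the spirit of the Fast Gradient Sign Method (FGSM). It uses a hypothetical toy linear classifier standing in for a trained neural network; the weights and inputs are invented for illustration, but the core idea is the same: nudge every input dimension by a tiny amount in the direction that most hurts the correct class, producing a perturbation too small for a human to notice yet large enough to flip the model's decision.

```python
import numpy as np

# Hypothetical toy linear "classifier" standing in for a neural network.
rng = np.random.default_rng(0)
w = rng.normal(size=64)          # pretend these are trained weights

def predict(x):
    return int(w @ x > 0)        # class 1 if the score is positive

def fgsm_perturb(x, eps):
    # For a linear score s = w.x, the gradient with respect to the input
    # is just w, so stepping each "pixel" by -eps * sign(w) lowers the
    # score as fast as possible under an L-infinity bound of eps.
    return x - eps * np.sign(w)

x = 0.2 * np.sign(w)             # clean input, confidently class 1
x_adv = fgsm_perturb(x, eps=0.5) # each pixel changed by at most 0.5

print(predict(x))                # 1 -- correct classification
print(predict(x_adv))            # 0 -- small uniform noise flips the label
```

Against a real deep network the gradient is computed by backpropagation rather than read off directly, but the attack surface is identical: any model exposing (or allowing estimation of) input gradients can be steered this way.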

Neural networks used in connected vehicles typically gain significant media scrutiny. In March of 2018, a self-driving Uber vehicle travelling at 40mph in Tempe, Arizona, fatally struck a pedestrian crossing the street in the dark as a result of the vehicle’s perception system miscategorizing a bicycle she was wheeling as something else. There have also been two cases where image recognition software in Tesla vehicles failed to categorize fire trucks and other vehicles, causing a fatality in one case. While these examples were accidental, they highlight the potential for malicious attacks.

How should your organization prepare?
The damage a compromised machine learning system may bring could be life threatening. Organizations should assess their offerings and dependency on machine learning systems before attackers exploit related vulnerabilities.

Short-term actions include: identifying systems that use machine learning, especially recognition-based neural networks, and determining their criticality to the business; employing technical experts in machine learning and recognition-based neural networks; and gaining assurance that machine learning systems provided by external parties are secure by design. In the longer term, organizations should continuously monitor the dark web for related vulnerabilities and exploit code, and lobby governments and regulators to introduce reporting requirements that demand clear outlines of how data used by algorithms in machine learning systems is used and protected.

— Steve Durbin is managing director of the Information Security Forum (ISF). His main areas of focus include the emerging security threat landscape, cybersecurity, BYOD, the cloud and social media across both the corporate and personal environments. Previously, he was senior vice president at Gartner.
