
Security Management
7/10/2019 10:00 AM
Steve Durbin

Manipulated Machine Learning Sows Confusion

Machine learning, and neural networks in particular, will become a prime target for those aiming to manipulate or disrupt dependent products and services. Attackers will exploit vulnerabilities and flaws in machine learning systems by confusing and deceiving algorithms in order to manipulate outcomes for nefarious purposes.

Impacts will be felt across a range of industries. Malicious attacks may result in automated vehicles changing direction unexpectedly, high-frequency trading applications making poor financial decisions and airport facial recognition software failing to recognize terrorists. Organizations will face significant financial, regulatory and reputational damage and lives will be put at risk if machine learning systems are compromised.

Nation states, terrorists, hacking groups, hacktivists and even rogue competitors will turn their attention to manipulating machine learning systems that underpin products and services. Attacks that are undetectable by humans will target the integrity of information; widespread chaos will ensue for those dependent on services powered primarily by machine learning.

What is the justification for this threat?
A range of industries will increasingly adopt machine learning systems and neural networks over the coming years in order to help make faster, smarter decisions. They will be embedded in business operations such as marketing, medicine, retail, automated vehicles and military applications. The explosion of data from connected sensors, IoT devices and social media outputs will drive companies to use machine learning to automate processes, with minimal human oversight. As these technologies begin to underpin business models, they will become a prime target.

Academics have already published several proof-of-concept studies showing how machine learning can be confused. Students at MIT developed a means of tricking neural network-based image recognition software built on Google's open source software library TensorFlow, forcing the neural network to miscategorize an image of a turtle as a rifle. The real-world implications of deliberately miscategorized inputs would span every industry that adopts machine learning for processes and services, significantly threatening human life and damaging corporate reputations.

Attackers can place 'noise' over an image or sound that is undetectable by humans but is picked up by the neural network, fooling it into miscategorizing what it sees. Academics have, for example, fooled a neural network in a connected vehicle into miscategorizing a stop sign as a different road sign: graffiti-like markings sprayed on the sign prevented the network from categorizing it correctly, even though a human driver would still recognize a vandalized stop sign for what it is. If connected vehicles were fooled in this way, there would be significant risk to human life.
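The gradient-sign trick behind such adversarial noise can be sketched on a toy classifier. This is a minimal illustration, not a real attack: the weights, input and perturbation budget below are invented, and real attacks target deep networks, where a far smaller (genuinely imperceptible) perturbation suffices because inputs have thousands of dimensions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny stand-in "model": fixed logistic-regression weights that
# classify a 4-"pixel" input. All values are invented for illustration.
w = np.array([3.0, -4.0, 2.0, 1.0])
b = 0.2

def predict(x):
    return sigmoid(np.dot(w, x) + b)  # probability of class 1

x = np.array([0.9, 0.1, 0.8, 0.7])   # clean input, confidently class 1
p_clean = predict(x)

# Gradient of the loss (for true label 1) w.r.t. the input is (p - 1) * w.
# Stepping along sign(gradient) pushes the model away from the true class.
grad = (p_clean - 1.0) * w
eps = 0.5                             # perturbation budget per component
x_adv = x + eps * np.sign(grad)

p_adv = predict(x_adv)
print(f"clean: {p_clean:.3f}  adversarial: {p_adv:.3f}")
```

The same principle, applied to a deep image classifier, is what turns a lightly defaced stop sign into "a different road sign" in the network's eyes while leaving it perfectly legible to a human.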

Neural networks used in connected vehicles attract significant media scrutiny. In March 2018, a self-driving Uber vehicle traveling at 40mph in Tempe, Arizona, fatally struck a pedestrian crossing the street in the dark after the vehicle's perception system miscategorized the bicycle she was wheeling. There have also been two cases where image recognition software in Tesla vehicles failed to categorize fire trucks and other vehicles, causing a fatality in one case. While these incidents were accidental, they highlight the potential for malicious attacks.

How should your organization prepare?
The damage a compromised machine learning system may bring could be life threatening. Organizations should assess their offerings and dependency on machine learning systems before attackers exploit related vulnerabilities.

In the short term, organizations should identify systems that use machine learning, especially recognition-based neural networks, and determine their criticality to the business; employ technical experts in machine learning and recognition-based neural networks; and gain assurance that machine learning systems provided by external parties are secure by design. In the longer term, they should continuously monitor the dark web for vulnerabilities and exploit code, and lobby governments and regulators to introduce reporting requirements that demand clear outlines of how the data used by machine learning algorithms is used and protected.
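The first short-term action, an inventory of machine learning systems triaged by criticality, could be sketched as follows. The system names, fields and scores here are hypothetical examples, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class MLSystem:
    name: str
    uses_recognition_nn: bool   # recognition-based neural network?
    criticality: int            # 1 (low) to 5 (business-critical)

# Invented example inventory entries.
inventory = [
    MLSystem("fraud-scoring", uses_recognition_nn=False, criticality=4),
    MLSystem("warehouse-vision", uses_recognition_nn=True, criticality=5),
    MLSystem("marketing-recommender", uses_recognition_nn=False, criticality=2),
]

# Triage: recognition-based neural networks and high-criticality
# systems first, since those are the prime targets the article names.
priority = sorted(
    (s for s in inventory if s.uses_recognition_nn or s.criticality >= 4),
    key=lambda s: (s.uses_recognition_nn, s.criticality),
    reverse=True,
)
for s in priority:
    print(s.name, s.criticality)
```

Even a register this simple gives security teams a starting point for the assurance and expert-staffing steps that follow.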

— Steve Durbin is managing director of the Information Security Forum (ISF). His main areas of focus include the emerging security threat landscape, cybersecurity, BYOD, the cloud and social media across both the corporate and personal environments. Previously, he was senior vice president at Gartner.
