Dark Reading is part of the Informa Tech Division of Informa PLC


Application Security

12/27/2018
10:30 AM

Toxic Data: How 'Deepfakes' Threaten Cybersecurity

The joining of 'deep learning' and 'fake news' makes it possible to create audio and video of real people saying things they never said or doing things they never did.

"Fake news" is one of the most widely used phrases of our times. Never has there been such focus on the importance of being able to trust and validate the authenticity of shared information. But its lesser-understood counterpart, "deepfake," poses a much more insidious threat to the cybersecurity landscape — far more dangerous than a simple hack or data breach.

Deepfake activity was mostly limited to the artificial intelligence (AI) research community until late 2017, when a Reddit user who went by "Deepfakes" — a portmanteau of "deep learning" and "fake" — started posting digitally altered pornographic videos. This machine learning technique makes it possible to create audio and video of real people saying and doing things they never said or did. Buzzfeed brought broader visibility to deepfakes, and to digitally manipulated content in general, when it created a video that supposedly showed President Barack Obama mocking Donald Trump. In reality, deepfake technology had been used to superimpose President Obama's face onto footage of the Hollywood filmmaker Jordan Peele.

This is just one example of a new wave of attacks that are growing quickly. They have the potential to cause significant harm to society overall and to organizations within the private and public sectors because they are hard to detect and equally hard to disprove.

The ability to manipulate content in such unprecedented ways generates a fundamental trust problem for consumers and brands, for decision makers and politicians, and for all media as information providers. The emerging era of AI and deep learning technologies will make the creation of deepfakes easier and more "realistic," to an extent where a new perceived reality is created. As a result, the potential to undermine trust and spread misinformation increases like never before.

To date, the industry has focused on unauthorized access to data. But the motivation behind an attack, and its anatomy, have changed. Instead of stealing information or holding it for ransom, a new breed of hackers now attempts to modify data while leaving it in place.

One study from Sonatype, a provider of DevOps-native tools, predicts that, by 2020, 50% of organizations will have suffered damage caused by fraudulent data and software. Companies today must safeguard the chain of custody for every digital asset in order to detect and deter data tampering.

The True Cost of Data Manipulation
There are many scenarios in which altered data can serve cybercriminals better than stolen information. One is financial gain: A competitor could tamper with financial account databases using a simple attack to multiply all the company's account receivables by a small random number. While a seemingly small variability in the data could go unnoticed by a casual observer, it could completely sabotage earnings reporting, which would ruin the company's relationship with its customers, partners, and investors.
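The scenario above turns on the fact that a subtle, fractional change to financial figures is invisible to a casual observer. A cryptographic fingerprint of each record is not: even a tiny alteration produces a completely different hash. The sketch below illustrates the idea; the record fields and the 0.4% multiplier are hypothetical, chosen only to mirror the example in the text.

```python
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Hash a canonical JSON serialization of a record with SHA-256."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical accounts-receivable record and a tampered copy
# (receivable silently multiplied by a small factor).
original = {"account": "ACME-001", "receivable_usd": 125000.00}
tampered = {"account": "ACME-001", "receivable_usd": 125000.00 * 1.004}

# A human scanning the ledger may miss the change; the hashes cannot match.
print(record_fingerprint(original) == record_fingerprint(tampered))  # False
```

Comparing stored fingerprints against freshly computed ones flags the tampering immediately, even when the numeric drift is far too small to notice by eye.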

Another motivation is changing perception. Nation-states could intercept news reports that are coming from an event and change those reports before they reach their destination. Intrusions that undercut data integrity have the potential to be a powerful arm of propaganda and misinformation by foreign governments.

Data tampering can also have a very real effect on the lives of individuals, especially within the healthcare and pharmaceutical industries. Attackers could alter information about the medications that patients are prescribed, instructions on how and when to take them, or records detailing allergies.

What do organizations need to consider to ensure that their digital assets remain safe from tampering? First, software developers must focus on building trust into every product, process, and transaction by looking more deeply into the enterprise systems and processes that store and exchange data. In the same way that data is backed up, mirrored, or encrypted, it continually needs to be validated to ensure its authenticity. This is especially critical if that data is being used by AI or machine learning applications to run simulations, to interact with consumers or partners, or for mission-critical decision-making and business operations.
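One minimal way to make that continual validation concrete is to store a keyed digest ("seal") alongside each digital asset at write time and re-verify it before the asset feeds any downstream process. This is an illustrative sketch only — the key handling, field names, and function names are assumptions, not any vendor's API; in practice the key would live in a key-management service, not in source code.

```python
import hashlib
import hmac

# Hypothetical key for illustration; in production, fetch from a KMS.
SECRET_KEY = b"replace-with-a-managed-key"

def seal(asset_bytes: bytes) -> str:
    """Produce a keyed digest (HMAC-SHA256) stored alongside the asset."""
    return hmac.new(SECRET_KEY, asset_bytes, hashlib.sha256).hexdigest()

def verify(asset_bytes: bytes, stored_seal: str) -> bool:
    """Re-validate the asset before it is used by AI/ML or business logic."""
    return hmac.compare_digest(seal(asset_bytes), stored_seal)

# Example: a (hypothetical) medical record sealed at write time.
doc = b'{"patient": "A-17", "allergy": "penicillin"}'
stored = seal(doc)

print(verify(doc, stored))                                   # True
print(verify(doc.replace(b"penicillin", b"none"), stored))   # False
```

The keyed construction matters: with a plain hash, an attacker who can modify the record can also recompute its digest, so the seal must depend on a secret (or, as the article suggests, be anchored somewhere the attacker cannot rewrite).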

The consequences of deepfake attacks are too large to ignore. It's no longer enough to install and maintain security systems that merely reveal when digital assets have been hacked and potentially stolen. The recent hacks of Marriott and Quora are only the latest additions to the growing list of companies that have had their consumer data exposed. Now, companies also need to be able to validate the authenticity of their data, processes, and transactions.

If they can't, it's toxic.

Dirk Kanngiesser is the co-founder and CEO of Cryptowerk, a provider of data integrity solutions that make it easy to seal digital assets and prove their authenticity at scale using blockchain technology. With more than 25 years of technology leadership experience, Dirk has ...