Toxic Data: How 'Deepfakes' Threaten Cybersecurity
The marriage of 'deep learning' and 'fake' makes it possible to create audio and video of real people saying things they never said and doing things they never did.
"Fake news" is one of the most widely used phrases of our times. Never has there been such focus on the importance of being able to trust and validate the authenticity of shared information. But its lesser-understood counterpart, "deepfake," poses a much more insidious threat to the cybersecurity landscape — far more dangerous than a simple hack or data breach.
Deepfake activity was mostly limited to the artificial intelligence (AI) research community until late 2017, when a Reddit user who went by "Deepfakes" — a portmanteau of "deep learning" and "fake" — started posting digitally altered pornographic videos. This machine learning technique makes it possible to create audio and video of real people saying and doing things they never said or did. BuzzFeed brought wider visibility to deepfakes and the ability to digitally manipulate content when it created a video that supposedly showed President Barack Obama mocking Donald Trump. In reality, deepfake technology had been used to map the voice and mouth movements of Jordan Peele, the Hollywood filmmaker, onto genuine footage of President Obama.
This is just one example of a new and rapidly growing wave of attacks. Because they are hard to detect and equally hard to disprove, they have the potential to cause significant harm to society as a whole and to organizations in both the private and public sectors.
The ability to manipulate content in such unprecedented ways generates a fundamental trust problem for consumers and brands, for decision makers and politicians, and for all media as information providers. The emerging era of AI and deep learning technologies will make the creation of deepfakes easier and more "realistic," to an extent where a new perceived reality is created. As a result, the potential to undermine trust and spread misinformation increases like never before.
To date, the industry has been focused on the unauthorized access of data. But the motivation behind an attack, and its anatomy, have changed. Instead of stealing information or holding it for ransom, a new breed of hackers now attempts to modify data while leaving it in place.
One study from Sonatype, a provider of DevOps-native tools, predicts that, by 2020, 50% of organizations will have suffered damage caused by fraudulent data and software. Companies today must safeguard the chain of custody for every digital asset in order to detect and deter data tampering.
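One way to make that chain of custody verifiable is to anchor every change to a digital asset in a tamper-evident log. The sketch below is illustrative only: the custody events and field names are hypothetical, and the hash chain is kept in memory rather than in a hardened store, but it shows how each entry commits to the one before it, so any after-the-fact modification breaks the chain and becomes detectable.

```python
# Illustrative sketch (not a production design): a hash-chained audit log
# in which every entry commits to the previous one, so any later change
# to a stored record breaks the chain and can be detected.
import hashlib
import json

def entry_hash(record, prev_hash):
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(records):
    """Return the chained hash for each record, in order."""
    hashes, prev = [], ""
    for record in records:
        prev = entry_hash(record, prev)
        hashes.append(prev)
    return hashes

def verify_chain(records, stored_hashes):
    """Recompute the chain and compare it with the stored hashes."""
    return build_chain(records) == stored_hashes

# Hypothetical custody events for a digital asset.
events = [
    {"asset": "q3-earnings.xlsx", "action": "created", "by": "analyst"},
    {"asset": "q3-earnings.xlsx", "action": "approved", "by": "cfo"},
]
chain = build_chain(events)

events[0]["by"] = "intruder"        # simulated tampering after the fact
print(verify_chain(events, chain))  # False: the chain no longer matches
```

Because each hash depends on everything that came before it, an attacker cannot quietly rewrite one record without also rewriting, and exposing, every subsequent entry.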
The True Cost of Data Manipulation
There are many scenarios in which altered data can serve cybercriminals better than stolen information. One is financial gain: A competitor could tamper with financial account databases, using a simple attack that multiplies all of the company's accounts receivable by a small random factor. A seemingly small variation in the data could go unnoticed by a casual observer, yet it could completely sabotage earnings reporting and ruin the company's relationships with its customers, partners, and investors.
Another motivation is changing perception. Nation-states could intercept news reports coming out of an event and alter them before they reach their destination. Intrusions that undercut data integrity have the potential to be a powerful arm of propaganda and misinformation in the hands of foreign governments.
Data tampering can also have a very real effect on the lives of individuals, especially within the healthcare and pharmaceutical industries. Attackers could alter information about the medications that patients are prescribed, instructions on how and when to take them, or records detailing allergies.
What do organizations need to consider to ensure that their digital assets remain safe from tampering? First, software developers must focus on building trust into every product, process, and transaction by looking more deeply into the enterprise systems and processes that store and exchange data. In the same way that data is backed up, mirrored, or encrypted, it needs to be continually validated to ensure its authenticity. This is especially critical if that data is being used by AI or machine learning applications to run simulations, to interact with consumers or partners, or for mission-critical decision-making and business operations.
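As a minimal sketch of what that continual validation could look like, the example below stores an authentication tag alongside each record and recomputes it before the data is used. The record fields, the invoice, and the key handling are all hypothetical; in practice the signing key would live in a secrets manager and the checks would run automatically in the data pipeline.

```python
# Minimal sketch, assuming a secret signing key kept outside the database:
# each record is stored with an HMAC tag, and the tag is recomputed before
# downstream systems trust the data. Record fields are hypothetical.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key--use-a-secrets-manager-in-practice"

def sign_record(record):
    """Compute an HMAC-SHA256 tag over a canonical encoding of the record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def is_authentic(record, tag):
    """Accept the record only if the stored tag matches the recomputed one."""
    return hmac.compare_digest(sign_record(record), tag)

invoice = {"customer": "ACME Corp", "accounts_receivable": 10450.00}
tag = sign_record(invoice)

invoice["accounts_receivable"] *= 1.07   # subtle tampering a human might miss
print(is_authentic(invoice, tag))        # False: validation flags the change
```

A check like this catches exactly the kind of subtle multiplication attack described above, because even a tiny change to a record produces a completely different tag.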
The consequences of deepfake attacks are too large to ignore. It's no longer enough to install and maintain security systems that simply reveal when digital assets have been hacked and potentially stolen. The recent breaches at Marriott and Quora added those companies to the growing list of organizations that have had their consumer data exposed. Now, companies also need to be able to validate the authenticity of their data, processes, and transactions.
If they can't, that data is toxic.