How to Navigate the Mitigation of Deepfakes
Deepfakes are already several steps ahead of the technology that can detect and warn us about them.
Roughly 10 years ago, the stars aligned for the world of cybercrime. Cybercriminals had already been around for decades, routinely using phishing and malware. But two other technologies triggered the cybercrime boom.
One was the adoption of anonymity networks, or darknets, such as Tor. The other was the introduction of cryptocurrency, in the form of Bitcoin. Together, darknets and cryptocurrency allowed cybercriminals to communicate and trade securely, creating a cascading effect: new cybercrime services were offered, which in turn lowered the barrier to launching phishing and malware attacks. The opportunity to earn cash without much risk of detection lured newcomers into cybercrime. And today, cybercrime poses the biggest online threat to businesses.
Misinformation and disinformation campaigns are heading in the same direction. "Psyops" might be a modern term, but influence campaigns have been around for centuries. Never before, however, has it been so easy to reach a massive number of targets, amplify a message, and, if needed, even distort reality.
How? Social media, bots, and deepfakes.
The process of creating online personas and bots, and of injecting the message you want your targets to see into fringe forums and niche discussion groups, has been automated and perfected. Once the information is seeded, it's just a matter of time until it grows and branches out, hitting mainstream social networks and media and getting organic amplification.
To make things worse, as Whitney Phillips discusses in "The Oxygen of Amplification," merely reporting on false claims and fake news, even with the intention of proving them baseless, amplifies the original message and helps distribute it to the masses. And now we have technology that lets anyone create deepfakes relatively easily, without writing any code. A low bar to use the tech, channels to distribute it, and a way to monetize it: the cybercrime cycle pattern reemerges.
While some view deepfake technology as a future threat, the FBI warned businesses in March that they should expect to be hit with different forms of synthetic content.
Unfortunately, these types of attacks have already happened, most notably the deepfake audio heist that netted the threat actors $35 million. Voice synthesis, in which a person's voice is sampled and then used to commit such a crime, is a stark warning for authentication systems that rely on voice recognition, and perhaps an early warning for face recognition solutions as well.
With deepfakes gaining real-time capabilities (the deepfake attack against the Dutch parliament is a good example), and with fake videos for fraud and shaming proliferating while access to the technology keeps getting easier, the question is: What can we do about this problem? If seeing is believing but we can't trust what we see, how can we establish a common truth or reality?
Things get even more complicated when you consider the huge number of news and information outlets fighting for ratings and views. Given their business models, they may sometimes prioritize being first over being accurate.
Applying Zero Trust to Deepfakes
How do you mitigate such a threat? Perhaps we should borrow the fundamental concepts of zero trust: never trust, always verify, and assume breach. I have been using these concepts when evaluating videos I see in different online media; they are a condensed version of core critical-thinking practices such as challenging assumptions, suspending immediate judgment, and revising conclusions based on new data.
In the world of network security, assuming a breach means you must assume the attacker is already in your network. The attacker might have gotten in via a vulnerability that has since been patched but managed to establish persistence on the network. Maybe it is an insider threat, intentional or not. You need to assume malicious activity is being conducted covertly on your network.
How does this apply to deepfakes? I start with "assume breach." My assumption is that someone I know has already been exposed to fake videos or disinformation campaigns. It might not be a friend or family member directly, but perhaps a friend of theirs who read something on a forum they stumbled into, didn't bother to check the facts, and is now an organic amplifier. I also assume my closest circles are exposed, which leads me to "never trust, always verify." I always try to find at least two additional sources to confirm the information I encounter, especially videos and articles that support what I already think; a minimal sketch of that rule follows.
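To make the "never trust, always verify" habit concrete, here is a minimal, purely illustrative Python sketch. The Source structure, the is_credible function, and the two-source threshold are my own assumptions for illustration, not a tool or method described in this article:

```python
from dataclasses import dataclass

@dataclass
class Source:
    """A place where a claim appeared (hypothetical structure for illustration)."""
    name: str
    outlet: str  # publisher or platform, used to judge independence

def is_credible(claim: str, original: Source, corroborating: list[Source],
                min_independent: int = 2) -> bool:
    """Apply 'never trust, always verify': treat a claim as credible only when
    at least `min_independent` corroborating sources come from outlets other
    than the one that originally carried it."""
    independent = {s.outlet for s in corroborating if s.outlet != original.outlet}
    return len(independent) >= min_independent

# Example: a viral clip echoed by the same forum and by one other outlet
original = Source("viral-clip", "forum-x")
echoes = [Source("repost", "forum-x"), Source("article", "outlet-a")]
print(is_credible("politician said X", original, echoes))  # False: only 1 independent outlet
```

The point of the independence check is that two echoes from the same outlet count as a single source, which is exactly how organic amplification can masquerade as corroboration.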
Deepfakes are several steps ahead of the technology that can detect and warn us about them. Threat actors will almost always have the lead and the initiative. Applying the hard-learned lessons of cybersecurity to deepfakes will not stop the threat, but it can help us mitigate it and minimize the damage and exposure. Assume breach, and never trust, always verify!