Emerging technologies have been known to cause unwarranted mass hysteria. That said, and at risk of sounding hyperbolic, the concerns over deepfakes' potential effects are absolutely warranted. As the FBI's cyber division noted in its recent private industry notification, malicious actors have already begun to incorporate deepfake audio and video into their existing spear-phishing and social engineering campaigns. With deepfake technologies becoming more accessible and convincing every day, synthetic media will spread, potentially resulting in serious geopolitical consequences.
Current State of Deepfakes
Much like consumer photo and video editing software, deepfake technologies are neither inherently good nor bad, and they will eventually become mainstream. In fact, there are already a host of popular, ready-to-use applications, including FaceApp, FaceSwap, Avatarify, and Zao. Although many of these apps come with disclaimers, this synthetic content is completely protected under the First Amendment. That is, until the content is used to further illegal efforts, and of course, we are already seeing this happen. On Dark Web forums, deepfake communities share intelligence, offer deepfakes as a service (DaaS), and to a lesser extent, buy and sell content.
At the moment, deepfake audio is arguably more dangerous than deepfake video. Without visual cues to rely on, users have a difficult time recognizing synthetic audio, making this form of deepfake particularly effective from a social engineering standpoint. In March 2019, cybercriminals successfully conducted a deepfake audio attack, duping the CEO of a UK-based energy firm into transferring $243,000 to a Hungarian supplier. And last year in Philadelphia, a man was targeted by a similar audio-spoofing attack. These examples show that bad actors are actively using deepfake audio in the wild for monetary gain.
Nonetheless, fear of deepfake video attacks is outpacing the attacks themselves. Although it was initially reported that European politicians were victims of deepfake video calls, it turns out the calls were conducted by two Russian pranksters, one of whom bears a remarkable resemblance to Leonid Volkov, chief of staff for anti-Putin politician Alexei Navalny. Nevertheless, this geopolitical incident, and the reaction to it, shows just how fearful we've become of deepfake technologies. Headlines such as "Deepfake Attacks Are About to Surge" and "Deepfake Satellite Images Pose Serious Military and Political Challenges" are becoming increasingly common. The fear may be running ahead of the attacks, but that doesn't mean the concern is unwarranted.
Some of the most celebrated deepfakes still take a great deal of effort and a high level of sophistication. The viral Tom Cruise deepfake was a collaboration between Belgian visual effects specialist Chris Ume and actor Miles Fisher. Although Ume used DeepFaceLab, the open source deepfake platform responsible for 95% of deepfakes currently created, he cautions people that this video was not easy to make. Ume trained his AI-based model for months, then combined its output with Fisher's performance of Cruise's mannerisms and CGI tools.
Seeing as deepfakes are going to be used as an extension of existing spear-phishing and social engineering campaigns, it's vital to keep employees vigilant and cognizant of such attacks. It's important to have a healthy skepticism of media content, especially if the source of the media is questionable.
It's important to look for different tells, including overly consistent eye spacing; syncing issues between a subject's lips and face; and, according to the FBI, visual distortions around the subject's pupils and earlobes. Lastly, blurry backgrounds, or blurry portions of a background, are a red flag. As a caveat, these tells are constantly changing. When deepfakes first circulated, unnatural breathing patterns and a conspicuous lack of blinking were the most common signs. However, the technology subsequently improved, making these tells obsolete.
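To make one of these tells concrete, blink detection is often automated with the eye aspect ratio (EAR), a standard heuristic from facial-landmark analysis (not described in this article, and shown here only as an illustrative sketch). It assumes six eye landmarks per frame have already been extracted by a face-landmark library such as dlib or MediaPipe; frames where the EAR stays suspiciously high for long stretches suggest a subject who never blinks.

```python
from math import dist

def eye_aspect_ratio(pts):
    """Compute the eye aspect ratio from six (x, y) eye landmarks,
    ordered: left corner, two upper-lid points, right corner,
    two lower-lid points (the classic 6-point layout)."""
    # vertical distances between upper- and lower-lid landmarks
    v1 = dist(pts[1], pts[5])
    v2 = dist(pts[2], pts[4])
    # horizontal distance between the eye corners
    h = dist(pts[0], pts[3])
    return (v1 + v2) / (2.0 * h)

def closed_eye_fraction(ear_series, threshold=0.2):
    """Fraction of frames in which the eye appears closed.
    A value near zero across a long clip means the subject
    essentially never blinks, an early deepfake tell."""
    closed = sum(1 for ear in ear_series if ear < threshold)
    return closed / len(ear_series)
```

An open eye typically yields an EAR around 0.25-0.35, while a closed eye drops well below 0.2; the threshold here is a common rule-of-thumb value, not a calibrated constant.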
What's In Store
We have seen some deepfake detection initiatives from big tech, namely Microsoft's video authentication tool and Facebook's deepfake detection challenge; however, a lot of promising work is being done in academia. In 2019, scholars noted that discrepancies between head movements and facial expressions could be used to identify deepfakes.
More recently, scholars have focused on mouth shapes failing to match the proper sounds, and perhaps most groundbreaking, a recent project has zeroed in on generator signals. This proposed approach not only separates authentic videos from deepfakes, but it also attempts to identify the specific generative models behind fake videos.
In real time, we're seeing a back-and-forth between those using generative adversarial networks (GANs) for good and those using them to do harm. In February, researchers found that systems designed to identify deepfakes can be tricked. Thus, not to belabor the point, but concerns over deepfakes are well-founded.
Protect Yourself and Your Company
As is the case with any emerging technology, regulatory and legal systems move more slowly than the technology itself. Like Photoshop before them, deepfake tools will eventually become mainstream. In the short term, the onus is on all of us to remain vigilant and cognizant of deepfake-powered social engineering attacks.
In the longer term, regulatory agencies will have to intervene. A few states — California, Texas, and Virginia — have already passed criminal legislation against certain types of deepfakes, and social media companies have engaged in self-regulation as well.
In January 2020, Facebook issued a manipulated media policy, and the following month, Twitter and YouTube followed suit with policies of their own. That said, these companies don't have the best track records when it comes to self-regulation. Until deepfake detection tools become mainstream and federal cybersecurity laws are enacted, it's wise to maintain a healthy skepticism of certain media, especially if the media source is suspicious, or if that phone call request doesn't sound quite right.

John Donegan is an enterprise analyst at ManageEngine. He covers infosec and cybersecurity, addressing technology-related issues and their impact on business. John holds several degrees, including a B.A. from New York University, an M.B.A. from Pepperdine University, and an M.A. ...