Dark Reading is part of the Informa Tech Division of Informa PLC

Application Security

John Donegan

Deepfakes Are on the Rise, but Don't Panic Just Yet

Deepfakes will likely give way to deep suspicion, as users try to sort legitimate media from malicious.

Emerging technologies have been known to cause unwarranted mass hysteria. That said, and at the risk of sounding hyperbolic, the concerns over deepfakes' potential effects are absolutely warranted. As the FBI's cyber division noted in its recent private industry notification, malicious actors have already begun to incorporate deepfake audio and video into their existing spear-phishing and social engineering campaigns. With deepfake technologies becoming more accessible and convincing every day, synthetic media will spread, potentially resulting in serious geopolitical consequences.

Current State of Deepfakes
Much like consumer photo and video editing software, deepfake technologies are neither inherently good nor bad, and they will eventually become mainstream. In fact, there is already a host of popular, ready-to-use applications, including FaceApp, FaceSwap, Avatarify, and Zao. Although many of these apps come with disclaimers, this synthetic content is protected under the First Amendment. That is, until the content is used to further illegal efforts, and of course, we are already seeing this happen. On Dark Web forums, deepfake communities share intelligence, offer deepfakes as a service (DaaS), and, to a lesser extent, buy and sell content.

At the moment, deepfake audio is arguably more dangerous than deepfake video. Without visual cues to rely on, users have a difficult time recognizing synthetic audio, making this form of deepfake particularly effective from a social engineering standpoint. In March 2019, cybercriminals successfully conducted a deepfake audio attack, duping the CEO of a UK-based energy firm into transferring $243,000 to a Hungarian supplier. And last year in Philadelphia, a man was targeted by an audio-spoofing attack. These examples show that bad actors are actively using deepfake audio in the wild for monetary gain.

Nonetheless, fear of deepfake video attacks is outpacing actual attacks. Although it was initially reported that European politicians were victims of deepfake video calls, the calls turned out to be the work of two Russian pranksters, one of whom bears a remarkable resemblance to Leonid Volkov, chief of staff for anti-Putin politician Alexei Navalny. Nevertheless, this geopolitical incident, and the reaction to it, shows just how fearful we've become of deepfake technologies. Headlines such as "Deepfake Attacks Are About to Surge" and "Deepfake Satellite Images Pose Serious Military and Political Challenges" are becoming increasingly common. Still, even if the fear currently outpaces the attacks, the concern is not unwarranted.

Some of the most celebrated deepfakes still take a great deal of effort and a high level of sophistication. The viral Tom Cruise deepfake was a collaboration between Belgian visual effects specialist Chris Ume and actor Miles Fisher. Although Ume used DeepFaceLab, the open source deepfake platform responsible for 95% of deepfakes currently created, he cautions that the videos were not easy to make. Ume trained his AI-based model for months, then combined the model's output with CGI tools and Fisher's impersonation of Cruise's mannerisms.

Credit: freshidea via Adobe Stock

Seeing as deepfakes are going to be used as an extension of existing spear-phishing and social engineering campaigns, it's vital to keep employees vigilant and cognizant of such attacks. It's important to have a healthy skepticism of media content, especially if the source of the media is questionable.

It's important to look for different tells, including overly consistent eye spacing; syncing issues between a subject's lip movements and their speech; and, according to the FBI, visual distortions around the subject's pupils and earlobes. Lastly, blurry backgrounds, or blurry portions of a background, are a red flag. As a caveat, these tells are constantly changing. When deepfakes first circulated, unnatural breathing patterns and a lack of blinking were the most common signs. However, the technology subsequently improved, making those tells obsolete.
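Some of these tells lend themselves to crude automation. As a purely illustrative sketch (not how the FBI or any detection vendor actually works), the "blurry region" red flag can be approximated with a variance-of-Laplacian measure: sharp image regions produce high-variance Laplacian responses, while smoothed or synthesized regions do not. The patches below are invented toy data; a real check would run on face crops from video frames, typically via an image library such as OpenCV.

```python
# Toy sketch of the variance-of-Laplacian blur heuristic.
# A real detector would apply this to face-crop regions of video
# frames; here we use two small, synthetic grayscale patches.

def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian over a 2D grayscale patch.

    Low values suggest a blurry, low-detail region, one rough tell
    for manipulated areas in a frame.
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# Hypothetical patches: a high-contrast checkerboard vs. a flat gradient.
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
blurry = [[10 * (x + y) for x in range(8)] for y in range(8)]

print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

In practice, any blur threshold would have to be recalibrated per resolution, codec, and generator version, which is one reason these tells keep shifting as generation quality improves.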

What's In Store
We have seen some deepfake detection initiatives from big tech, namely Microsoft's Video Authenticator tool and Facebook's Deepfake Detection Challenge; however, a lot of promising work is being done in academia. In 2019, scholars noted that discrepancies between head movements and facial expressions could be used to identify deepfakes.

More recently, scholars have focused on mouth shapes failing to match the proper sounds, and perhaps most groundbreaking, a recent project has zeroed in on generator signals. This proposed approach not only separates authentic videos from deepfakes, but it also attempts to identify the specific generative models behind fake videos.
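The generator-attribution idea can be caricatured in a few lines of code: extract a high-frequency residual from an image and compare it against stored "fingerprints" of known generative models. Real systems learn these fingerprints with deep networks; the filter, the fingerprints, the generator names, and the input below are all invented for illustration.

```python
# Toy caricature of generator-fingerprint attribution: compare an
# image's high-pass residual against known per-generator fingerprints.
# All fingerprints, generator names, and the input are made up.

def residual(signal):
    """Horizontal high-pass residual: each sample minus its left neighbour."""
    return [signal[i] - signal[i - 1] for i in range(1, len(signal))]

def correlation(a, b):
    """Unnormalised dot product as a crude similarity score."""
    return sum(x * y for x, y in zip(a, b))

# Hypothetical fingerprints for two imaginary generator families.
fingerprints = {
    "gen_A": [1, -1, 1, -1, 1, -1, 1],
    "gen_B": [0, 1, 0, -1, 0, 1, 0],
}

# A fake "scanline" whose residual resembles gen_A's fingerprint.
frame = [0, 1, 0, 1, 0, 1, 0, 1]

r = residual(frame)  # [1, -1, 1, -1, 1, -1, 1]
best = max(fingerprints, key=lambda g: correlation(r, fingerprints[g]))
print(best)  # gen_A
```

The real research operates on learned 2D residuals rather than hand-picked vectors, but the shape of the problem is the same: attribution, not just detection.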

In real time, we're seeing a back and forth between those using generative adversarial networks for good and those using them to do harm. In February, researchers found that systems designed to identify deepfakes can be tricked. Thus, not to belabor the point, but concerns over deepfakes are well-founded.

Protect Yourself and Your Company
As is the case with any emerging technology, regulatory and legal systems cannot move as quickly as the technology itself. Like Photoshop before them, deepfake tools will eventually become mainstream. In the short term, the onus is on all of us to remain vigilant and cognizant of deepfake-powered social engineering attacks.

In the longer term, regulatory agencies will have to intervene. A few states — California, Texas, and Virginia — have already passed criminal legislation against certain types of deepfakes, and social media companies have engaged in self-regulation as well.

In January 2020, Facebook issued a manipulated media policy, and the following month, Twitter and YouTube followed suit with policies of their own. That said, these companies don't have the best track records when it comes to self-regulation. Until deepfake detection tools become mainstream and federal cybersecurity laws are enacted, it's wise to maintain a healthy skepticism of certain media, especially if the media source is suspicious, or if that phone call request doesn't sound quite right.

John Donegan is an enterprise analyst at ManageEngine. He covers infosec and cybersecurity, addressing technology-related issues and their impact on business. John holds several degrees, including a B.A. from New York University, an M.B.A. from Pepperdine University, and an M.A. ...
