
Application Security

Commentary
John Donegan
6/10/2021, 01:00 PM

Deepfakes Are on the Rise, but Don't Panic Just Yet

Deepfakes will likely give way to deep suspicion, as users try to sort legitimate media from malicious.

Emerging technologies have been known to cause unwarranted mass hysteria. That said, and at the risk of sounding hyperbolic, the concerns over deepfakes' potential effects are absolutely warranted. As the FBI's cyber division noted in its recent private industry notification, malicious actors have already begun to incorporate deepfake audio and video into their existing spear-phishing and social engineering campaigns. With deepfake technologies becoming more accessible and convincing every day, synthetic media will spread, potentially resulting in serious geopolitical consequences.

Current State of Deepfakes
Much like consumer photo and video editing software, deepfake technologies are neither inherently good nor bad, and they will eventually become mainstream. In fact, there are already a host of popular, ready-to-use applications, including FaceApp, FaceSwap, Avatarify, and Zao. Although many of these apps come with disclaimers, this synthetic content is fully protected under the First Amendment, at least until it is used to further illegal efforts, and we are already seeing that happen. On Dark Web forums, deepfake communities share intelligence, offer deepfakes as a service (DaaS), and, to a lesser extent, buy and sell content.

At the moment, deepfake audio is arguably more dangerous than deepfake video. Without visual cues to rely on, users have a difficult time recognizing synthetic audio, making this form of deepfake particularly effective from a social engineering standpoint. In March 2019, cybercriminals successfully conducted a deepfake audio attack, duping the CEO of a UK-based energy firm into transferring $243,000 to a Hungarian supplier. And last year in Philadelphia, a man was targeted by an audio-spoofing attack. These examples show that bad actors are actively using deepfake audio in the wild for monetary gain.

Nonetheless, fear of deepfake video attacks is outpacing actual attacks. Although it was initially reported that European politicians were victims of deepfake video calls, the calls turned out to be the work of two Russian pranksters, one of whom bears a remarkable resemblance to Leonid Volkov, chief of staff for anti-Putin politician Alexei Navalny. Still, the incident, and the reaction to it, shows just how fearful we've become of deepfake technologies. Headlines such as "Deepfake Attacks Are About to Surge" and "Deepfake Satellite Images Pose Serious Military and Political Challenges" are becoming increasingly common. The fear may be running ahead of the attacks themselves, but that doesn't mean the concern is unwarranted.

Some of the most celebrated deepfakes still take a great deal of effort and a high level of sophistication. The viral Tom Cruise deepfake was a collaboration between Belgian visual effects specialist Chris Ume and actor Miles Fisher. Although Ume used DeepFaceLab, the open source deepfake platform responsible for 95% of deepfakes currently created, he cautions that the video was not easy to make: he trained his AI-based model for months, then combined its output with Fisher's mannerisms and CGI tools.

Credit: freshidea via Adobe Stock

Because deepfakes are going to be used as an extension of existing spear-phishing and social engineering campaigns, it's vital to keep employees vigilant and cognizant of such attacks, and to maintain a healthy skepticism of media content, especially when the source of that media is questionable.

Look for telltale signs, including overly consistent eye spacing; syncing issues between a subject's lips and the rest of the face; and, according to the FBI, visual distortions around the subject's pupils and earlobes. Blurry backgrounds, or blurry portions of a background, are another red flag. As a caveat, these tells are constantly changing: when deepfakes first circulated, irregular breathing patterns and unnatural blinking were the most common signs, but the technology has since improved, making those tells obsolete.
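To make one of these tells concrete, here is a minimal Python sketch, assuming OpenCV is installed; the function names, file name, and thresholds are my own and purely illustrative, not a production detector. It flags video frames where the sharpness of the detected face differs markedly from the rest of the frame, using variance of the Laplacian as a crude blur measure.

# Illustrative sketch only: flag frames where the face region's sharpness differs
# markedly from the rest of the frame -- one crude version of the "blur" tell above.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sharpness(gray_region):
    # Variance of the Laplacian: low values indicate blur.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def mismatched_frames(video_path, low=0.5, high=2.0):
    """Yield indices of frames whose face/background sharpness ratio looks odd."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame_sharp = sharpness(gray)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face_sharp = sharpness(gray[y:y + h, x:x + w])
            ratio = face_sharp / frame_sharp if frame_sharp else 0.0
            if ratio < low or ratio > high:  # heuristic threshold, not a verdict
                yield idx
                break
        idx += 1
    cap.release()

# Hypothetical usage:
# print(list(mismatched_frames("suspect_clip.mp4")))

A heuristic like this only surfaces frames worth a second look; it cannot, on its own, establish that a video is synthetic.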

What's In Store
We have seen some deepfake detection initiatives from big tech, namely Microsoft's Video Authenticator tool and Facebook's Deepfake Detection Challenge; however, a lot of promising work is also being done in academia. In 2019, scholars noted that discrepancies between head movements and facial expressions could be used to identify deepfakes.

More recently, scholars have focused on mouth shapes that fail to match the corresponding sounds, and, perhaps most groundbreaking, one recent project has zeroed in on generator signals. This proposed approach not only separates authentic videos from deepfakes but also attempts to identify the specific generative model behind a fake video.
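To give a rough sense of the generator-signal idea: GAN upsampling tends to leave periodic artifacts in an image's frequency spectrum. The Python sketch below is a simplified illustration under that assumption (the function and parameter names are my own); it computes an azimuthally averaged power spectrum that a downstream classifier could be trained on. The research systems described above are considerably more involved.

# Hypothetical sketch of the intuition behind "generator signal" detection:
# compute a simple 1D spectral profile of an image that a classifier could use.
import numpy as np

def spectral_profile(gray_image: np.ndarray, bins: int = 64) -> np.ndarray:
    """Azimuthally averaged power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(gray_image.astype(np.float64)))
    power = np.abs(f) ** 2
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w / 2, y - h / 2)
    r_norm = r / r.max()
    profile = np.zeros(bins)
    for i in range(bins):
        mask = (r_norm >= i / bins) & (r_norm < (i + 1) / bins)
        profile[i] = power[mask].mean() if mask.any() else 0.0
    # Log scale keeps the dynamic range manageable for a downstream classifier.
    return np.log1p(profile)

# A real pipeline would extract this (or much richer) features from many authentic
# and synthetic images, then fit a classifier on them to separate the two.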

We're seeing a real-time back-and-forth between those using generative adversarial networks for good and those using them to do harm. In February, researchers found that systems designed to identify deepfakes can themselves be tricked. Thus, not to belabor the point, but concerns over deepfakes are well-founded.
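To see why that matters, consider a toy, entirely hypothetical "detector": a logistic regression over pixel values. Because it is differentiable, a small, targeted perturbation in the direction of its gradient (the idea behind FGSM-style attacks) can flip its decision, as the sketch below shows. Real detectors and real attacks are far more complex; the numbers here are contrived purely for illustration.

# Toy illustration of fooling a differentiable "deepfake detector" with a small,
# targeted perturbation (the idea behind FGSM-style attacks). Entirely contrived.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=256)        # weights of a toy detector over a 256-pixel patch
x = 0.02 * w                    # a synthetic patch the detector scores as "fake"

def fake_score(patch):
    return 1.0 / (1.0 + np.exp(-(w @ patch)))   # probability the patch is fake

eps = 0.05                      # per-pixel perturbation budget
x_adv = x - eps * np.sign(w)    # nudge each pixel against the detector's gradient

print(f"fake score before perturbation: {fake_score(x):.3f}")
print(f"fake score after perturbation:  {fake_score(x_adv):.3f}")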

Protect Yourself and Your Company
As with any emerging technology, regulatory and legal systems cannot move as quickly as the technology itself. Like Photoshop before them, deepfake tools will eventually become mainstream. In the short term, the onus is on all of us to remain vigilant and cognizant of deepfake-powered social engineering attacks.

In the longer term, regulatory agencies will have to intervene. A few states — California, Texas, and Virginia — have already passed criminal legislation against certain types of deepfakes, and social media companies have engaged in self-regulation as well.

In January 2020, Facebook issued a manipulated media policy, and the following month, Twitter and YouTube followed suit with policies of their own. That said, these companies don't have the best track records when it comes to self-regulation. Until deepfake detection tools become mainstream and federal cybersecurity laws are enacted, it's wise to maintain a healthy skepticism of certain media, especially if the media source is suspicious, or if that phone call request doesn't sound quite right.

John Donegan is an enterprise analyst at ManageEngine. He covers infosec and cybersecurity, addressing technology-related issues and their impact on business. John holds several degrees, including a B.A. from New York University, an M.B.A. from Pepperdine University, an M.A. ...