9/18/2020 10:05 AM

Deepfake Detection Poses Problematic Technology Race

Experts hold out little hope for a robust technical solution in the long term.

With disinformation concerns increasing as the US presidential election approaches, industry and academic researchers continue to investigate ways of detecting misleading or fake content generated using deep neural networks, so-called "deepfakes."

While there have been successes (detectors keying on artifacts such as unnatural eye blinking, for example, have achieved high accuracy rates), a key problem in the arms race between attackers and defenders remains: The neural networks used to create deepfake videos can be automatically tested against a variety of techniques intended to detect manipulated media, and the latest defensive detection technologies can easily be added to that battery of tests. The feedback loop used to create deepfakes is similar in approach, if not in technology, to the fully undetectable (FUD) services that automatically scramble malware until it dodges signature-based detection.
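To make that feedback loop concrete, here is a minimal sketch, in PyTorch, of how an attacker might train a generator against a pool of published detectors. The models, shapes, and loop are toy placeholders for illustration, not any real deepfake system's code:

```python
import torch
import torch.nn as nn

# Toy "generator" that produces a fake frame from random noise.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64))

# Frozen stand-ins for published detectors; an attacker can drop the
# latest defensive models into this pool as they are released.
detectors = [nn.Linear(64 * 64, 1) for _ in range(3)]
for d in detectors:
    d.requires_grad_(False)

optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    fakes = generator(torch.randn(8, 100))
    # The generator is rewarded when every detector labels its output
    # "real" (target = 1), closing the loop described above: each new
    # public detector simply becomes another test to train against.
    loss = sum(bce(d(fakes), torch.ones(8, 1)) for d in detectors)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```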

Related Content:

The Rise of Deepfakes and What That Means for Identity Fraud

The Threat from the Internet—and What Your Organization Can Do About It

New on The Edge: Don't Fall for It! Defending Against Deepfakes

Detecting artifacts is ultimately a losing proposition, says Yisroel Mirsky, a post-doctoral fellow in cybersecurity at the Georgia Institute of Technology and co-author of a paper that surveyed the current state of deepfake creation and detection technologies.

"The defensive side is all doing the same thing," he says. "They are either looking for some sort of artifact that is specific to the deepfake generator or applying some generic classifier for some architecture or another. We need to look at solutions that are out of band."

The problem is well known among researchers. Take Microsoft's Sept. 1 announcement of a tool designed to help detect deepfake videos. The Microsoft Video Authenticator detects possible deepfakes by finding the boundary between inserted images and the original video, providing a score for the video as it plays.
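Conceptually, that kind of detector reduces to scoring every frame as the video plays. The loop below illustrates the shape of such a tool in Python with OpenCV; the scoring function is a crude, hypothetical stand-in (Microsoft has not published Video Authenticator's model), and the file name is illustrative:

```python
import cv2

def boundary_score(frame) -> float:
    # Hypothetical stand-in for a trained blending-boundary
    # classifier; a simple edge-energy heuristic is used here only
    # so the loop runs end to end.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return min(1.0, cv2.Laplacian(gray, cv2.CV_64F).var() / 1000.0)

cap = cv2.VideoCapture("clip.mp4")  # illustrative input file
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(f"frame {idx}: manipulation confidence {boundary_score(frame):.2f}")
    idx += 1
cap.release()
```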

While the technology is being released as a way to detect issues during the election cycle, Microsoft warned that disinformation groups will quickly adapt.

"The fact that [the images are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology," said Tom Burt, corporate vice president of customer security and trust, and Eric Horvitz, chief scientific officer, in a blog post describing the technology. "However, in the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes."

Microsoft is not alone in considering current deepfake detection technology a temporary fix. In its Deepfake Detection Challenge (DFDC), which concluded in early summer, Facebook found that the winning algorithm accurately detected fake videos only about two-thirds of the time.

"[T]he DFDC results also show that this is still very much an unsolved problem," the company said in its announcement. "None of the 2,114 participants, which included leading experts from around the globe, achieved 70 percent accuracy on unseen deepfakes in the black box data set." 

In fact, calling the competition between attackers and defenders an "arms race" is a bit of a misnomer: Advances in technology will likely make realistic fake videos that cannot be detected by technical means a reality in the not-too-distant future, says Alex Engler, the Rubenstein Fellow in governance studies at the Brookings Institution, a policy think tank.

"We have not see a dramatic improvement in deepfakes, and we haven't really a super-convincing deepfake video, but am I optimistic about the long-term view? Not really," he says. "They are going to get better. Eventually there will not be an empirical way to tell the difference between a deepfake and a legitimate video."

In a policy paper, Engler argued that policymakers will need to plan for a future in which deepfake technology is widespread and sophisticated.

On the technical side, deepfake detection will likely take two routes, much as anti-malware technology did. Some companies are creating ways of signing video as proof that it has not been modified. Microsoft, for example, unveiled a signing technology, along with a browser plug-in, that the company said can be used to verify the legitimacy of videos.

"In the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media," Burt and Hovitz wrote. "There are few tools today to help assure readers that the media they're seeing online came from a trusted source and that it wasn't altered." 

Another avenue of research is to look for other signs that a video has been modified. With machine-learning algorithms capable of distilling a video into content and metadata, from a transcription of any speech to the location where the video was taken, content-based detection algorithms could be a possibility, Georgia Tech's Mirsky says.

"Just like malware, if you have a technique that can look at the actual content, that is helpful," he says. "It is very important because it raises the bar for the attacker. They can mitigate 90% of attacks, but the issue is that an adversary like a nation-state actor who has plenty of time and effort to refine the deepfake, it becomes very, very challenging to detect these attacks."

 

Recommended Reading:

Comment  | 
Print  | 
More Insights
Comments
Newest First  |  Oldest First  |  Threaded View
COVID-19: Latest Security News & Commentary
Dark Reading Staff 10/23/2020
7 Tips for Choosing Security Metrics That Matter
Ericka Chickowski, Contributing Writer,  10/19/2020
Russian Military Officers Unmasked, Indicted for High-Profile Cyberattack Campaigns
Kelly Jackson Higgins, Executive Editor at Dark Reading,  10/19/2020
Register for Dark Reading Newsletters
White Papers
Video
Cartoon
Current Issue
Special Report: Computing's New Normal
This special report examines how IT security organizations have adapted to the "new normal" of computing and what the long-term effects will be. Read it and get a unique set of perspectives on issues ranging from new threats & vulnerabilities as a result of remote working to how enterprise security strategy will be affected long term.
Flash Poll
How IT Security Organizations are Attacking the Cybersecurity Problem
How IT Security Organizations are Attacking the Cybersecurity Problem
The COVID-19 pandemic turned the world -- and enterprise computing -- on end. Here's a look at how cybersecurity teams are retrenching their defense strategies, rebuilding their teams, and selecting new technologies to stop the oncoming rise of online attacks.
Twitter Feed
Dark Reading - Bug Report
Bug Report
Enterprise Vulnerabilities
From DHS/US-CERT's National Vulnerability Database
CVE-2020-24847
PUBLISHED: 2020-10-23
A Cross-Site Request Forgery (CSRF) vulnerability is identified in FruityWifi through 2.4. Due to a lack of CSRF protection in page_config_adv.php, an unauthenticated attacker can lure the victim to visit his website by social engineering or another attack vector. Due to this issue, an unauthenticat...
CVE-2020-24848
PUBLISHED: 2020-10-23
FruityWifi through 2.4 has an unsafe Sudo configuration [(ALL : ALL) NOPASSWD: ALL]. This allows an attacker to perform a system-level (root) local privilege escalation, allowing an attacker to gain complete persistent access to the local system.
CVE-2020-5990
PUBLISHED: 2020-10-23
NVIDIA GeForce Experience, all versions prior to 3.20.5.70, contains a vulnerability in the ShadowPlay component which may lead to local privilege escalation, code execution, denial of service or information disclosure.
CVE-2020-25483
PUBLISHED: 2020-10-23
An arbitrary command execution vulnerability exists in the fopen() function of file writes of UCMS v1.4.8, where an attacker can gain access to the server.
CVE-2020-5977
PUBLISHED: 2020-10-23
NVIDIA GeForce Experience, all versions prior to 3.20.5.70, contains a vulnerability in NVIDIA Web Helper NodeJS Web Server in which an uncontrolled search path is used to load a node module, which may lead to code execution, denial of service, escalation of privileges, and information disclosure.