Dark Reading is part of the Informa Tech Division of Informa PLC


Analytics

9/18/2020
10:05 AM

Deepfake Detection Poses Problematic Technology Race

Experts hold out little hope for a robust technical solution in the long term.

With disinformation concerns increasing as the US presidential election approaches, industry and academic researchers continue to investigate ways of detecting misleading or fake content generated using deep neural networks, so-called "deepfakes."

While there have been successes (focusing on artifacts such as unnatural eye blinking, for example, has yielded high accuracy rates), a key problem remains in the arms race between attackers and defenders: the neural networks used to create deepfake videos are automatically tested against a variety of techniques for detecting manipulated media, so the latest defensive detection technologies can simply be folded into that testing battery. The feedback loop used to create deepfakes is similar in approach, if not in technology, to the fully undetectable (FUD) services that automatically scramble malware to dodge signature-based detection.
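That feedback loop can be sketched in miniature: a toy "generator" keeps refining its output until it slips past every detector in a test battery, which is exactly why publishing a new detector tends to have a short shelf life. Everything below is a hypothetical stand-in, not a real deepfake pipeline.

```python
# Toy sketch of the attacker's feedback loop: the generator refines its
# output until every detector in the test battery is fooled. The detectors,
# thresholds, and "refine" step are all hypothetical stand-ins.

def detector_blink(artifact_score):
    # Stand-in for an eye-blink artifact detector.
    return artifact_score > 0.5

def detector_boundary(artifact_score):
    # Stand-in for a blending-boundary detector.
    return artifact_score > 0.3

DETECTOR_BATTERY = [detector_blink, detector_boundary]

def refine(artifact_score):
    # Stand-in for one round of generator fine-tuning against the battery.
    return artifact_score * 0.8

def evade(artifact_score, max_rounds=50):
    """Refine until no detector in the battery fires, or give up."""
    for round_no in range(max_rounds):
        if not any(d(artifact_score) for d in DETECTOR_BATTERY):
            return artifact_score, round_no
        artifact_score = refine(artifact_score)
    return artifact_score, max_rounds

score, rounds = evade(1.0)
print(score, rounds)  # residual artifact level is now below every threshold
```

Adding a new detector to `DETECTOR_BATTERY` only changes how many refinement rounds the attacker needs, which is the structural problem the researchers describe.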

Related Content:

The Rise of Deepfakes and What That Means for Identity Fraud

The Threat from the Internet—and What Your Organization Can Do About It

New on The Edge: Don't Fall for It! Defending Against Deepfakes

Detecting artifacts is ultimately a losing proposition, says Yisroel Mirsky, a post-doctoral fellow in cybersecurity at the Georgia Institute of Technology and co-author of a paper that surveyed the current state of deepfake creation and detection technologies.

"The defensive side is all doing the same thing," he says. "They are either looking for some sort of artifact that is specific to the deepfake generator or applying some generic classifier for some architecture or another. We need to look at solutions that are out of band."

The problem is well known among researchers. Take Microsoft's Sept. 1 announcement of a tool designed to help detect deepfake videos. The Microsoft Video Authenticator detects possible deepfakes by finding the boundary between inserted images and the original video, providing a score for the video as it plays.
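Microsoft has not published the Video Authenticator's internals, so the sketch below only illustrates the general shape of "a score for the video as it plays": noisy per-frame manipulation probabilities from some frame-level detector (assumed here) smoothed into a running confidence trace a viewer could watch.

```python
from collections import deque

def rolling_confidence(frame_scores, window=5):
    """Smooth noisy per-frame manipulation scores (floats in [0, 1], from
    some hypothetical frame-level detector) into a running confidence value,
    the kind of signal shown alongside a playing video."""
    recent = deque(maxlen=window)
    for score in frame_scores:
        recent.append(score)
        yield sum(recent) / len(recent)

# A burst of high scores mid-clip shows up as a clear peak in the trace.
scores = [0.1, 0.1, 0.9, 0.95, 0.9, 0.1, 0.1]
trace = list(rolling_confidence(scores))
print([round(v, 2) for v in trace])
```

Smoothing trades responsiveness for stability: a single misclassified frame barely moves the trace, while a sustained spliced-in segment stands out.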

While the technology is being released as a way to detect issues during the election cycle, Microsoft warned that disinformation groups will quickly adapt.

"The fact that [the images are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology," said Tom Burt, corporate vice president of customer security and trust, and Eric Horvitz, chief scientific officer, in a blog post describing the technology. "However, in the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes."

Microsoft is not alone in considering current deepfake detection technology a temporary fix. When its Deepfake Detection Challenge (DFDC) concluded in early summer, Facebook found that the winning algorithm accurately detected fake videos only about two-thirds of the time.

"[T]he DFDC results also show that this is still very much an unsolved problem," the company said in its announcement. "None of the 2,114 participants, which included leading experts from around the globe, achieved 70 percent accuracy on unseen deepfakes in the black box data set." 

In fact, calling the competition between attackers and defenders an "arms race" is a bit of a misnomer: advances in generation technology will likely mean that realistic fake videos undetectable by technical means become a reality in the not-too-distant future, says Alex Engler, the Rubenstein Fellow in governance studies at the Brookings Institution, a policy think tank.

"We have not see a dramatic improvement in deepfakes, and we haven't really a super-convincing deepfake video, but am I optimistic about the long-term view? Not really," he says. "They are going to get better. Eventually there will not be an empirical way to tell the difference between a deepfake and a legitimate video."

In a policy paper, Engler argued that policy-makers will need to plan for the future when deepfake technology is widespread and sophisticated.

On the technical side, like the anti-malware industry, there are two likely routes that deepfake detection will take. Some companies are creating ways of signing video as proof that it has not been modified. Microsoft, for example, unveiled a signing technology with a browser plug-in that the company said can be used to verify the legitimacy of videos.  
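Microsoft has not detailed its signing scheme, but the core idea of provenance is standard cryptography: the publisher signs a digest of the video bytes, and any later modification breaks verification. Real deployments use public-key signatures over signed metadata manifests; the sketch below substitutes an HMAC with a shared secret (hypothetical key) to keep it self-contained.

```python
import hashlib
import hmac

# Sketch of provenance-style signing. Real systems use public-key
# signatures so anyone can verify without the signing secret; an HMAC
# with a hypothetical shared key stands in here.

SECRET = b"publisher-signing-key"  # hypothetical, for illustration only

def sign_video(video_bytes):
    """Publisher side: sign a SHA-256 digest of the video bytes."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify_video(video_bytes, signature):
    """Viewer side: recompute and compare in constant time."""
    return hmac.compare_digest(sign_video(video_bytes), signature)

original = b"\x00\x01frame-data..."
tag = sign_video(original)
print(verify_video(original, tag))                # True: untouched
print(verify_video(original + b"tampered", tag))  # False: any edit breaks it
```

Note what this does and does not buy: it proves a video is unmodified since signing, but says nothing about whether the signed content was authentic in the first place.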

"In the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media," Burt and Hovitz wrote. "There are few tools today to help assure readers that the media they're seeing online came from a trusted source and that it wasn't altered." 

Another avenue of research is to look for other signs that a video has been modified. With machine-learning algorithms capable of turning videos into a series of content and metadata — from a transcription of any speech in the video to the location of where the video was taken — creating content-based detection algorithms could be a possibility, Georgia Tech's Mirsky says. 
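One way to read that out-of-band idea: instead of hunting for pixel artifacts, compare independently extracted signals against the video's claims about itself. The extraction steps (speech-to-text, geolocation) are assumed to have already run in this sketch; the field names and rules are illustrative.

```python
# Sketch of content-based checking: cross-check independently extracted
# signals (transcript, inferred location) against what the video claims.
# The extraction pipeline is hypothetical; its outputs are plain fields here.

def consistency_flags(video_facts):
    """Return a list of human-readable inconsistencies found in the clip."""
    flags = []
    claimed = video_facts.get("claimed_location")
    inferred = video_facts.get("gps_location")
    if claimed and inferred and claimed != inferred:
        flags.append("location mismatch")
    transcript = video_facts.get("transcript", "")
    for claim in video_facts.get("claimed_statements", []):
        if claim not in transcript:
            flags.append(f"statement not in transcript: {claim!r}")
    return flags

facts = {
    "claimed_location": "Washington, DC",
    "gps_location": "Washington, DC",
    "transcript": "today we are announcing a new policy",
    "claimed_statements": ["a new policy", "resigning immediately"],
}
print(consistency_flags(facts))  # flags only the fabricated statement
```

The appeal is exactly what Mirsky describes: evading a content-level check requires the fake to be consistent with external reality, not just free of visual artifacts, which raises the attacker's cost.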

"Just like malware, if you have a technique that can look at the actual content, that is helpful," he says. "It is very important because it raises the bar for the attacker. They can mitigate 90% of attacks, but the issue is that an adversary like a nation-state actor who has plenty of time and effort to refine the deepfake, it becomes very, very challenging to detect these attacks."

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism.