
Operations

Commentary | 7/23/2020, 10:00 AM
Matt Lewis

Deepfakes & James Bond Research Project: Cool but Dangerous

Open source software for creating deepfakes is getting better and better, to the chagrin of researchers

In January 2020, NCC Group collaborated with University College London (UCL) students on the topic of cybersecurity implications of deepfakes. As part of our wider research into artificial intelligence (AI) and machine learning, we continue to explore the potential impact of deepfakes in a cybersecurity context, particularly around their use in nefarious activities. There have already been numerous stories of real-world fraudsters using AI to mimic CEO voices in cybercriminal activities, and we believe it's only a matter of time before we see similar, visual-based attempts using deepfake frameworks. And remember, many of those frameworks are open source and freely available for experimentation.

Project & Challenge
Our brief to the students (who are part of UCL's Centre for Doctoral Training in Data Intensive Science) was to explore common open source deepfake frameworks and broadly assess them in terms of ease of use and quality of faked outputs. This first part of the research was to help us understand how accessible these frameworks are to potential fraudsters, and the computational resources and execution times needed to produce realistic outputs. We examined two in particular, FaceSwap and DeepFaceLab, and one open source speech-driven facial synthesis model.
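Both FaceSwap and DeepFaceLab are built on the same underlying idea: a shared encoder learns facial features from both people, while a separate decoder per person learns to reconstruct that person's face; the swap happens by decoding person A's encoding with person B's decoder. The following is a toy linear sketch of that data flow in plain NumPy — the dimensions are arbitrary and the weights are random placeholders, not trained models:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 16, 4  # toy face-vector and latent-space sizes

# One shared encoder; one decoder per identity. In the real frameworks
# these are deep convolutional networks trained for many hours.
encoder = rng.standard_normal((LATENT, DIM))
decoder_a = rng.standard_normal((DIM, LATENT))
decoder_b = rng.standard_normal((DIM, LATENT))

def swap_face(face_a: np.ndarray) -> np.ndarray:
    """Encode person A's face, then decode it with person B's decoder:
    the output keeps A's pose/expression but renders B's identity."""
    latent = encoder @ face_a   # shared representation of pose/expression
    return decoder_b @ latent   # re-rendered as person B

face_a = rng.standard_normal(DIM)
fake = swap_face(face_a)
```

The shared encoder is why attackers need large image sets of *both* faces: the latent space has to cover the full range of expressions and angles for the decoders to reconstruct them.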

We also asked them to help us explore the practicalities — specifically, how realistic fake videos can be achieved. The challenge was to take a three-minute clip from a movie (Casino Royale) and replace the face of the lead character (Daniel Craig playing James Bond) with my face. This helped us understand logistical aspects around source and destination video qualities, lighting conditions, angles, and facial expressions of source and target imagery. We also got a better understanding of not only the technical details but also the procedural and physical aspects.

On the procedural front, we learned that when trying to create realistic deepfakes, the quality (resolution) of the source and destination image sets must match closely. We lost some realism in the output because our HD source footage didn't match the softer, cinematic look of the target video. Lighting conditions also matter, and source and target faces should be similar in shape; our source image had to be slightly stretched to match that of the James Bond character.
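The stretching step above amounts to resampling the source face crop to the target face's aspect ratio. A minimal sketch in plain NumPy (nearest-neighbor resampling; the face array and target dimensions are illustrative, and real pipelines use proper landmark-based alignment):

```python
import numpy as np

def stretch_to_match(src_face: np.ndarray, target_h: int, target_w: int) -> np.ndarray:
    """Nearest-neighbor resample of an (H, W, C) face crop so its
    aspect ratio matches the target face."""
    src_h, src_w = src_face.shape[:2]
    # Map each target pixel back to the nearest source pixel.
    rows = (np.arange(target_h) * src_h / target_h).astype(int)
    cols = (np.arange(target_w) * src_w / target_w).astype(int)
    return src_face[rows][:, cols]

# Toy example: a narrow 4x2 "face" stretched to 4x4 to fit a wider target.
face = np.arange(4 * 2 * 3).reshape(4, 2, 3)
stretched = stretch_to_match(face, 4, 4)
```

Any such warp distorts the source features slightly, which is one reason shape-mismatched face pairs lose realism.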

Everyday objects also presented difficulties: something as simple as wearing glasses makes a face noticeably harder to fake convincingly, and could even serve as a low-cost way to frustrate deepfake impersonation attempts.

Procedurally, we also learned that it's harder to produce realistic deepfakes when the source imagery doesn't cover the same range of mouth shapes and movements (during dialogue) and eye and eyebrow movements, such as blinking or raised brows. Attackers seeking to create realistic deepfakes therefore need a rich source dataset of each individual's face across different expressions and angles.

What We Learned
Our research was designed to help us better understand technical risk mitigation strategies and/or policies, regulation, and legislation that might be needed to curb potential abuse of deepfake technology. Here's what we found:

  • There are many open source frameworks already available for creating deepfakes.
  • Many models are optimized for high-end PCs or HPCs, and require lengthy training.
  • The frameworks are easy to pick up but harder to master.
  • There is plenty of scope for human error, which results in unrealistic videos.

Many procedural factors impede the creation of convincing deepfakes: lighting, camera angles, and the need for source and destination faces of similar size and shape.

In terms of prevention, our research did identify a few existing techniques that offer varying degrees of deepfake detection. These largely rely on spotting imperfections in the generated output, which means that as the generative models improve, these defensive measures will become less effective.
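One widely cited imperfection of this kind is unnatural blinking: early deepfake models blinked rarely because training sets contained few closed-eye frames. A minimal sketch of the eye-aspect-ratio (EAR) heuristic often used to count blinks — the landmark coordinates and threshold below are illustrative assumptions, and a real detector would take landmarks from a face-tracking library:

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six (x, y) eye landmarks: ratio of the two vertical
    lid distances to the horizontal eye width. The value drops
    sharply when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ears, threshold=0.2):
    """Count downward crossings of the EAR threshold across frames."""
    blinks, closed = 0, False
    for ear in ears:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# Toy landmark sets: an open eye and a nearly shut one.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], dtype=float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], dtype=float)
ears = [eye_aspect_ratio(e) for e in [open_eye, open_eye, closed_eye, open_eye]]
```

An abnormally low blink count over a clip is the tell — but a model trained on footage that includes blinks defeats exactly this check, which is the arms-race problem described above.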

Preventive mechanisms pose an even bigger challenge: They require either the introduction of watermarking (which brings its own limitations) or the establishment of a root of trust at the point of original content creation. Both would be difficult to engineer and deploy at scale.
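A root of trust at the point of capture would mean the recording device tags each frame (or file) so any later manipulation is detectable. A minimal sketch using only Python's standard library — HMAC with a shared key stands in here for the per-device keypair and certificate chain a real scheme would need:

```python
import hashlib
import hmac

DEVICE_KEY = b"secret-burned-into-the-camera"  # hypothetical device secret

def sign_frame(frame_bytes: bytes) -> bytes:
    """Camera side: tag the frame at the moment of capture."""
    return hmac.new(DEVICE_KEY, frame_bytes, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, tag: bytes) -> bool:
    """Viewer side: reject any frame whose tag no longer matches."""
    expected = hmac.new(DEVICE_KEY, frame_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

original = b"frame-0001-pixel-data"
tag = sign_frame(original)
tampered = b"frame-0001-pixel-data-with-swapped-face"
```

The cryptography is the easy part; the hard engineering problems are key protection on consumer hardware and surviving legitimate re-encoding, which is why such schemes remain difficult to implement.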

The prevalence of freely available and easy-to-use deepfake software is an ongoing concern. While there are still many procedural and computing roadblocks to creating realistic outputs, Moore's Law and history tell us it's only a matter of time before these technologies get better and more accessible. We need more research, more technology options, and perhaps regulation to help ward off deepfake dangers.


Matt Lewis is an experienced Technical Research Director in cybersecurity at NCC Group, one of the largest security consultancies in the world, with over 35 global offices, 2,000 employees, and 15,000 clients. He has experience in cybersecurity consultancy, scenario-based ...
