Dark Reading is part of the Informa Tech Division of Informa PLC


Operations | Commentary
7/23/2020 10:00 AM
Matt Lewis

Deepfakes & James Bond Research Project: Cool but Dangerous

Open source software for creating deepfakes is getting better and better, to the chagrin of researchers

In January 2020, NCC Group collaborated with University College London (UCL) students on the topic of cybersecurity implications of deepfakes. As part of our wider research into artificial intelligence (AI) and machine learning, we continue to explore the potential impact of deepfakes in a cybersecurity context, particularly around their use in nefarious activities. There have already been numerous stories of real-world fraudsters using AI to mimic CEO voices in cybercriminal activities, and we believe it's only a matter of time before we see similar, visual-based attempts using deepfake frameworks. And remember, many of those frameworks are open source and freely available for experimentation.

Project & Challenge
Our brief to the students (who are part of UCL's Centre for Doctoral Training in Data Intensive Science) was to explore common open source deepfake frameworks and broadly assess them in terms of ease of use and quality of faked outputs. This first part of the research was to help us understand how accessible these frameworks are to potential fraudsters, and the computational resources and execution times needed to produce realistic outputs. We examined two in particular, FaceSwap and DeepFaceLab, and one open source speech-driven facial synthesis model.

We also asked them to help us explore the practicalities — specifically, how realistic fake videos can be achieved. The challenge was to take a three-minute clip from a movie (Casino Royale) and replace the face of the lead character (Daniel Craig playing James Bond) with my face. This helped us understand logistical aspects around source and destination video qualities, lighting conditions, angles, and facial expressions of source and target imagery. We also got a better understanding of not only the technical details but also the procedural and physical aspects.

On the procedural front, we learned that when trying to create realistic deepfakes, the quality (resolution) of the source and destination image sets is very important, in that they should match very well. We lost some realism in the output because the initial HD-quality source footage didn't match the smoother, cinematic look of the target video. Lighting conditions are also important, and both source and target faces should be similar in shape. For example, our source image had to be slightly stretched to match that of the James Bond character.
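The resolution-matching point can be sketched in code. The snippet below is a minimal nearest-neighbour resampling illustration that brings a source face crop to the target footage's resolution; the function name and approach are ours, not taken from FaceSwap or DeepFaceLab, which handle this internally:

```python
import numpy as np

def match_resolution(src: np.ndarray, target_hw: tuple) -> np.ndarray:
    """Nearest-neighbour resample of a face crop (H, W, C) so that
    source and destination image sets share the same resolution."""
    th, tw = target_hw
    sh, sw = src.shape[:2]
    rows = np.arange(th) * sh // th  # map each target row to a source row
    cols = np.arange(tw) * sw // tw  # map each target column to a source column
    return src[rows][:, cols]

# e.g. shrink an HD-style face crop to match a smoother, lower-resolution target
hd_face = np.zeros((256, 256, 3), dtype=np.uint8)
matched = match_resolution(hd_face, (128, 128))
```

In practice a deepfake pipeline would use proper interpolation (bilinear or Lanczos), but the principle is the same: mismatched resolutions between source and target crops visibly degrade the blended result.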

Everyday objects also presented difficulties: the simple act of wearing glasses made a convincing face swap noticeably harder to produce, which suggests that glasses could actually help thwart deepfake attempts.

Procedurally, we also learned that it's harder to produce realistic deepfakes when the source image doesn't have the same types of mouth shape and movement (during dialogue) and eye movements related to raised eyebrows or blinking. Attackers seeking to create realistic deepfakes need a rich source facial image dataset of each individual with different facial expressions and angles.

What We Learned
Our research was designed to help us better understand technical risk mitigation strategies and/or policies, regulation, and legislation that might be needed to curb potential abuse of deepfake technology. Here's what we found:

  • There are many open source frameworks already available for creating deepfakes.
  • Many models are optimized for high-end PCs or high-performance computing (HPC) clusters, and require lengthy training.
  • The frameworks are easy to pick up but harder to master.
  • There is plenty of scope for human error, which results in unrealistic videos.

There are many procedural aspects that impede the creation of convincing deepfakes: lighting, angles, and the need for source and destination faces of similar size and shape.

In terms of detection, our research did identify a few existing techniques that offer varying degrees of deepfake detection. These largely rely on imperfections in the faked output, which means that as generative models improve, such defensive measures will become less effective.
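As a toy illustration of an imperfection-based detector: early deepfakes were known to blink unnaturally rarely, so one could flag clips whose blink rate is implausibly low. The eye-aspect-ratio (EAR) signal and the threshold values below are illustrative assumptions, not a production detector:

```python
def blink_count(ear_series: list, closed_thresh: float = 0.2) -> int:
    """Count blinks as open-to-closed transitions in a per-frame
    eye-aspect-ratio (EAR) series; EAR drops when the eye closes."""
    blinks, was_open = 0, True
    for ear in ear_series:
        if was_open and ear < closed_thresh:
            blinks += 1
            was_open = False
        elif ear >= closed_thresh:
            was_open = True
    return blinks

def looks_fake(ear_series: list, fps: int = 30, min_blinks_per_min: float = 6) -> bool:
    """Flag a clip as suspect if its blink rate is implausibly low
    (humans typically blink well over 6 times per minute)."""
    minutes = len(ear_series) / fps / 60
    return blink_count(ear_series) < min_blinks_per_min * minutes
```

This is exactly the kind of defense that erodes as models improve: once generators learn realistic blinking, the heuristic stops working.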

Preventative mechanisms pose an even bigger challenge: They require either the introduction of watermarking (which brings its own limitations) or the establishment of root of trust at the point of original content creation. These would be difficult to engineer and implement.
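The root-of-trust idea can be sketched with Python's standard library: a capture device signs content at the point of creation, and any later modification invalidates the signature. A real scheme would use public-key signatures and secure hardware rather than the shared device key assumed here:

```python
import hashlib
import hmac

def sign_content(media_bytes: bytes, device_key: bytes) -> str:
    """Sign media at capture time with the device's key, anchoring
    a root of trust at the point of original content creation."""
    return hmac.new(device_key, media_bytes, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, device_key: bytes, signature: str) -> bool:
    """Later verification fails if even one byte of the media changed."""
    expected = sign_content(media_bytes, device_key)
    return hmac.compare_digest(expected, signature)
```

The engineering difficulty the research points to is not the cryptography itself but deployment: every camera and editing tool in the chain would need to participate for the signature to mean anything.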

The prevalence of freely available and easy-to-use deepfake software is an ongoing concern. While there are still many procedural and computing roadblocks to creating realistic outputs, Moore's Law and history tell us it's only a matter of time before these technologies get better and more accessible. We need more research, more technology options, and perhaps regulation to help ward off deepfake dangers.

Register now for this year's fully virtual Black Hat USA, scheduled to take place August 1–6, and get more information about the event on the Black Hat website.

Matt Lewis is an experienced Technical Research Director in cybersecurity at NCC Group, one of the largest security consultancies in the world with over 35 global offices, 2,000 employees and 15,000 clients. He has experience in cybersecurity consultancy, scenario-based ...
 
