Operations
7/23/2020 10:00 AM
Matt Lewis
Commentary

Deepfakes & James Bond Research Project: Cool but Dangerous

Open source software for creating deepfakes is getting better and better, to the chagrin of researchers

In January 2020, NCC Group collaborated with University College London (UCL) students on the cybersecurity implications of deepfakes. As part of our wider research into artificial intelligence (AI) and machine learning, we continue to explore the potential impact of deepfakes in a cybersecurity context, particularly their use in nefarious activities. There have already been numerous reports of real-world fraudsters using AI to mimic CEOs' voices, and we believe it's only a matter of time before we see similar, video-based attempts using deepfake frameworks. And remember, many of those frameworks are open source and freely available for experimentation.

Project & Challenge
Our brief to the students (who are part of UCL's Centre for Doctoral Training in Data Intensive Science) was to explore common open source deepfake frameworks and broadly assess them in terms of ease of use and quality of faked outputs. This first part of the research was to help us understand how accessible these frameworks are to potential fraudsters, and the computational resources and execution times needed to produce realistic outputs. We examined two in particular, FaceSwap and DeepFaceLab, and one open source speech-driven facial synthesis model.
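For readers who want a feel for how these face-swap frameworks work under the hood, here is a minimal, illustrative sketch of the shared-encoder, dual-decoder autoencoder design they are built around. The layer sizes, the 64x64 working resolution, and the use of PyTorch are assumptions made for brevity, not the actual architectures of FaceSwap or DeepFaceLab.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns identity-agnostic facial structure from both people."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: renders the shared representation as one person's face."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training would teach each decoder to reconstruct its own person's aligned face crops.
# The swap itself: encode a frame of person A, then decode it with person B's decoder.
face_a = torch.rand(1, 3, 64, 64)             # stand-in for a real aligned face crop
swapped = decoder_b(encoder(face_a))
print(swapped.shape)                          # torch.Size([1, 3, 64, 64])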

We also asked them to help us explore the practicalities — specifically, how realistic fake videos can be achieved. The challenge was to take a three-minute clip from a movie (Casino Royale) and replace the face of the lead character (Daniel Craig playing James Bond) with my face. This helped us understand logistical aspects around source and destination video qualities, lighting conditions, angles, and facial expressions of source and target imagery. We also got a better understanding of not only the technical details but also the procedural and physical aspects.

On the procedural front, we learned that when trying to create realistic deepfakes, the quality (resolution) of the source and destination image sets matters a great deal: they should match as closely as possible. We lost some realism in our output because the initial HD source footage didn't match the softer, more cinematic look of the target video. Lighting conditions are also important, and the source and target faces should be similar in shape; for example, our source image had to be slightly stretched to match that of the James Bond character.
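To make the procedural point concrete, a preprocessing pass along the following lines can bring source captures closer to the destination footage before training. The file names, and the use of OpenCV and scikit-image for resizing and histogram matching, are assumptions for illustration; real pipelines in FaceSwap and DeepFaceLab handle alignment and color correction in their own ways.

import cv2
from skimage.exposure import match_histograms

# Hypothetical inputs: a high-resolution source face capture and a frame
# extracted from the target movie clip.
src = cv2.imread("source_face.png")
dst = cv2.imread("destination_frame.png")

# Downscale the source so its sampling roughly matches the destination footage.
src_resized = cv2.resize(src, (dst.shape[1], dst.shape[0]), interpolation=cv2.INTER_AREA)

# Nudge the source's color and lighting statistics toward the destination frame.
src_matched = match_histograms(src_resized, dst, channel_axis=-1)

cv2.imwrite("source_face_matched.png", src_matched.astype("uint8"))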

Everyday objects also presented difficulties: something as simple as wearing glasses makes a face harder to swap convincingly, which could in itself help thwart deepfake attacks.

Procedurally, we also learned that it's harder to produce realistic deepfakes when the source imagery doesn't capture the same kinds of mouth shapes and movements (during dialogue), eyebrow raises, and blinking as the target. Attackers seeking to create realistic deepfakes therefore need a rich dataset of source facial images for each individual, covering a range of expressions and angles.

What We Learned
Our research was designed to help us better understand the technical risk mitigation strategies, policies, regulation, and legislation that might be needed to curb potential abuse of deepfake technology. Here's what we found:

  • There are many open source frameworks already available for creating deepfakes.
  • Many models are optimized for high-end PCs or HPCs, and require lengthy training.
  • The frameworks are easy to pick up but harder to master.
  • There is plenty of scope for human error, which results in unrealistic videos.

There are many procedural aspects that impede the creation of convincing deepfakes: lighting, angles, and source and destination faces of similar size and shape.

In terms of prevention, our research identified a few existing techniques that offer varying degrees of deepfake detection. These largely rely on spotting imperfections in the generated output, which means that as the models improve, such defensive measures will become less effective.
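One widely cited example of an imperfection-based check is blink-rate analysis: early deepfakes blinked far less often than real people because training sets rarely contain closed-eye frames. The sketch below computes the standard eye-aspect-ratio measure from per-frame eye landmarks and counts blinks; the threshold and the landmark input format are assumptions for illustration, and the technique loses power as generators learn to blink convincingly.

import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, in the usual 68-point ordering."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count open-to-closed transitions in a sequence of per-frame EAR values."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_threshold:
            closed = False
    return blinks

# A genuine 30-second clip at 25 fps typically shows several blinks;
# a count at or near zero is a red flag worth closer inspection.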

Preventative mechanisms pose an even bigger challenge: They require either the introduction of watermarking (which brings its own limitations) or the establishment of a root of trust at the point of original content creation. Both would be difficult to engineer and implement.
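As a sketch of what a root of trust at the point of creation could look like, imagine the capture device signing a hash of each recording with a key held in secure hardware, so that any later manipulation (including a deepfake re-render) breaks verification. The example below uses Ed25519 signatures from the Python cryptography library purely for illustration; key provisioning, distribution, and revocation are the genuinely hard engineering problems and are omitted.

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # in practice, generated and kept in secure hardware
public_key = device_key.public_key()        # registered so verifiers can look it up per device

def sign_recording(video_bytes: bytes) -> bytes:
    """Sign a digest of the raw recording at capture time."""
    return device_key.sign(hashlib.sha256(video_bytes).digest())

def verify_recording(video_bytes: bytes, signature: bytes) -> bool:
    """Verify that the bytes we hold now are the bytes the device signed."""
    try:
        public_key.verify(signature, hashlib.sha256(video_bytes).digest())
        return True
    except InvalidSignature:
        return False

clip = b"raw video bytes straight off the sensor"        # stand-in for real footage
signature = sign_recording(clip)
print(verify_recording(clip, signature))                  # True
print(verify_recording(clip + b" tampered", signature))   # False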

The prevalence of freely available and easy-to-use deepfake software is an ongoing concern. While there are still many procedural and computing roadblocks to creating realistic outputs, Moore's Law and history tell us it's only a matter of time before these technologies get better and more accessible. We need more research, more technology options, and perhaps regulation to help ward off deepfake dangers.


Matt Lewis is an experienced Technical Research Director in cybersecurity at NCC Group, one of the largest security consultancies in the world with over 35 global offices, 2,000 employees and 15,000 clients. He has experience in cybersecurity consultancy, scenario-based ...
