
Labhesh Patel

The Rise of Deepfakes and What That Means for Identity Fraud

Convincing deepfakes are a real concern, but there are ways of fighting back.

The number of deepfake videos found online doubled from 2018 to 2019 and has continued to rise since then. A deepfake superimposes existing video footage or photographs of a face onto a source head and body using advanced neural network-powered artificial intelligence, and can make fraudulent video and audio content seem incredibly real.

The security industry continues to move away from outdated authentication methods, such as SMS-based two-factor authentication and knowledge-based authentication, which are highly susceptible to fraud. As a result, more advanced biometric-based authentication methods are increasingly being adopted as a secure alternative, but depending on the sophistication of those biometric systems, the rise of deepfake technology will undoubtedly become a larger concern.

How Are Deepfakes Created So Convincingly?
Sophisticated deepfakes require considerable amounts of data and compute power to create — it's not easy to swap out a person's face in a video. You'll need an extensive amount of video and voice recordings (e.g., recorded sound bites) of the subjects, plus a commercially available AI cloud package capable of training a neural network to take in an image of one person's face and output an image of the other person's face with the same pose, expression, and illumination.

Once the model is trained, it's applied to the specific video that you want to modify. If you want to change the words in the video, you'll need a neural network that is also trained on voice data. Depending on your skill level, it may take hours or days, along with hundreds of dollars in computing time on commercial cloud services. But generally speaking, publicly available software can help you create deepfakes for less than $1,000 in computing costs and a few weeks spent learning the tools.
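The training scheme described above (one network that maps one person's face to another's with matching pose and expression) can be sketched in miniature. The toy below uses a shared encoder with one decoder per identity on synthetic data; every dimension, learning rate, and variable name is an illustrative assumption, not taken from any real deepfake tool:

```python
# Toy illustration of face-swap training: one shared encoder, one decoder
# per identity. Data is synthetic "face vectors"; all sizes are made up.
import numpy as np

rng = np.random.default_rng(0)
D, K, N = 64, 8, 200          # face vector size, latent size, samples per identity

# Synthetic faces: each identity occupies its own low-rank subspace.
faces_a = rng.normal(size=(N, K)) @ rng.normal(size=(K, D))
faces_b = rng.normal(size=(N, K)) @ rng.normal(size=(K, D))

W_enc = rng.normal(size=(D, K)) * 0.1          # shared encoder
W_dec = {"a": rng.normal(size=(K, D)) * 0.1,   # identity-specific decoders
         "b": rng.normal(size=(K, D)) * 0.1}

def mse(x, y):
    return float(np.mean((x - y) ** 2))

loss_start = mse((faces_a @ W_enc) @ W_dec["a"], faces_a)

lr = 1e-3
for _ in range(300):
    for name, faces in (("a", faces_a), ("b", faces_b)):
        z = faces @ W_enc                # encode with the shared encoder
        err = z @ W_dec[name] - faces    # reconstruction error
        # Plain gradient descent on the reconstruction loss
        # (constant factors folded into the learning rate).
        W_dec[name] -= lr * z.T @ err / N
        W_enc -= lr * faces.T @ (err @ W_dec[name].T) / N

loss_end = mse((faces_a @ W_enc) @ W_dec["a"], faces_a)

# The "swap": encode a face of identity A, decode with B's decoder.
swapped = (faces_a @ W_enc) @ W_dec["b"]
```

Real tools use deep convolutional networks and face alignment rather than linear layers, but the swap step works the same way: a shared latent representation of pose and expression is decoded through the other identity's decoder.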

While basic deepfake videos are not perfect (they may not capture the full details of a person's face, and there may be artifacts around the edges), they are often good enough to dupe many liveness detection systems, i.e., systems that analyze facial images and determine whether they show a live human being or a reproduction. Moreover, many high-quality models are pretrained and publicly available; anyone can run them on any video and create convincing deepfakes. And deepfake technology will continue to get better, faster, and cheaper. While convincing deepfakes are a real concern, there are ways of fighting back.

How to Identify a Deepfake
There are different types and levels of deepfakes, and certain tools are needed to discern a deepfake from a real live selfie. Some deepfakes are coarse and low quality, and they can be quickly produced with free apps. More convincing, higher-quality deepfakes require significantly more effort, skill, money, and time.

The human eye can detect deepfakes by closely observing the images for slight imperfections such as:

  • Face discolorations
  • Lighting that isn't quite right
  • Badly synced sound and video
  • Blurriness where the face meets the neck and hair

Algorithms can detect deepfakes by analyzing the images and revealing small inconsistencies between pixels, coloring, or distortion. It's also possible to use AI to detect deepfakes by training a neural network to spot changes in facial images that have been artificially altered by software. The most robust forms of liveness detection rely on machine learning, AI, and computer vision to examine dozens of minuscule details from a selfie video such as hair and skin texture, micromovements, and reflections in a subject's eye.
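The pixel-inconsistency idea can be illustrated with a minimal sketch (not a production detector): a manipulated region often carries different noise statistics than the rest of the frame. The block size, threshold, and noise model below are all assumptions chosen for the demo:

```python
# Illustrative sketch: flag image blocks whose local noise statistics
# deviate from the rest of the frame, as a crude splice/paste detector.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic grayscale frame: uniform scene plus sensor-like noise.
frame = 0.5 + rng.normal(scale=0.10, size=(64, 64))

# "Paste" a smoother, lower-noise region, as a cheap face swap might leave.
frame[16:32, 16:32] = 0.5 + rng.normal(scale=0.01, size=(16, 16))

def noise_map(img, block=8):
    """Local noise estimate: per-block standard deviation of pixel values."""
    h, w = img.shape
    blocks = img.reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3))

def flag_inconsistent(img, block=8, z_thresh=3.0):
    """Flag blocks whose local noise deviates strongly from the frame median."""
    nm = noise_map(img, block)
    med = np.median(nm)
    mad = np.median(np.abs(nm - med)) + 1e-9   # robust spread estimate
    return np.abs(nm - med) / (1.4826 * mad) > z_thresh

suspicious = flag_inconsistent(frame)   # boolean grid of flagged blocks
```

Real detectors look at far richer signals (compression traces, color filter array patterns, learned features), but the principle is the same: a synthesized region rarely matches the statistics of the camera pipeline that produced the rest of the image.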

How ID Verification Methods Get Spoofed
Online identity verification methods rely on a government-issued photo ID and a corroborating selfie. An important part of the process is the liveness detection (normally performed during the selfie-taking process), which ensures the person is physically present (versus a spoof or deepfake).

Liveness detection should determine whether the user is in fact a real person. Some basic forms of liveness detection require the user to blink, move their eyes, say a few words, or nod their head. Unfortunately, these basic forms can be spoofed with a deepfake video. A more sophisticated way to trick the system uses a regular photo that is quickly animated by software and turned into a lifelike avatar of the fraud victim. The attack enables on-command facial movements (blink, nod, smile, etc.) that look far more convincing to the camera than a lifeless photo. Requiring returning users to capture a selfie and re-establish "liveness" makes it much harder for fraudsters to take over existing accounts.
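The challenge-response pattern behind these checks can be sketched as follows. This is a hypothetical skeleton (the function names, action list, and time window are invented for illustration); it only shows why a randomized, time-boxed challenge is harder to pre-record than a static "blink once" prompt, and real systems layer texture, micromovement, and reflection analysis on top:

```python
# Hypothetical skeleton of a randomized challenge-response liveness check.
import random
import time

ACTIONS = ["blink", "nod", "smile", "turn_left", "turn_right"]

def issue_challenge(n=3, seed=None):
    """Pick a random ordered sequence of actions the user must perform."""
    rng = random.Random(seed)
    return {"actions": rng.sample(ACTIONS, n), "issued_at": time.monotonic()}

def verify_liveness(challenge, observed_actions, now=None, window_s=10.0):
    """Pass only if the observed actions match the challenge, in order,
    and arrive before the challenge expires."""
    now = time.monotonic() if now is None else now
    if now - challenge["issued_at"] > window_s:
        return False   # too slow: rendering a deepfake response takes time
    return observed_actions == challenge["actions"]

# Usage: the right actions pass, wrong order or a late reply fails.
c = issue_challenge(seed=42)
ok = verify_liveness(c, list(c["actions"]))
wrong_order = verify_liveness(c, list(reversed(c["actions"])))
too_late = verify_liveness(c, list(c["actions"]), now=c["issued_at"] + 11.0)
```

The unpredictability of the sequence and the short window are what raise the bar: an attacker can no longer replay one pre-rendered "blink" clip, though, as the article notes, real-time avatar animation can still defeat this alone.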

How to Arm Yourself Against Deepfakes
Unfortunately, there are no commercially available packages that can automatically detect deepfakes for all applications and all usage models, despite efforts from universities, online platforms, and tech giants to combat the threat.

As research continues, policymakers should develop legislation to discourage deepfake usage and penalize those who are caught manipulating videos to deceive others as a form of fraud. Tech companies also play a role. Facebook has strengthened its policy toward manipulated media to combat misinformation by adding removal criteria for deepfakes and the like. The company said it will now remove media which "has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say." These are the types of steps we need to maintain content integrity and build better online trust, but we should not assume that a Facebook policy will stop cybercriminals from creating deepfakes to bypass biometric-based verification solutions.

There will always be an arms race between hackers and cybersecurity engineers. Organizations should keep on the leading edge and adopt the latest technologies available on the market in order to guard against harm.


Labhesh Patel is the CTO and Chief Scientist at Jumio. He is responsible for driving Jumio's innovation by operationalizing deep learning, computer vision, and augmented intelligence. An out-of-the-box thinker, Labhesh has 135 patents issued under his name. View Full Bio
