Dark Reading is part of the Informa Tech Division of Informa PLC

Labhesh Patel

The Rise of Deepfakes and What That Means for Identity Fraud

Convincing deepfakes are a real concern, but there are ways of fighting back.

The number of deepfake videos found online doubled from 2018 to 2019 and has continued to rise since then. A deepfake superimposes existing video footage or photographs of a face onto a source head and body using advanced neural network-powered artificial intelligence, and can make fraudulent video and audio content seem incredibly real.

The security industry continues to move away from outdated authentication methods, such as SMS-based two-factor authentication and knowledge-based authentication, which are highly susceptible to fraud. As a result, more advanced biometric-based authentication methods have been on the rise as a secure alternative, but depending on the sophistication of those biometric systems, the rise of deepfake technology will undoubtedly become a larger concern.

How Are Deepfakes Created So Convincingly?
Sophisticated deepfakes require considerable amounts of data and compute power to create — it's not easy to swap out a person's face in a video. You'll need an extensive amount of video and voice (e.g., recorded sound bites) recordings of the subjects and a commercially available AI cloud package capable of training a neural network to take in an image of one person's face and output an image of the other person's face with the same pose, expression, and illumination.

Once the model is trained, it's applied to the specific video you want to modify. If you want to change the words spoken in the video, you'll also need a neural network trained on voice data. Depending on your skill level, the process may take hours or days, along with hundreds of dollars in computing time on commercial cloud services. Generally speaking, though, publicly available software can produce a convincing deepfake for less than $1,000, plus a few weeks spent learning the tools.

While some of the basic deepfake videos are not perfect (they may not capture the full details of a person's face and there may be some artifacts around the edges), they are often good enough to dupe most liveness detection systems, or systems able to analyze facial images and determine whether they are of a live human being or a reproduction. Not only that, many high-quality models are pretrained and publicly available. Anyone can run pretrained models on any video and create convincing deepfakes. Furthermore, deepfake technology will continue to get better, faster, and cheaper. While convincing deepfakes are a real concern, there are ways of fighting back.

How to Identify a Deepfake
Deepfakes come in different types and quality levels, and certain tools are needed to discern a deepfake from a real live selfie. Some deepfakes are coarse and low quality, and they can be quickly produced with free apps. More convincing, higher-quality deepfakes require significantly more effort, skill, money, and time.

The human eye can detect deepfakes by closely observing the images for slight imperfections such as:

  • Face discolorations
  • Lighting that isn't quite right
  • Badly synced sound and video
  • Blurriness where the face meets the neck and hair

Algorithms can detect deepfakes by analyzing the images and revealing small inconsistencies between pixels, coloring, or distortion. It's also possible to use AI to detect deepfakes by training a neural network to spot changes in facial images that have been artificially altered by software. The most robust forms of liveness detection rely on machine learning, AI, and computer vision to examine dozens of minuscule details from a selfie video such as hair and skin texture, micromovements, and reflections in a subject's eye.
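The pixel-inconsistency idea can be sketched in a few lines. This is purely illustrative (real detectors are trained neural networks operating on far richer features): it flags pixels whose local intensity variance spikes, a crude proxy for the blending artifacts left along a spliced face boundary.

```python
# Illustrative sketch only: flag pixels whose neighborhood statistics
# differ sharply from smooth surroundings -- a toy stand-in for the
# "inconsistencies between pixels" a real detector learns to spot.

def local_variance(gray, x, y, radius=1):
    """Variance of pixel intensities in a small window around (x, y)."""
    vals = [gray[j][i]
            for j in range(max(0, y - radius), min(len(gray), y + radius + 1))
            for i in range(max(0, x - radius), min(len(gray[0]), x + radius + 1))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def suspicious_pixels(gray, threshold=500.0):
    """Return coordinates whose local variance exceeds the threshold."""
    return [(x, y)
            for y in range(len(gray))
            for x in range(len(gray[0]))
            if local_variance(gray, x, y) > threshold]

# A tiny synthetic "image": a smooth region with an abrupt spliced edge.
image = [[10] * 4 + [200] * 4 for _ in range(8)]
flagged = suspicious_pixels(image)  # clusters along the splice at x = 3..4
```

In a real system, this kind of low-level signal would be one feature among many fed into a trained classifier, alongside texture, micromovement, and reflection cues.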

How ID Verification Methods Get Spoofed
Online identity verification methods rely on a government-issued photo ID and a corroborating selfie. A key part of the process is liveness detection (normally performed while the selfie is captured), which ensures the person is physically present rather than a spoof or deepfake.
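The flow described above can be outlined as follows. The function and parameter names here are assumptions for illustration, not any vendor's real SDK: the selfie must both pass a liveness check and match the photo ID.

```python
# Illustrative outline only (assumed names, not a real vendor API): a selfie
# is accepted only if it passes liveness detection AND matches the ID photo.

def verify_identity(id_photo, selfie, match_score, is_live, threshold=0.8):
    """match_score and is_live stand in for the biometric matching engine
    and the liveness detector, which in practice are ML models."""
    if not is_live(selfie):
        return "rejected: liveness check failed"   # spoof or deepfake suspected
    if match_score(id_photo, selfie) < threshold:
        return "rejected: selfie does not match ID photo"
    return "verified"

# Toy stand-ins for the real detectors:
result = verify_identity("id.jpg", "selfie.jpg",
                         match_score=lambda a, b: 0.93,
                         is_live=lambda s: True)
```

The ordering matters: checking liveness before face matching means a high-quality deepfake of the victim's face still fails if it cannot prove a live capture.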

Liveness detection should determine whether the user is in fact a real person. Some basic forms of liveness detection require the user to blink, move their eyes, say a few words, or nod their head. Unfortunately, these basic forms can be spoofed with a deepfake video. A more sophisticated way to trick the system uses a regular photo that is quickly animated by software and turned into a lifelike avatar of the fraud victim. The attack enables on-command facial movements (blink, nod, smile, etc.) that look far more convincing to the camera than a lifeless photo. However, by requiring returning users to capture a fresh selfie and re-establish "liveness," organizations can make it virtually impossible for fraudsters to take over existing accounts.
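One way to raise the bar against both prerecorded deepfakes and animated-photo avatars is a randomized, time-boxed challenge. The sketch below is a hypothetical design, not a description of any product: because the gesture sequence is unpredictable and must be completed within seconds, a canned deepfake cannot simply be replayed; the attacker would have to synthesize the correct response in real time.

```python
# Hypothetical sketch: challenge-response liveness. A random gesture
# sequence with a short expiry defeats replay of prerecorded spoof video.
import secrets
import time

CHALLENGES = ["blink", "nod", "smile", "turn_left", "turn_right"]

def issue_challenge(length=3, ttl_seconds=10):
    """Pick an unpredictable sequence of gestures and stamp an expiry time."""
    sequence = [secrets.choice(CHALLENGES) for _ in range(length)]
    return {"sequence": sequence, "expires_at": time.time() + ttl_seconds}

def verify_response(challenge, observed_gestures, now=None):
    """Accept only if the observed gestures match the issued sequence
    and arrive before the challenge expires."""
    now = time.time() if now is None else now
    if now > challenge["expires_at"]:
        return False  # too slow: likely offline synthesis or replay
    return observed_gestures == challenge["sequence"]

challenge = issue_challenge()
# A live user performs the requested gestures in order:
ok = verify_response(challenge, list(challenge["sequence"]))
# A replayed recording from a different session does not match:
replayed = verify_response(challenge, ["smile"])
```

Note that this only addresses replay; a sufficiently fast real-time deepfake pipeline could still attempt the gestures on demand, which is why challenge-response is layered with the texture and reflection analysis described earlier rather than used alone.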

How to Arm Yourself Against Deepfakes
Unfortunately, there are no commercially available packages that can automatically detect deepfakes for all applications and all usage models, despite efforts from universities, online platforms, and tech giants to combat the threat.

As research continues, policymakers should develop legislation to discourage deepfake usage and penalize those who are caught manipulating videos to deceive others as a form of fraud. Tech companies also play a role. Facebook has strengthened its policy toward manipulated media to combat misinformation by adding removal criteria for deepfakes and the like. The company said it will now remove media that "has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say." These are the types of steps we need in order to maintain content integrity and build online trust, but we should not assume that a Facebook policy will stop cybercriminals from creating deepfakes to bypass biometric-based verification solutions.

There will always be an arms race between hackers and cybersecurity engineers. Organizations should stay on the leading edge and adopt the latest technologies available on the market in order to guard against harm.


Labhesh Patel is the CTO and Chief Scientist at Jumio. He is responsible for driving Jumio's innovation by operationalizing deep learning, computer vision, and augmented intelligence. An out-of-the-box thinker, Labhesh has 135 patents issued under his name.
