Dark Reading is part of the Informa Tech Division of Informa PLC



4/30/2020 10:00 AM
Labhesh Patel
Commentary

The Rise of Deepfakes and What That Means for Identity Fraud

Convincing deepfakes are a real concern, but there are ways of fighting back.

The number of deepfake videos found online doubled from 2018 to 2019 and has continued to rise since then. A deepfake uses neural network-powered artificial intelligence to superimpose existing video footage or photographs of one person's face onto a source head and body, and the resulting fraudulent video and audio content can seem incredibly real.

The security industry continues to move away from outdated authentication methods, such as SMS-based two-factor authentication and knowledge-based authentication, which are susceptible to fraud. In their place, more advanced biometric authentication methods are being adopted as a secure alternative; but depending on the sophistication of those biometric systems, the rise of deepfake technology will undoubtedly become a larger concern.

How Are Deepfakes Created So Convincingly?
Sophisticated deepfakes require considerable amounts of data and compute power to create; it's not easy to swap out a person's face in a video. You'll need an extensive set of video and voice recordings (e.g., recorded sound bites) of the subjects, plus a commercially available AI cloud package capable of training a neural network to take in an image of one person's face and output an image of the other person's face with the same pose, expression, and illumination.
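The shared-encoder, per-identity-decoder design described above can be sketched in miniature. The toy model below is a purely linear stand-in (real deepfake models use deep convolutional networks, and the "faces" here are just random vectors), but it shows the mechanics: both identities share one encoder, each identity gets its own decoder, and the "swap" is encoding a frame of person A and decoding it with person B's decoder. All names, dimensions, and learning-rate values are illustrative assumptions, not details from any real system.

```python
import numpy as np

# Toy linear analogue of a face-swap model: one shared encoder plus a
# per-identity decoder, trained by plain gradient descent.
rng = np.random.default_rng(0)
DIM, LATENT = 32, 8                         # "face" vector size, latent size

faces_a = rng.normal(size=(100, DIM))       # stand-ins for person A's frames
faces_b = rng.normal(size=(100, DIM)) + 2.0 # stand-ins for person B's frames

W_enc = rng.normal(scale=0.1, size=(LATENT, DIM))    # shared encoder
W_dec_a = rng.normal(scale=0.1, size=(DIM, LATENT))  # decoder for identity A
W_dec_b = rng.normal(scale=0.1, size=(DIM, LATENT))  # decoder for identity B

def step(X, W_dec, lr=1e-3):
    """One gradient step minimising mean ||W_dec @ W_enc @ x - x||^2."""
    global W_enc
    Z = X @ W_enc.T                          # encode
    Y = Z @ W_dec.T                          # decode
    err = Y - X
    W_dec -= lr * (2 * err.T @ Z / len(X))           # decoder gradient
    W_enc -= lr * (2 * (err @ W_dec).T @ X / len(X)) # shared-encoder gradient
    return float((err ** 2).mean())          # loss before this update

loss_start = step(faces_a, W_dec_a)
for _ in range(2000):                        # alternate identities each epoch
    loss_a = step(faces_a, W_dec_a)
    loss_b = step(faces_b, W_dec_b)

# The "swap": encode a frame of person A, decode with person B's decoder.
swapped = faces_a[0] @ W_enc.T @ W_dec_b.T
```

The key design point is the shared encoder: because it must represent both identities in the same latent space (pose, expression, illumination), routing A's latent code through B's decoder re-renders the frame with B's appearance.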

Once the model is trained, it's applied to the specific video you want to modify. If you want to change the words spoken in the video, you'll also need a neural network trained on voice data. Depending on your skill level, the process may take hours or days, along with hundreds of dollars in computing time on commercial cloud services. Generally speaking, though, publicly available software can help you create deepfakes for less than $1,000 and a few weeks spent learning the tools.

While basic deepfake videos are not perfect (they may not capture the full details of a person's face, and there may be artifacts around the edges), they are often good enough to dupe most liveness detection systems: systems that analyze facial images and determine whether they show a live human being or a reproduction. What's more, many high-quality models are pretrained and publicly available, so anyone can run them on any video and create convincing deepfakes. And deepfake technology will continue to get better, faster, and cheaper. While convincing deepfakes are a real concern, there are ways of fighting back.

How to Identify a Deepfake
There are different types and levels of deepfakes, and certain tools are needed to discern a deepfake from a real live selfie. Some deepfakes are coarser and lower quality and can be produced quickly with free apps. More convincing, higher-quality deepfakes require significantly more effort, skill, money, and time.

The human eye can detect deepfakes by closely observing the images for slight imperfections such as:

  • Face discolorations
  • Lighting that isn't quite right
  • Badly synced sound and video
  • Blurriness where the face meets the neck and hair
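Parts of the checklist above can be automated. The sketch below scores sharpness with the variance of a discrete Laplacian (a common blur metric) and flags a frame whose face/neck boundary patch is much softer than the frame overall. The patch coordinates, threshold ratio, and synthetic test frame are illustrative assumptions, not tuned values from any real detector.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness score: variance of the 4-neighbour discrete Laplacian."""
    lap = (gray[:-2, 1:-1] + gray[2:, 1:-1] +
           gray[1:-1, :-2] + gray[1:-1, 2:] -
           4.0 * gray[1:-1, 1:-1])
    return float(lap.var())

def boundary_blur_flag(frame: np.ndarray, boundary: tuple,
                       ratio: float = 0.25) -> bool:
    """Flag the frame if the boundary patch is far softer than the whole.

    `boundary` is (row0, row1, col0, col1) for the face/neck region;
    in practice it would come from a face-landmark detector.
    """
    r0, r1, c0, c1 = boundary
    patch_score = laplacian_variance(frame[r0:r1, c0:c1])
    frame_score = laplacian_variance(frame)
    return patch_score < ratio * frame_score

# Synthetic demo: a noisy (sharp) frame whose lower band has been smoothed,
# mimicking the telltale blur where a swapped face meets the neck.
rng = np.random.default_rng(1)
frame = rng.random((120, 120))
frame[80:, :] = frame[80:, :].mean()   # flatten the "neck" region
suspicious = boundary_blur_flag(frame, (80, 120, 0, 120))
```

A low Laplacian variance in just one region is only a weak signal on its own; production detectors combine many such cues, as the next paragraph describes.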

Algorithms can detect deepfakes by analyzing images and revealing small inconsistencies in pixels, coloring, or distortion. It's also possible to use AI to detect deepfakes by training a neural network to spot changes in facial images that have been artificially altered by software. The most robust forms of liveness detection rely on machine learning, AI, and computer vision to examine dozens of minuscule details in a selfie video, such as hair and skin texture, micromovements, and reflections in a subject's eye.

How ID Verification Methods Get Spoofed
Online identity verification methods rely on a government-issued photo ID and a corroborating selfie. A key step is liveness detection, normally performed while the selfie is captured, which ensures the person is physically present rather than a spoof or deepfake.

Liveness detection should determine whether the user is in fact a real, physically present person. Basic forms of liveness detection require the user to blink, move their eyes, say a few words, or nod their head. Unfortunately, these basic checks can be spoofed with a deepfake video. A more sophisticated way to trick the system uses a regular photo that is quickly animated by software and turned into a lifelike avatar of the fraud victim. The attack enables on-command facial movements (blink, nod, smile, etc.) that look far more convincing to the camera than a lifeless photo. Requiring returning users to capture a fresh selfie and re-establish "liveness" makes it far harder for fraudsters to take over existing accounts.
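One way to harden challenge-based liveness against pre-rendered deepfake clips is to make the prompt sequence random per session, integrity-protected, and short-lived. The sketch below, assuming a hypothetical verifier service, shows that protocol shape; all function names, the action list, and the timeout are invented for illustration, and a real system would also run frame-level liveness analysis on the captured video.

```python
import hashlib
import hmac
import secrets
import time

ACTIONS = ("blink", "nod", "smile", "turn_left", "turn_right")
SERVER_KEY = secrets.token_bytes(32)   # per-deployment secret
CHALLENGE_TTL = 10.0                   # seconds allowed to respond

def issue_challenge() -> dict:
    """Server side: a random action sequence bound to a signed nonce."""
    actions = [secrets.choice(ACTIONS) for _ in range(3)]
    nonce = secrets.token_hex(16)
    mac = hmac.new(SERVER_KEY, (nonce + "|".join(actions)).encode(),
                   hashlib.sha256).hexdigest()
    return {"actions": actions, "nonce": nonce,
            "issued": time.monotonic(), "mac": mac}

def verify_response(challenge: dict, observed_actions: list) -> bool:
    """Server side: check integrity, freshness, and the observed sequence."""
    expected = hmac.new(
        SERVER_KEY,
        (challenge["nonce"] + "|".join(challenge["actions"])).encode(),
        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, challenge["mac"]):
        return False                   # challenge was tampered with
    if time.monotonic() - challenge["issued"] > CHALLENGE_TTL:
        return False                   # too slow: possible offline rendering
    return observed_actions == challenge["actions"]

ch = issue_challenge()
ok = verify_response(ch, ch["actions"])       # honest, prompt response
wrong = verify_response(ch, ch["actions"][:2])  # incomplete sequence fails
```

Because the sequence is unpredictable and expires quickly, an attacker cannot simply replay a previously generated deepfake clip; they would have to synthesize the exact prompted movements in real time.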

How to Arm Yourself Against Deepfakes
Unfortunately, there are no commercially available packages that can automatically detect deepfakes for all applications and all usage models, despite efforts from universities, online platforms, and tech giants to combat the threat.

As research continues, policymakers should develop legislation to discourage deepfake usage and penalize those who are caught manipulating videos to deceive others as a form of fraud. Tech companies also play a role. Facebook has strengthened its policy toward manipulated media to combat misinformation by adding removal criteria for deepfakes and the like. The company said it will now remove media that "has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say." These are the types of steps we need in order to maintain content integrity and better online trust, but we should not assume that a Facebook policy will stop cybercriminals from creating deepfakes to bypass biometric-based verification solutions.

There will always be an arms race between hackers and cybersecurity engineers. Organizations should keep on the leading edge and adopt the latest technologies available on the market in order to guard against harm.

Related Content:

A listing of free products and services compiled for Dark Reading by Omdia analysts to help meet the challenges of COVID-19. 

Labhesh Patel is the CTO and Chief Scientist at Jumio. He is responsible for driving Jumio's innovation by operationalizing deep learning, computer vision, and augmented intelligence. An out-of-the-box thinker, Labhesh has 135 patents issued under his name. View Full Bio
 
