The Rise of Deepfakes and What That Means for Identity Fraud
Convincing deepfakes are a real concern, but there are ways of fighting back.
The number of deepfake videos found online doubled from 2018 to 2019 and has continued to rise since then. A deepfake uses advanced neural network-powered artificial intelligence to superimpose existing video footage or photographs of a face onto a source head and body, and it can make fraudulent video and audio content seem incredibly real.
The security industry continues to move away from outdated authentication methods, such as SMS-based two-factor authentication and knowledge-based authentication, which are highly susceptible to fraud. In their place, more advanced biometric-based authentication methods have gained ground as a secure alternative, but depending on the sophistication of those biometric systems, deepfake technology will undoubtedly become a larger concern.
How Are Deepfakes Created So Convincingly?
Sophisticated deepfakes require considerable amounts of data and compute power to create; it's not easy to swap out a person's face in a video. You'll need extensive video and voice recordings (e.g., recorded sound bites) of the subjects, plus a commercially available AI cloud package capable of training a neural network to take in an image of one person's face and output an image of the other person's face with the same pose, expression, and illumination.
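To make that concrete, here is a minimal, hypothetical sketch of the shared-encoder, dual-decoder autoencoder design popularized by open-source face-swap tools. Everything in it (layer sizes, learning rate, and the random tensors standing in for face crops) is an illustrative assumption, not any specific product's implementation:

```python
# Sketch of the shared-encoder / dual-decoder autoencoder commonly used for
# face swapping. Hyperparameters and data are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),  # assumes 64x64 input crops
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=5e-5,
)
loss_fn = nn.L1Loss()

# faces_a / faces_b stand in for batches of aligned 64x64 face crops of each
# subject; a real pipeline would load thousands of such crops from video.
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

for step in range(1000):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own identity from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode a frame of person A, then decode with B's decoder to
# render B's face with A's pose, expression, and lighting.
swapped = decoder_b(encoder(faces_a))
```

The swap itself is just routing one identity's encoding through the other identity's decoder, which is why so much subject footage is needed: both decoders must learn their faces well enough to render arbitrary poses.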
Once the model is trained, it's applied frame by frame to the specific video you want to modify. If you want to change the words in the video, you'll also need a neural network trained on voice data. Depending on your skill level, the process may take hours or days, along with hundreds of dollars in computing time on commercial cloud services. But generally speaking, publicly available software can produce deepfakes for less than $1,000 and a few weeks spent learning the tools.
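Applying the trained model is conceptually simple: detect the face in each frame, run the crop through the network, and paste the result back. The sketch below reuses the hypothetical encoder and decoder_b from the training sketch above; the Haar-cascade detection and naive paste-back are deliberate simplifications (real tools align facial landmarks and blend seams carefully):

```python
# Hedged sketch: apply a trained swap model to a video, frame by frame.
# File names are placeholders; color-space and alignment details are omitted.
import cv2
import numpy as np
import torch

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("source.mp4")
out = cv2.VideoWriter(
    "swapped.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
    cap.get(cv2.CAP_PROP_FPS),
    (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
     int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64))
        tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255
        with torch.no_grad():
            swapped = decoder_b(encoder(tensor))  # render the target face
        patch = (swapped.squeeze(0).permute(1, 2, 0).numpy() * 255).astype(np.uint8)
        frame[y:y + h, x:x + w] = cv2.resize(patch, (w, h))  # naive paste-back
    out.write(frame)

cap.release()
out.release()
```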
While some basic deepfake videos are not perfect (they may not capture the full details of a person's face, and there may be artifacts around the edges), they are often good enough to dupe most liveness detection systems, that is, systems that analyze facial images and determine whether they show a live human being or a reproduction. Not only that, many high-quality models are pretrained and publicly available, so anyone can run them on any video and create convincing deepfakes. Furthermore, deepfake technology will continue to get better, faster, and cheaper. While convincing deepfakes are a real concern, there are ways of fighting back.
How to Identify a Deepfake
Deepfakes come in different types and quality levels, and discerning a deepfake from a real live selfie requires the right tools. Some deepfakes are coarser and lower quality, and they can be produced quickly with free apps. More convincing, higher-quality deepfakes require significantly more effort, skill, money, and time.
The human eye can detect deepfakes by closely observing the images for slight imperfections such as:
Face discolorations
Lighting that isn't quite right
Badly synced sound and video
Blurriness where the face meets the neck and hair
Algorithms can detect deepfakes by analyzing the images and revealing small inconsistencies in pixels, coloring, or distortion. It's also possible to use AI to detect deepfakes by training a neural network to spot changes in facial images that have been artificially altered by software. The most robust forms of liveness detection rely on machine learning, AI, and computer vision to examine dozens of minuscule details from a selfie video, such as hair and skin texture, micromovements, and reflections in a subject's eyes.
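As one simple illustration of an algorithmic cue: many synthesized faces leave periodic upsampling artifacts that show up as unusual energy in the image's frequency spectrum. The sketch below is a toy version of that idea; the file name and cutoff value are hypothetical, and production detectors train classifiers on such features rather than using a fixed threshold:

```python
# Toy frequency-domain check for synthesis artifacts. The 0.35 cutoff is an
# illustrative assumption; a real detector would learn it from labeled data.
import cv2
import numpy as np

def high_freq_energy_ratio(path: str) -> float:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8
    low = spectrum[cy - r:cy + r, cx - r:cx + r].sum()  # low-frequency band
    return float((spectrum.sum() - low) / spectrum.sum())  # energy outside it

ratio = high_freq_energy_ratio("selfie.png")  # placeholder file name
if ratio > 0.35:
    print("Unusual high-frequency energy; flag for deeper review.")
```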
How ID Verification Methods Get Spoofed
Online identity verification methods rely on a government-issued photo ID and a corroborating selfie. An important part of the process is liveness detection, normally performed while the selfie is captured, which ensures the person is physically present rather than a spoof or deepfake.
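For context, the matching half of that flow can be approximated with the open-source face_recognition library. This sketch compares an ID photo against a selfie using the library's conventional 0.6 distance tolerance; the file names are placeholders, and it deliberately omits liveness, which is exactly the part deepfakes attack:

```python
# Hedged sketch of the ID-to-selfie face match. Commercial systems layer
# liveness detection on top of a match like this one.
import face_recognition

id_image = face_recognition.load_image_file("passport_photo.jpg")
selfie = face_recognition.load_image_file("selfie.jpg")

id_encodings = face_recognition.face_encodings(id_image)
selfie_encodings = face_recognition.face_encodings(selfie)

if not id_encodings or not selfie_encodings:
    raise ValueError("No face found in one of the images")

match = face_recognition.compare_faces(
    [id_encodings[0]], selfie_encodings[0], tolerance=0.6)[0]
distance = face_recognition.face_distance(
    [id_encodings[0]], selfie_encodings[0])[0]
print(f"match={match}, distance={distance:.3f}")
```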
Liveness detection should determine whether the user is in fact a real person. Some basic forms of liveness detection require the user to blink, move their eyes, say a few words, or nod their head. Unfortunately, these basic forms of liveness can be spoofed with a deepfake video. A more sophisticated way to trick the system uses a regular photo that is quickly animated by software and turned into a lifelike avatar of the fraud victim. The attack enables on-command facial movements (blink, nod, smile, etc.) that look far more convincing to the camera than a lifeless photo. Still, requiring returning users to capture a fresh selfie and re-establish "liveness" makes it virtually impossible for fraudsters to take over existing accounts.
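One common defensive pattern is an active, randomized challenge with a short expiry, which forces an attacker to render a matching deepfake in near-real time rather than replaying a prepared clip. The sketch below is a hypothetical server-side flow; analyze_video_for_action is a stand-in for whatever liveness model actually scores the video:

```python
# Hypothetical challenge-response liveness flow. A random, short-lived,
# one-time challenge raises the bar against replayed deepfake clips.
import random
import secrets
import time

CHALLENGES = ["blink twice", "turn head left", "smile", "nod"]
TTL_SECONDS = 30
pending = {}  # nonce -> (challenge, issued_at)

def analyze_video_for_action(video_bytes: bytes, challenge: str) -> bool:
    # Hypothetical: a real system would run a trained liveness model here.
    raise NotImplementedError("plug in a liveness-detection model")

def issue_challenge() -> tuple[str, str]:
    nonce = secrets.token_urlsafe(16)
    challenge = random.choice(CHALLENGES)
    pending[nonce] = (challenge, time.time())
    return nonce, challenge

def verify(nonce: str, video_bytes: bytes) -> bool:
    entry = pending.pop(nonce, None)  # one-time use prevents replay
    if entry is None:
        return False
    challenge, issued_at = entry
    if time.time() - issued_at > TTL_SECONDS:
        return False  # expired: attacker had time to render a response
    return analyze_video_for_action(video_bytes, challenge)
```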
How to Arm Yourself Against Deepfakes
Unfortunately, there are no commercially available packages that can automatically detect deepfakes for all applications and all usage models, despite efforts from universities, online platforms, and tech giants to combat the threat.
As research continues, policymakers should develop legislation to discourage deepfake usage and penalize those who are caught manipulating videos to deceive others as a form of fraud. Tech companies also play a role. Facebook has strengthened its policy toward manipulated media to combat misinformation by adding removal criteria for deepfakes and the like. The company said it will now remove media which "has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren't apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say." These are the types of steps we need to maintain the integrity of online content and build greater online trust, but we should not assume that a Facebook policy will stop cybercriminals from creating deepfakes to bypass biometric-based verification solutions.
There will always be an arms race between hackers and cybersecurity engineers. Organizations should stay on the leading edge, adopting the latest technologies available on the market to guard against harm.