Dark Reading is part of the Informa Tech Division of Informa PLC



Digital Clones Could Cause Problems for Identity Systems

Three fundamental technologies -- chatbots, audio fakes, and deepfake videos -- have improved to the point that creating digital, real-time clones of people is merely a matter of integrating the systems.

The fundamental technologies for creating digital clones of people — text, audio, and video that sound and look like a specific person — have rapidly advanced and are within striking distance of a future in which digital avatars can sound and act like specific people, Tamaghna Basu, co-founder and chief technology officer of neoEYED, a behavioral analytics firm, told attendees at the virtual Black Hat conference on Aug. 6.

While deepfake videos that superimpose a 3D model of a specific person over another person's face have raised fears of propaganda videos, disinformation operations, and smear campaigns, successful digital clones could cause even more problems, especially for systems that use voice or facial recognition for access management, or as a way to fool employees into accepting someone's identity. While the current results of Basu's experiment show numerous telltale signs that the subject is not human, the project's relative success demonstrates how close we may be to convincingly simulating real people.

"As you can clearly see, there is a gap, but this gap is about making the voice more convincing, making the facial expressions have more emotion, those are on the road map to be done," he told attendees during his presentation. "The ultimate goal that I have, [building] an alternate [version of me] that can have a conversation over text, voice, and video," seems achievable.

Inspired by futuristic shows such as Black Mirror, Basu decided to attempt to construct a digital clone of himself using three existing technologies: chatbots, audio synthesis, and deepfake videos. The effort is less about original research and more about stitching together a variety of technologies. While the video version of his digital clone is choppy and the voice sounds synthetic, several friends who conversed with the chatbot version of his model thought he might be feeding the answers to the machine.

Such believable personalization suggests that, depending on how close two people are, a digital clone could fool one into thinking it's the other person, he said.

"Our objective was to get a positive Turing test, to convince them it is really me," he said in a Dark Reading interview, adding: "One of the scariest parts is that if you have 100 friends on your Facebook, honestly speaking, there are very few relationships where people are very personal. So, the real problem is that it is easy to fake the relationship."

The technology could spell trouble for identity verification technologies, he added. Basu's company uses analytics to create behavioral profiles of people to protect identities — one reason why he decided to take an adversarial strategy and try to use behavioral profiles to create a clone. Digital clones that not only look and sound like another person but also have mannerisms and patterns of speaking that are similar to the subject will make social engineering easier.

At a high level, the technology is broken up into three parts, which Basu called the brain, the voice, and the face. The brain is a text chatbot engine that attempts to hold an interactive chat using natural language processing. There are a variety of approaches to chatbots that can produce reasonable functionality, depending on the type of conversation. Limited-domain conversations, such as small talk and requests for specific information, can often be handled by rule-based systems.
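Basu did not publish his bot's internals; as a rough sketch of what a rule-based, limited-domain chatbot of the kind described above looks like, the following maps regular-expression patterns to canned responses and falls back to a generic prompt. The patterns and replies are purely illustrative.

```python
import re

# Illustrative pattern -> response rules for small talk; a real system
# would derive these from the subject's own conversational habits.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hey! How are you doing?"),
    (re.compile(r"\bhow are you\b", re.I), "Doing well, thanks. You?"),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Talk to you later!"),
]

FALLBACK = "Tell me more about that."

def reply(message: str) -> str:
    """Return the first matching canned response, or a fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK
```

Rule-based bots like this stay coherent only within their narrow domain, which is why, as noted above, they suit small talk better than open-ended conversation.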

Using a variety of chat histories from a specific person, you can train such bots to use the same type of language as that person, he said during the presentation. "The brain is the engine which is the crux of the entire project. It knows what kind of questions to ask and how to answer those questions."
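There are many ways to mine chat history for a person's speaking style; as a toy illustration (not Basu's method), one can tally a person's most frequent words from their chat logs, a "style profile" that could then bias a bot's word choices. The function name and data are hypothetical.

```python
import re
from collections import Counter

def style_profile(chat_lines, top_n=5):
    """Count the most common words across a person's chat history.

    A toy stand-in for training a bot to mirror someone's habitual
    phrasing; real systems would fine-tune a language model instead.
    """
    words = []
    for line in chat_lines:
        words.extend(re.findall(r"[a-z']+", line.lower()))
    return Counter(words).most_common(top_n)
```

For example, `style_profile(["dude that's wild", "dude no way"], top_n=1)` would surface "dude" as a signature word to reuse in generated replies.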

Using an open source chatbot library known as Rasa, Basu created a system that could make small talk and hold conversations. Basu also used audio synthesis software and 500 samples of his voice, averaging 10 seconds each, to train the machine learning process. Better audio cloning will require as much as 10 hours of recordings. He is also experimenting with accents.
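Basu's exact Rasa configuration was not shared; for flavor, Rasa (in its 2.x YAML format) expects intent training data along these lines, where the intent names and examples here are purely illustrative and, in the scenario described, would be drawn from the subject's own chat history.

```yaml
# Hypothetical Rasa NLU training data for two small-talk intents.
version: "2.0"
nlu:
- intent: greet
  examples: |
    - hey
    - hello there
    - what's up
- intent: ask_mood
  examples: |
    - how are you
    - how's it going
```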

For the face, Basu wanted to generate video in near-real time, with mouth movements matching the words. Overall, such identity attacks appear feasible and at this point merely require refinement, he said.

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline ...
