
Endpoint

Digital Clones Could Cause Problems for Identity Systems

Three fundamental technologies -- chatbots, audio fakes, and deepfake videos -- have improved to the point that creating digital, real-time clones of people is merely a matter of integrating the systems.

The fundamental technologies for creating digital clones of people -- text, audio, and video that sound and look like a specific person -- have rapidly advanced, putting real-time digital avatars that can sound and act like specific people within striking distance, Tamaghna Basu, co-founder and chief technology officer of behavioral analytics firm neoEYED, told attendees at the virtual Black Hat conference on Aug. 6.

While deepfake videos that superimpose a 3D model of a specific person's face over another person's have raised fears of propaganda videos, disinformation operations, and smear campaigns, successful digital clones could cause even more problems -- especially for systems that use voice or facial recognition for access management, and for attackers seeking to fool employees into accepting a false identity. While the current results of Basu's experiment show numerous telltale signs that the subject is clearly not human, the relative success of the project demonstrates how close we may be to successfully creating simulated people.

"As you can clearly see, there is a gap, but this gap is about making the voice more convincing, making the facial expressions have more emotion, those are on the road map to be done," he told attendees during his presentation. "The ultimate goal that I have, [building] an alternate [version of me] that can have a conversation over text, voice, and video," seems achievable.

Inspired by futuristic shows such as Black Mirror, Basu decided to attempt to construct a digital clone of himself using three existing technologies: chatbots, audio synthesis, and deepfake videos. The effort is less about original research and more about stitching together a variety of technologies. While the video version of his digital clone is choppy and the voice sounds synthetic, several friends who conversed with the chatbot version of his model thought he might be feeding the answers to the machine.

Such believable personalization suggests that, depending on how close two people are, a digital clone could fool one into thinking it's the other person, he said.

"Our objective was to get a positive Turing test, to convince them it is really me," he said in a Dark Reading interview, adding: "One of the scariest parts is that if you have 100 friends in your Facebook, honestly speaking, there are very few relationships where people are very personal. So, the real problem is that it is easy to fake the relationship."

The technology could spell trouble for identity verification technologies, he added. Basu's company uses analytics to create behavioral profiles of people to protect identities — one reason why he decided to take an adversarial strategy and try to use behavioral profiles to create a clone. Digital clones that not only look and sound like another person but also have mannerisms and patterns of speaking that are similar to the subject will make social engineering easier.

At a high level, the technology is broken up into three parts, which Basu called the brain, the voice, and the face. The brain is a text chatbot engine that attempts to hold an interactive chat using natural language processing. There are a variety of approaches to chatbots that can produce reasonable functionality, depending on the type of conversation. Limited-domain conversations -- such as small talk and requests for specific information -- can often be handled with rule-based approaches.
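A rule-based small-talk bot of the kind described here can be surprisingly simple. The sketch below is purely illustrative -- the patterns and replies are invented for this example and are not drawn from Basu's project, which used more sophisticated NLP:

```python
import re

# Illustrative rules for a limited-domain small-talk bot. Each rule pairs a
# pattern with a canned reply; the first match wins.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hey! How are you doing?"),
    (re.compile(r"\bhow are you\b", re.I), "Doing well, thanks. What's up?"),
    (re.compile(r"\bwhere do you work\b", re.I), "I'm at a startup these days."),
]
FALLBACK = "Tell me more about that."

def reply(message: str) -> str:
    """Return the first matching canned reply, else a generic fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK
```

Personalizing such a bot means seeding the replies (and the fallback phrasing) with language mined from the target person's chat history, which is the step the next paragraph describes.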

Using a variety of chat histories for a specific person, you can train such bots to use the same type of language as that person, he said during the presentation. "The brain is the engine which is the crux of the entire project. It knows what kind of questions to ask and how to answer those questions."

Using an open source chatbot library known as Rasa, Basu created a system that could make small talk and hold conversations. Basu also used audio synthesis software and 500 samples of his voice averaging 10 seconds each to train the machine learning process. Better audio cloning would require as much as 10 hours of recordings, he said, and he is also experimenting with accents.
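The article's figures put the gap in perspective: 500 samples at roughly 10 seconds each is well under an hour and a half of audio, far short of the 10 hours cited for better cloning. A quick back-of-the-envelope calculation:

```python
# Training-data arithmetic from the figures in the article.
samples = 500
avg_seconds = 10

total_seconds = samples * avg_seconds            # 5,000 s of audio collected
total_minutes = total_seconds / 60               # roughly 83 minutes

target_hours = 10                                # cited for better cloning
shortfall_seconds = target_hours * 3600 - total_seconds

print(f"Collected: {total_minutes:.1f} min of audio")
print(f"Shortfall vs. {target_hours} h target: {shortfall_seconds / 3600:.2f} h")
```

In other words, the prototype was trained on less than a seventh of the data a convincing voice clone is said to need -- consistent with the "voice sounds synthetic" observation above.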

For the face, Basu wanted to generate video in near-real time, with the mouth movements matching the synthesized words. Overall, such identity attacks appear feasible and at this point merely require refinement, he said.
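Because Basu's point is that the work is mostly integration rather than new research, the whole system reduces to chaining the three stages he names. The sketch below shows that chaining with stub functions -- every function body here is a placeholder standing in for a real chatbot, TTS, and lip-sync model, not actual library calls:

```python
def brain_reply(text: str) -> str:
    """'Brain' stage: map an incoming message to a reply (stubbed chatbot)."""
    return f"echo: {text}"

def synthesize_voice(text: str) -> bytes:
    """'Voice' stage: render the reply as cloned-voice audio (stubbed TTS)."""
    return text.encode("utf-8")

def render_face(audio: bytes) -> str:
    """'Face' stage: lip-sync a face model to the audio (stubbed video)."""
    return f"<video frames for {len(audio)} audio bytes>"

def clone_turn(incoming: str) -> str:
    """One conversational turn through the brain -> voice -> face chain."""
    reply = brain_reply(incoming)
    audio = synthesize_voice(reply)
    return render_face(audio)
```

The pipeline shape also suggests where detection can focus: each hand-off (text-to-speech latency, audio-to-lip alignment) is a seam where today's clones still betray themselves.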

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline ...