
Edge Articles | 9/17/2020 06:40 PM
Curtis Franklin Jr.

Defending Against Deepfakes: From Tells to Crypto

Detecting doctored media has become tricky -- and risky -- business. Here's how organizations can better protect themselves from fake video, audio, and other forms of content.


But the risks from deepfakes aren't limited to employees receiving them. As in the scenario described earlier, a threat actor could create a deepfake of a corporate executive saying or doing something detrimental to the organization's success.

"Amateur deepfake videos have improved significantly in the past few years on commodity sub-$500 video-gaming GPU hardware," says Chris Clements, vice president of solutions architecture at Cerberus Sentinel. "Audio can be even more convincing."

He notes, though, that even amateur deepfake videos require substantial training sets to look realistic. Organizations with executives who are frequent public speakers therefore have a much higher risk profile in this scenario.

"If a large number of high-quality video and audio data of an executive does exist, say from giving multiple public talks, it can be used to create convincing deepfakes," Clements says.

There are a few "tells" of deepfake videos today, he says.

"These include a noticeable lack of blinking by the subject, as well as a smearing effect around the edges of the face or hair. Shadows looking 'off' are another common shortfall of current deepfake technology," Clements explains. "These tells are going to get harder to spot as the technology and compute power of the hardware improves, however."

Many examples of "nonconsensual images" in which celebrity faces have been used in pornographic videos have already surfaced. It's no stretch to believe that corporate figures could find themselves in other sorts of nonconsensual images that could be just as damaging, if perhaps less graphic.

The ultimate solution may be the application of techniques taken from cryptography, Fernick says.

"I think if we have robust ways of authenticating that given video stream or audio stream has come from a specific device or been uploaded a certain time by a certain user, that may be helpful in disambiguating some of this and in offering journalists and news organizations ways of ensuring some level of quality or robustness or integrity of the content that they would be sharing with viewers," she says.

And doing this sort of authentication at scale will almost certainly give rise to a new form of cloud service offering, Fernick suggests.

"If there's anything that we can learn from cryptography, we always say, 'Don't roll your own crypto,'" she says. "[Ultimately] it becomes a really complex technical question with a few layers. But I imagine that it's something that could be well-served within the open source community and perhaps by a company that scales it up." 

 

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and ...