Detecting doctored media has become tricky — and risky — business. Here’s how organizations can better protect themselves from fake video, audio, and other forms of content.
The idea that artificial intelligence (AI) can help create video, audio, and other media that can’t be easily separated from “real” media is the stuff of dystopian science fiction and filmmakers’ dreams. But that’s what deepfakes are all about. Pundits and security analysts have spent hundreds of thousands of words worrying about the dangers deepfakes pose to democracy, but what about the dangers they pose to the enterprise?
“The concern that I would have for the enterprise is that the sophistication of existing deepfake technologies is certainly beyond most humans’ threshold for being tricked by fake imagery,” says Jennifer Fernick, chief researcher for the NCC Group.
Images and words that go beyond the human recognition threshold can be used for purposes as “prosaic” as highly effective spear-phishing campaigns, she says. The problem is also growing because deepfake technology keeps getting better while our ability to detect deepfakes does not.
“The current machine-based defenses don’t solve all of our problems,” she explains.
As an example of how difficult the deepfake problem is to solve, Fernick points to last year’s Deepfake Detection Challenge, a Kaggle data science competition. More than 2,200 teams participated and, according to Fernick, approximately 35,000 detection models were submitted, yet the best model could detect a deepfake less than two-thirds of the time.
Criminal applications for this difficult-to-detect technology are becoming more varied.
“Now you’re getting a voicemail message that sounds just like your boss. She’s mad and she wants you to wire the money now,” says Tom Pendergast, chief learning officer at MediaPro. “The urgency in her voice — and you’re sure it’s her voice — overwhelms your caution, and you send the money. And now you’ve been duped.”
‘Don’t Always Believe What You See’
So with detection beyond the ability of humans and out of reach for most technologies, what can an organization do to be safe from deepfakes?
“Moving forward, the best way to defend against deepfakes is to hold the platforms that host deepfakes and make them available to the public accountable and responsible for them,” says Joseph Carson, chief security scientist at Thycotic. “If a post lacks any type of trusted source or context, the content should carry clear labeling that tells the viewer whether the source has been verified, is still being analyzed, or the content has been significantly modified.”
In the absence of clear labeling of a media source, properly trained employees are critical cogs in the deepfake security machinery. Chris Hauk, consumer privacy champion at Pixel Privacy, says it begins with basic media literacy.
“Don’t always believe what you see. Videos from questionable sources are always to be taken with a grain of your favorite salt-free substitute,” he explains. “If a video or photo is not from an established media source, investigate it by consulting other sources.”
Hank Schless, senior manager, security solutions at Lookout, says employee training should be updated to take both the new realities of work and new deepfake threats into account.
“Audio and social media deepfakes start with social interaction, and you need to train your employees on how to identify these suspicious activities,” he says. “The best first step is to make sure your security training includes identifying modern tactics like deepfakes and mobile phishing – especially while people work remotely. Since we can’t walk down the hall to validate communication from a co-worker, encourage your employees to reach out over different channels.”
As an example, he suggests sending a message through a collaboration system to verify that an unusual phone call was legitimate.
But the risks from deepfakes don't only extend to employees receiving them. As in the previously mentioned scenario, the possibility exists that a threat actor could create a deepfake of a corporate executive saying or doing something detrimental to the organization's success.
"Amateur deepfake videos have improved significantly in the past few years on commodity sub-$500 video-gaming GPU hardware," says Chris Clements, vice president of solutions architecture at Cerberus Sentinel. "Audio can be even more convincing."
He reminds us, though, that even amateur deepfake videos require substantial training sets to look realistic. Organizations whose executives are frequent public speakers can therefore have a much higher risk profile in this scenario.
"If a large number of high-quality video and audio data of an executive does exist, say from giving multiple public talks, it can be used to create convincing deepfakes," Clements says.
There are a few "tells" of deepfake videos today, he says.
"These include a noticeable lack of blinking by the subject, as well as a smearing effect around the edges of the face or hair. Shadows looking 'off' are another common shortfall of current deepfake technology," Clements explains. "These tells are going to get harder to spot as the technology and compute power of the hardware improves, however."
Many examples of "nonconsensual images" in which celebrity faces have been used in pornographic videos have already surfaced. It's no stretch to believe that corporate figures could find themselves in other sorts of nonconsensual images that could be just as damaging, if perhaps less graphic.
The ultimate solution may be the application of techniques taken from cryptography, Fernick says.
"I think if we have robust ways of authenticating that given video stream or audio stream has come from a specific device or been uploaded a certain time by a certain user, that may be helpful in disambiguating some of this and in offering journalists and news organizations ways of ensuring some level of quality or robustness or integrity of the content that they would be sharing with viewers," she says.
And doing this sort of authentication at scale will almost certainly give rise to a new form of cloud service offering, Fernick suggests.
"If there's anything that we can learn from cryptography, we always say, 'Don't roll your own crypto,'" she says. "[Ultimately] it becomes a really complex technical question with a few layers. But I imagine that it's something that could be well-served within the open source community and perhaps by a company that scales it up."