
6/13/2019
09:00 AM
Alex Wawro, Special to Dark Reading

Black Hat Q&A: Defending Against Cheaper, Accessible ‘Deepfake’ Tech

ZeroFox's Matt Price and Mike Price discuss their work researching cybersecurity responses to the rising tide of 'deepfake' videos.

The tools and techniques to create false videos via AI-driven image synthesis are getting easier to access every year, and few people know that better than ZeroFox’s Matt Price and Mike Price (not related). In an email interview with Black Hat's Alex Wawro, the pair of security experts shared their latest research, which will be presented at Black Hat USA in Las Vegas this summer.

Alex: Why are 'deepfakes' important?

Matt: For me personally, I think deepfakes are important because of their potential to change political discourse, and just public discourse in general. We've already seen evidence of this, not even with deepfakes, but with people splicing videos and slowing them down. I think deepfakes have a lot of potential to do some good, especially when you think about movies and special effects, but they also have a lot of potential to cause problems.

Mike: Long story short, here at ZeroFox we do a lot of work in terms of analyzing content for security-related issues. We started off as a social media security company, and when I arrived here four or five years ago, most of what we were doing was 'Hey, is there something bad in this text? Or is there something bad in this image?' So that brought us to the question -- what about video?

A couple years ago, when deepfakes appeared on the scene, our research team organically took interest in the topic and we started looking into how they're created, and how we can develop protections against them. I've been working with Matt to really round out not just the offensive parts but also the defensive part: how do you detect these things, and do something against them?

Alex: How good is deepfake tech right now, and how quickly do you think it will pose a significant threat to security systems?

Mike: The research that's been done by other folks, and the work that we've done in understanding what's going on out there, suggests that the tools and the resources required to produce deepfakes are much lower-cost now. Previously, stuff like this didn't really exist outside of Hollywood studios where they needed to synthesize a person's image. But now you have these tools where anybody can download an open-source package and produce a fake video clip pretty quickly. So the cost has been brought down a ton, the complexity has been brought down a ton, and those are really the main risk factors.

As far as quality goes, from what we've seen there's still a lot of work going on to really perfect this stuff; you have a lot of little hiccups with regards to, for example, getting a variety of different videos, jumping through all kinds of hoops to get the right kinds of source images, and so on. So there are still a lot of hurdles to producing deepfakes that are really dynamic, with many people in the video moving around and changing positions. You see mostly short clips of a single person looking forward; there are still some limitations to what's easily accomplished with this tech.
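The limitations Mike describes are exactly what early detectors latched onto. One artifact flagged in published research (not ZeroFox's own method) is an implausibly low eye-blink rate in synthesized faces, since training sets rarely contain closed-eye frames. A minimal sketch of that heuristic, assuming some upstream facial-landmark detector has already produced a per-frame eye-aspect-ratio (EAR) signal; all names and thresholds here are illustrative:

```python
def count_blinks(ear_values, closed_thresh=0.21, min_closed_frames=2):
    """Count blinks in a sequence of per-frame eye-aspect-ratio (EAR) values.

    A blink is a run of at least `min_closed_frames` consecutive frames
    where the EAR drops below `closed_thresh` (eyes closed).
    Thresholds are illustrative, not tuned values.
    """
    blinks = 0
    closed_run = 0
    for ear in ear_values:
        if ear < closed_thresh:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    # Close out a blink that runs to the end of the clip.
    if closed_run >= min_closed_frames:
        blinks += 1
    return blinks


def looks_synthetic(ear_values, fps=30.0, min_blinks_per_minute=5.0):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(ear_values) / fps / 60.0
    if minutes == 0:
        return False
    return count_blinks(ear_values) / minutes < min_blinks_per_minute
```

Heuristics like this are brittle on their own -- generators quickly learn to blink -- which is why, as the interview notes, tooling on both sides keeps improving.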

But there's a lot of work going on. The tooling seems to be getting better and better, and people are doing a lot of exploration of different algorithms that may be able to produce better results with less input. So that's where things stand today. And as far as people using it for nefarious purposes, mostly we're seeing lots of proof-of-concept videos out there. Nicolas Cage is the guinea pig for a lot of the work being done, and then you see some political examples -- like the Obama video.

Alex: Why did you feel it was important to give this talk at Black Hat, and what do you hope attendees will get out of it?

Mike: A lot of people have asked about this subject; I know that in the federal space there are a lot of people thinking about whether this will be an issue in the future. So there's lots of questions in the air about what deepfake technology is, how it works, how real it can be, that sort of thing. We want to explain all that, and then walk you through what your options are for detecting deepfakes and doing something about it.

Matt: To piggyback off that, I'm mainly interested in the detection side, and I think this talk is important because I've seen some quite sensationalist headlines saying there is no solution to deepfakes, which isn't true. There are methods out there right now to detect deepfakes; DARPA is actually investing heavily in this area as well. So that's kind of the point, for me. We can detect deepfakes. There are tools to do it; this is just a security problem like any other.
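To make "a security problem like any other" concrete: most published detectors score individual frames and then aggregate those scores into a video-level verdict. A minimal sketch of that aggregation step, assuming a hypothetical upstream classifier has already produced a per-frame probability that the frame is synthesized (the classifier itself is out of scope here, and the defaults are illustrative):

```python
def video_fake_score(frame_probs, top_fraction=0.1):
    """Aggregate per-frame 'fake' probabilities into one video-level score.

    Averaging only the most suspicious frames (the top `top_fraction`) is
    more robust than a plain mean, since a manipulated video may contain
    many untouched frames that would dilute the signal.
    """
    if not frame_probs:
        raise ValueError("need at least one frame probability")
    k = max(1, int(len(frame_probs) * top_fraction))
    worst = sorted(frame_probs, reverse=True)[:k]
    return sum(worst) / k


def classify_video(frame_probs, threshold=0.5):
    """Return True if the aggregated score crosses the decision threshold."""
    return video_fake_score(frame_probs) >= threshold
```

The design choice here -- averaging only the top-scoring frames -- reflects the point made earlier in the interview: current deepfakes are mostly short clips of a single forward-facing subject, so the manipulated frames tend to cluster rather than span the whole video.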

Alex: What are you hoping to get out of Black Hat this year?

Matt: I'm really interested in some of the developments in neural networks and their applications to cybersecurity problems. My role at ZeroFox is mainly to run our data science program, so I'm always interested in the latest tech on that front, and neural networks seem to be one of the hot topics for solving problems that traditionally we've had issues solving.

For more information about the ZeroFox deepfake Briefing and many others, check out the Black Hat USA Briefings page, which is regularly updated with new content as we get closer to the event. Black Hat USA returns to the Mandalay Bay in Las Vegas August 3-8, 2019. For more information on what's happening at the event and how to register, check out the Black Hat website.
