Dark Reading is part of the Informa Tech Division of Informa PLC



Ian Cruxton

Phishing Today, Deepfakes Tomorrow: Training Employees to Spot This Emerging Threat

Cybercriminals are evolving their tactics, and the security community expects voice and video fraud to play a role in one of the next big data breaches -- so start protecting your business now.

Deepfake fraud is a new, potentially devastating threat to businesses. Last year, a top executive at an unidentified energy company was conned into paying £200,000 to scammers who used artificial intelligence to replicate his boss's voice. He answered a telephone call he believed came from his German parent company, was asked to transfer funds, and dutifully sent the money to what he presumed was the parent company. In reality, the funds were stolen by sophisticated criminals at the forefront of what I believe is a frightening new age of deepfake fraud. Although this was the first reported case of this kind of fraud in the UK, it certainly won't be the last.

Recently, a journalist paid just over $550 to develop his own deepfake, placing the face of Lieutenant Commander Data from Star Trek: The Next Generation over Mark Zuckerberg's. It took only two weeks to develop the video.

When the Enterprise Evolves, the Enemy Adapts
We're no strangers to phishing emails in our work inboxes. In fact, many of us have received mandatory training and warnings about how to detect them — the tell-tale signs of spelling errors, urgency, unfamiliar requests from "colleagues," or slightly unusual sender addresses. But fraudsters know that established phishing techniques won't remain effective much longer. They also understand the large potential gains from gathering intelligence from corporations using deepfake technology — a mixture of video, audio, and email messaging — to extract confidential employee information under the guise of the CEO or CFO.

Deepfake technology is still in its early days, but even in 2013, it was powerful enough to make an impact. While serving at the National Crime Agency (NCA) in the UK, I saw how a Dutch NGO pioneered the technology, creating a deepfake of a 10-year-old girl that helped identify thousands of child sex offenders around the globe. In that case, AI video deepfake technology was deployed by a humanitarian-focused organization for the purpose of fighting crime.

But as the technology evolves, we're seeing how much of the activity around deepfakes involves unlawful and criminal applications — many of which carry seriously detrimental financial and reputational consequences. As more businesses educate their employees to detect and thwart traditional phishing and spearphishing attacks, it's not difficult to see how fraudsters may instead turn to deepfake technology to execute their schemes.

How Deepfakes Will Thrive in the Modern Workplace
With the sheer number of jobs requiring employees to be online, it's critical that workforces are educated and given the tools to detect, refute, and protect against deepfake attacks and fraudulent activity in the workplace. It's not difficult to see why corporate deepfake detection in particular is so crucial: Employees are by nature often eager to satisfy the requests of their seniors, and to do so with as little friction as possible.

The stakes are raised even further when considering how large teams, remote workers, and complex hierarchies make it even more difficult for employees to distinguish between a colleague's "status quo" and an unusual request or attitude. Add into that equation the fast-tempo demands to deliver through agile working methodologies, and it is easy to see how a convincingly realistic video request from a known boss to transfer funds could attract less scrutiny from an employee than a video from someone they know less well.

A New Era of Employee Security Training
Companies must empower employees to question and challenge requests that are deemed to be unusual, either because of the atypical action demanded or the out-of-character manner or style of the person making the request. This can be particularly challenging for organizations with very hierarchical and autocratic leadership that does not encourage or respect what it perceives as challenges to its authority. Fortunately, some business owners and academics are already looking into ways to solve the issue of detecting deepfakes.

Facebook, for instance, announced the launch of the Deepfake Detection Challenge in partnership with Microsoft and leading academics in September last year, and lawmakers in the US House of Representatives recently passed legislation to combat deepfakes. But there is much to be done quickly if we are to stay ahead of the fraudsters.

If organizations can no longer take for granted the identity of an email sender or the person at the other end of a phone call, they must develop programs and protocols for training employees to override their natural inclination to assume that any voice caller or video subject is real, and instead consider that a fraudster may be leveraging AI and deepfake technology to spoof the identities of their colleagues.

Cybercriminals are constantly evolving their tactics and broadening their channels, and the security community expects voice and video fraud to play a role in one of the next big data breaches. So start protecting your business sooner rather than later.


After nearly 35 years in law enforcement, Ian Cruxton joined the private sector as CSO of Callsign, an identity fraud, authorization, and authentication company. While at the National Crime Agency (NCA), he led 7 of the 12 organized crime threats and regularly briefed the ...
