
Risk | Commentary
1/16/2020 02:00 PM
Ian Cruxton

Phishing Today, Deepfakes Tomorrow: Training Employees to Spot This Emerging Threat

Cybercriminals are evolving their tactics, and the security community expects voice and video fraud to play a role in one of the next big data breaches -- so start protecting your business now.

Deepfake fraud is a new, potentially devastating issue for businesses. Last year, a top executive at an unidentified energy company was revealed to have been conned into paying £200,000 by scammers who used artificial intelligence to replicate his boss's voice. He answered a telephone call he believed came from his German parent company, was asked to transfer the funds, and dutifully sent them to what he presumed was the parent company's account. In reality, the money was stolen by sophisticated criminals at the forefront of what I believe is a frightening new age of deepfake fraud. Although this was the first reported case of its kind in the UK, it certainly won't be the last.

Recently, a journalist paid just over $550 to develop his own deepfake, placing Mark Zuckerberg's face over that of Lieutenant Commander Data from Star Trek: The Next Generation. The video took only two weeks to produce.

When the Enterprise Evolves, the Enemy Adapts
We're no strangers to phishing emails in our work inboxes. In fact, many of us have received mandatory training and warnings about how to detect them: the tell-tale signs of spelling errors, urgency, unfamiliar requests from "colleagues," or slightly unusual sender addresses. But fraudsters know that established phishing techniques won't work for much longer. They also understand how much can be gained by gathering intelligence from corporations using deepfake technology, a mixture of video, audio, and email messaging, to extract confidential employee information under the guise of the CEO or CFO.
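Many of those tell-tale signs can be expressed as simple heuristics. The following is a minimal sketch, not a production filter: the trusted domain, keyword list, and threshold are all illustrative assumptions, and a real mail gateway would use far richer signals.

```python
# Toy phishing-triage sketch scoring an email against the classic
# tell-tale signs described above. All names and thresholds here
# are illustrative assumptions, not a production-grade filter.
import re

URGENCY_WORDS = {"urgent", "immediately", "wire", "overdue", "confidential"}
TRUSTED_DOMAINS = {"example-corp.com"}  # hypothetical corporate domain

def phishing_score(sender: str, subject: str, body: str) -> int:
    score = 0
    domain = sender.rsplit("@", 1)[-1].lower()
    # Unfamiliar or look-alike sender address
    if domain not in TRUSTED_DOMAINS:
        score += 2
    # Urgency cues in subject or body
    text = f"{subject} {body}".lower()
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # Unusual request pattern: a transfer demanded on a deadline
    if re.search(r"transfer.*(today|now|by end of day)", text):
        score += 3
    return score

if __name__ == "__main__":
    s = phishing_score(
        "ceo@examp1e-corp.com",  # note the look-alike "1" in the domain
        "Urgent wire transfer",
        "Please transfer the funds today. Keep this confidential.",
    )
    print("flag for review" if s >= 4 else "pass", s)
```

The point of the sketch is that text-based cues are mechanically checkable; a convincing deepfake voice or video carries none of them, which is why the training challenge described below is harder.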

Deepfake technology is still in its early days, but even in 2013 it was powerful enough to make an impact. While serving at the National Crime Agency (NCA) in the UK, I saw how a Dutch NGO pioneered the technology, creating a deepfake of a 10-year-old girl that helped identify thousands of child sex offenders around the globe. In that case, AI video deepfake technology was deployed by a humanitarian organization for the purpose of fighting crime.

But as the technology evolves, much of the research into deepfakes concerns their unlawful and criminal applications, many of which carry seriously detrimental financial and reputational consequences. As more businesses educate their employees to detect and thwart traditional phishing and spearphishing attacks, it's not difficult to see how fraudsters may instead turn to deepfake technology to execute their schemes.

How Deepfakes Will Thrive in the Modern Workplace
With the sheer number of jobs that require employees to be online, it's critical that workforces are educated and provided with the tools to detect, challenge, and protect against deepfake attacks and fraudulent activity in the workplace. It's not difficult to see why corporate deepfake detection in particular is so crucial: Employees by nature are often eager to satisfy the requests of their seniors, and to do so with as little friction as possible.

The stakes are raised even further when considering how large teams, remote workers, and complex hierarchies make it even more difficult for employees to distinguish between a colleague's "status quo" and an unusual request or attitude. Add into that equation the fast-tempo demands to deliver through agile working methodologies, and it is easy to see how a convincingly realistic video request from a known boss to transfer funds could attract less scrutiny from an employee than a video from someone they know less well.

A New Era of Employee Security Training
Companies must empower employees to question and challenge requests that are deemed to be unusual, either because of the atypical action demanded or the out-of-character manner or style of the person making the request. This can be particularly challenging for organizations with very hierarchical and autocratic leadership that does not encourage or respect what it perceives as challenges to its authority. Fortunately, some business owners and academics are already looking into ways to solve the issue of detecting deepfakes.

Facebook, for instance, announced the launch of the Deepfake Detection Challenge in partnership with Microsoft and leading academics in September last year, and lawmakers in the US House of Representatives recently passed legislation to combat deepfakes. But there is much to be done quickly if we are to stay ahead of the fraudsters.

If organizations can no longer take for granted the identity of an email sender or the individual at the other end of the phone, they must develop programs and protocols that train employees to override their natural inclination to assume any voice caller or video subject is real, and instead to consider that a fraudster may be leveraging AI and deepfake technology to spoof the identities of their colleagues. A sketch of one such verification rule follows.
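One concrete way to operationalize that training is an out-of-band verification rule for high-risk requests. The sketch below is illustrative only: the directory, the $10,000 threshold, and the employee IDs are hypothetical assumptions, not a prescription. The core idea is simply to confirm a request over a channel independent of the one the request arrived on, since the inbound channel itself may be deepfaked.

```python
# Sketch of an out-of-band verification rule for high-risk requests.
# The directory, threshold, and IDs are hypothetical assumptions;
# the principle is: never confirm a request over the channel the
# request arrived on.
from dataclasses import dataclass

CALLBACK_THRESHOLD_USD = 10_000  # illustrative risk threshold

# Hypothetical internal directory, keyed by employee ID, holding
# phone numbers sourced independently of any inbound message.
DIRECTORY = {"emp-042": "+44 20 7946 0000"}

@dataclass
class Request:
    requester_id: str
    channel: str        # "email", "voice", "video", ...
    amount_usd: float

def verification_steps(req: Request) -> list[str]:
    steps = []
    if req.amount_usd >= CALLBACK_THRESHOLD_USD or req.channel in {"voice", "video"}:
        number = DIRECTORY.get(req.requester_id)
        steps.append(f"Call back on directory number {number}, not the inbound channel.")
        steps.append("Require a second approver before releasing funds.")
    return steps or ["Standard processing."]

# Example: a £200,000-style voice request triggers both safeguards.
print(verification_steps(Request("emp-042", "voice", 200_000)))
```

Had the energy-company executive described above been bound by a rule like this, the voice call alone could not have authorized the transfer.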

Cybercriminals are constantly evolving their tactics and broadening their channels, and the security community expects voice and video fraud to play a role in one of the next big data breaches. So start protecting your business sooner rather than later.


After nearly 35 years in law enforcement, Ian Cruxton joined the private sector as CSO of Callsign, an identity fraud, authorization, and authentication company. While at the National Crime Agency (NCA), he led the response to 7 of the 12 organized crime threats and regularly briefed the ...
 
