Deepfakes are increasingly concerning because they use AI to imitate human activities and can be used to augment social engineering attacks.

Derek Manky, Chief Security Strategist & VP Global Threat Intelligence, FortiGuard Labs

December 7, 2021

Cybercrime has risen precipitously this year. From July 2020 to June 2021, we found an almost 11x increase in ransomware attacks, and that number continues to grow. But the next challenge is about much more than the rising number of attacks. We're also seeing an increase in attacks on high-profile targets and the rise of new methodologies.

Deepfakes and Deep Attacks
Deepfakes, which really started to gain prominence in 2017, have largely been popularized for entertainment purposes: think of the social media memes inserting Nicolas Cage into movies he never appeared in, or the recent Anthony Bourdain documentary, which used deepfake technology to emulate the late celebrity chef's voice. There have also been beneficial uses of deepfake technology in the medical field.

Unfortunately, the maturing of deepfake technology hasn't gone unnoticed by the bad guys. In the cybersecurity world, deepfakes are an increasing cause for concern because they use artificial intelligence to imitate human activities and can be used to augment social engineering attacks.

GPT-3 (Generative Pre-trained Transformer 3) is an AI system that uses deep learning to produce text that reads naturally and is quite convincing. An attacker who has appropriated an email account, whether by compromising a mail server or running a man-in-the-middle attack, can use it to generate emails and replies that mimic the writing style, word choice, and tone of the person being impersonated, such as a manager or executive, even down to references to previous correspondence.
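
To make the risk concrete, here is a minimal sketch of how little code such text generation requires. It uses the small, open GPT-2 model via the Hugging Face transformers library as a stand-in for GPT-3, which is only available through a hosted API; the seed text is invented for illustration.

```python
# Continue a seed snippet in the tone it was written in.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Seed the model with text in the style being imitated (invented example).
seed = (
    "Hi team, quick follow-up on yesterday's call. As discussed, "
    "please prioritize the vendor payment and"
)
result = generator(seed, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

A model this small won't fool anyone for long, but larger models produce continuations that are hard to distinguish from the real correspondent's writing.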

Tip of the Iceberg
Creating emails is only the beginning. Software tools that can clone someone's voice already exist online, with more in development. A vocal fingerprint can be created from just a few seconds of someone's audio, after which the software can generate arbitrary speech in that voice in real time.

Deepfake video, though still in early-stage development, will become problematic as central processing unit (CPU) and graphics processing unit (GPU) performance becomes both more powerful and cheaper. The commercialization of advanced applications will also lower the bar for creating deepfakes, which could eventually enable real-time impersonation over voice and video applications capable of passing biometric analysis. The possibilities are endless, up to and including the elimination of voiceprints as a form of authentication.

Counterfit, an open source tool released by Microsoft earlier this year, is a sign of hope. It enables organizations to pen-test AI systems, including facial recognition, image recognition, and fraud detection, to ensure that the algorithms being used are trustworthy, and it can also be used for red team/blue team wargaming. Of course, we can expect attackers to use the same tool to identify vulnerabilities in AI systems.
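
For a sense of what such a pen test involves, here is a hedged sketch using the open source Adversarial Robustness Toolbox (ART), one of the adversarial ML libraries that tools like Counterfit build on; it is illustrative, not Counterfit itself. A scikit-learn digits classifier stands in for the AI system under test.

```python
# Craft evasion inputs against a simple classifier and measure the damage.
# Requires: pip install adversarial-robustness-toolbox scikit-learn
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train a toy image classifier to stand in for the system under test.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so ART can compute gradients, then generate
# adversarial examples with the Fast Gradient Method.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X[:200])

print(f"accuracy on clean inputs:       {model.score(X[:200], y[:200]):.2f}")
print(f"accuracy on adversarial inputs: {model.score(X_adv, y[:200]):.2f}")
```

A sharp drop in accuracy on the perturbed inputs is exactly the kind of weakness this testing is meant to surface before an attacker finds it.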

Taking Action Against Deepfakes
As these proof-of-concept technologies become mainstream, security leaders will need to change how they detect and mitigate attacks. That will certainly mean fighting fire with fire: if the bad guys are using AI as part of their offense, defenders must use it too. One example is leveraging AI technologies that can detect the minor voice and video anomalies deepfakes leave behind.
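
As a toy illustration of the voice side, and not a production deepfake detector, the sketch below flags a clip whose acoustic profile drifts too far from an enrolled sample of the claimed speaker. It assumes the librosa audio library; the file names and the threshold are placeholders.

```python
# Compare MFCC statistics of an incoming clip against an enrolled sample.
# Requires: pip install librosa numpy
import numpy as np
import librosa

def voiceprint(path: str) -> np.ndarray:
    """Summarize a recording as the mean of its MFCC frames."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

reference = voiceprint("known_speaker.wav")  # enrolled sample (placeholder)
incoming = voiceprint("suspect_call.wav")    # clip to verify (placeholder)

# A large distance between the two summaries is a crude anomaly signal.
distance = np.linalg.norm(reference - incoming)
print(f"voiceprint distance: {distance:.2f}")
if distance > 25.0:  # illustrative threshold; tune on real data
    print("flag for review: voice deviates from enrolled sample")
```

Real detectors use learned embeddings and far richer features, but the principle is the same: score each communication against what the genuine speaker actually sounds like.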

Our best defenses today are zero-trust access, which restricts users and devices to a predefined set of assets; segmentation; and integrated security strategies designed to detect and limit the impact of an attack.

We'll also need to revamp end-user training to cover how to detect suspicious or unexpected requests arriving via voice or video, not just those coming by email. And for spoofed communications that carry embedded malware, enterprises will need to monitor traffic for payloads, which means deploying devices fast enough to inspect streaming video without degrading the user experience.
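
As a rough sketch of payload inspection under those constraints, the following scans a stream chunk by chunk against a YARA signature using the yara-python package. The rule and the capture file are placeholders, and a real inspection engine would also handle signatures that straddle chunk boundaries.

```python
# Scan a byte stream in fixed-size chunks against YARA rules.
# Requires: pip install yara-python
import yara

# Illustrative rule only: flags the "MZ" magic often found in PE payloads.
rules = yara.compile(source=r'''
rule embedded_pe_marker
{
    strings:
        $mz = "MZ"
    condition:
        $mz
}
''')

def scan_stream(chunks):
    """Yield (chunk index, matches) for each chunk that trips a rule."""
    for i, chunk in enumerate(chunks):
        matches = rules.match(data=chunk)
        if matches:
            yield i, matches

# Treat a captured file as a stand-in for live traffic (placeholder path).
with open("capture.bin", "rb") as f:
    stream = iter(lambda: f.read(65536), b"")
    for index, matches in scan_stream(stream):
        print(f"chunk {index}: {[m.rule for m in matches]}")
```

The chunked approach matters: inspecting the stream as it arrives, rather than buffering entire videos, is what keeps latency low enough not to disrupt the call.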

Fight the Deepfakes Now
Just about every technology becomes a double-edged sword, and AI-powered deepfake software is no exception. Malicious actors are already using AI in a number of ways — and this will only expand. In 2022, watch for them to use deepfakes to mimic human activities and pull off enhanced social engineering attacks. By implementing the recommendations above, organizations can take proactive steps to stay secure even with the advent of these sophisticated attacks.

About the Author

Derek Manky

Chief Security Strategist & VP Global Threat Intelligence, FortiGuard Labs

As Chief Security Strategist & VP Global Threat Intelligence at FortiGuard Labs, Derek Manky formulates security strategy, drawing on more than 15 years of cybersecurity experience. His ultimate goal is to make a positive impact in the global fight against cybercrime. Manky provides thought leadership to the industry and has presented research and strategy worldwide at premier security conferences. As a cybersecurity expert, his work has included meetings with leading political figures and key policy stakeholders, including law enforcement, who help define the future of cybersecurity. He is actively involved with several global threat intelligence initiatives, including the NATO NICP, the Interpol Expert Working Group, the Cyber Threat Alliance (CTA) working committee, and FIRST, all in an effort to shape the future of actionable threat intelligence and proactive security strategy.
