OWASP Beefs Up GenAI Security Guidance Amid Growing Deepfakes

As businesses worry over deepfake scams and other AI attacks, OWASP is adding guidance for cybersecurity teams on how to detect and respond to next-generation threats. Among the companies already targeted is Exabeam, which recently interviewed a deepfaked job candidate.

Deepfakes and other generative artificial intelligence (GenAI) attacks are becoming more common, and signs point to a coming onslaught: AI-generated text is already showing up more often in email, and security firms are finding ways to detect messages likely not written by humans. Text attributed to large language models (LLMs) now accounts for about 12% of all email, up from roughly 7% in late 2022, leaving human-written messages at about 88%, according to one analysis.

To help organizations develop stronger defenses against AI-based attacks, the Top 10 for LLM Applications & Generative AI group within the Open Worldwide Application Security Project (OWASP) released a trio of guidance documents for security organizations on Oct. 31. To its previously released AI cybersecurity and governance checklist, the group added a guide for preparing for deepfake events, a framework for creating AI security centers of excellence, and a curated database of AI security solutions.

While the previous Top 10 guide is useful for companies building models and creating their own AI services and products, the new guidance is aimed at the users of AI technology, says Scott Clinton, co-project lead at OWASP.

Those companies "want to be able to do AI safely with as much guidance as possible — they're going to do it anyway, because it's a competitive differentiator for the business," he says. "If their competitors are doing it, [then] they need to find a way to do it, do it better ... so security can't be a blocker, it can't be a barrier to that."

One Security Vendor's Job Candidate Deepfake Attack

In an example of the kinds of real-world attacks that are now happening, a job candidate at security vendor Exabeam had passed all the initial vetting and moved on to the final interview round. That's when Jodi Maas, GRC team lead at the company, recognized that something was wrong.

While the human resources group had flagged the initial interview for a new senior security analyst as "somewhat scripted," the actual interview started with normal greetings. Yet, it quickly became apparent that some form of digital trickery was in use. Background artifacts appeared, the female interviewee's mouth did not match the audio, and she hardly moved or expressed emotion, says Maas, who runs application security and governance, risk, and compliance within Exabeam's security operations center (SOC).

"It was very odd — just no smile, there was no personality at all, and we knew right away that it was not a fit, but we continued the interview, because [the experience] was very interesting," she says.

After the interview, Maas approached Exabeam's chief information security officer (CISO), Kevin Kirkwood, and they concluded it had been a deepfake based on similar video examples. The experience shook them enough that they decided the company needed better procedures in place to catch GenAI-based attacks, embarking on meetings with security staff and an internal presentation to employees.

"The fact that it got past our HR group was interesting. ... They passed them through because they had answered all the questions correctly," Kirkwood says.

After the deepfake interview, Exabeam's Kirkwood and Maas started revamping their processes, following up with their HR group, for example, to let them know to expect more such attacks in the future. For now, the company advises its employees to treat video calls with suspicion. (Half-jokingly, Kirkwood asked me to turn on my video midway through the interview as proof of humanness. I did.)

"You're going to see this more often now, and you know these are the things you can check for, and these are the things that you will see in a deepfake," Kirkwood says.

Technical Anti-Deepfake Solutions Are Needed

Deepfake incidents are capturing the imagination — and fear — of IT professionals, with about half (48%) very concerned over deepfakes at present, and 74% believing deepfakes will pose a significant future threat, according to a survey conducted by email security firm Ironscales.

The trajectory of deepfakes is quite easy to predict — even if they are not good enough to fool most people today, they will be in the future, says Eyal Benishti, founder and CEO of Ironscales. That means that human training will likely only go so far. AI videos are getting eerily realistic, and a fully digital twin of another person controlled in real time by an attacker — a true "sock puppet" — is likely not far behind.

"Companies want to try and figure out how they get ready for deepfakes," he says. "The are realizing that this type of communication cannot be fully trusted moving forward, which ... will take people some time to realize and adjust."

In the future, as the telltale artifacts disappear, better technical defenses will be necessary, Exabeam's Kirkwood says.

"Worst case scenario: The technology gets so good that you're playing a tennis match — you know, the detection gets better, the deepfake gets better, the detection gets better, and so on," he says. "I'm waiting for the technology pieces to catch up, so I can actually plug it into my SIEM and flag the elements associated with deepfake."

OWASP's Clinton agrees. Rather than focusing on training humans to detect suspect video chats, companies should build infrastructure for authenticating that a chat participant is a real human who is also an employee, establish processes around financial transactions, and create an incident-response plan, he says.

"Training people on how to identify deepfakes — that's not really practical, because it's all subjective," Clinton says. "I think there have to be more unsubjective approaches, and so we went through and came up with some tangible steps that you can use, which are combinations of technologies and process to really focus on a few areas."

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
