But generative AI's ability to strengthen security and fortify defenses can keep bad actors in check.

Mandy Andress, Chief Information Security Officer, Elastic

February 14, 2024



Cybersecurity has always been a cat-and-mouse game between the "good guys" and the "bad guys." With the growing prevalence of AI, including new forms like generative AI, the contest has only grown more intense, and it increasingly resembles a chess match: AI will serve as the powerful "queen" that can tip the game in favor of whoever wields the piece most effectively.

Cyberattacks Are More Sophisticated Than Ever

Bad actors have wasted little time finding ways to incorporate generative AI into their activities. They've been able to take their phishing efforts to a whole new level: Messages now arrive as subtle, fluid prose, devoid of spelling errors or grammatical mistakes.

A clever scammer can even "prompt" generative AI models to assume a persona to make the phishing email more convincing; for example, "Make this email sound like it's coming from the accounting department of a Fortune 500 company" or "Imitate the writing style and mannerisms of executive X." With this type of highly targeted, AI-honed phishing attack, bad actors increase their odds of stealing an employee's login credentials so they can access highly sensitive information, such as a company's financial details.

Threat actors are also developing their own malevolent versions of mainstream GPT tools. DarkGPT, for example, is able to tap into all corners of the Dark Web, making it very easy to gather information and resources that can be put to nefarious ends. There's also FraudGPT, which enables cybercriminals to create malicious code and viruses with just a few keystrokes. The result? Devastatingly efficient ransomware attacks that are easier than ever to launch, with a far lower barrier to entry.

Unfortunately, as long as these illicit activities yield results, there will continue to be bad actors who seek creative ways to use new technologies like generative AI for sinister reasons. The good news is enterprises can leverage these very same capabilities to bolster their own security postures.

Context Is Key

In the same way that DarkGPT and FraudGPT can serve up harmful resources faster than ever before, a GPT tool deployed responsibly can serve up helpful resources — providing the context needed to help thwart potential attacks and facilitate a more effective response to any threats that do get through.

For example, let's say a security professional sees some irregular activity or anomalous behavior happening in their environment, but they're not sure what the next steps are for appropriate investigation or remediation. Generative AI can very quickly pull relevant information, best practices, and recommended actions from the collective intelligence of the security field. Having this comprehensive context allows practitioners to quickly understand the nature of the attack, as well as which actions they should take in response.
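A workflow like this generally starts by packaging the alert's context into a question for the model. The sketch below is purely illustrative: the alert fields, function name, and prompt wording are assumptions for the example, not any vendor's actual schema or API.

```python
# Hypothetical sketch: assembling an anomalous event's context into a
# prompt a security analyst might send to a generative AI assistant.
# All field names and wording here are illustrative assumptions.

def build_triage_prompt(alert: dict) -> str:
    """Format alert context as a request for investigation and
    remediation guidance."""
    lines = [
        "You are assisting a SOC analyst. Given the alert below, summarize",
        "the likely nature of the activity and recommend investigation and",
        "remediation steps based on industry best practices.",
        "",
        f"Host: {alert['host']}",
        f"Rule triggered: {alert['rule']}",
        f"Observed behavior: {alert['behavior']}",
        f"Severity: {alert['severity']}",
    ]
    return "\n".join(lines)

alert = {
    "host": "web-prod-03",
    "rule": "Unusual outbound data volume",
    "behavior": "4.2 GB transferred to an unfamiliar external IP overnight",
    "severity": "high",
}
print(build_triage_prompt(alert))
```

The value is less in the string formatting than in the habit it encodes: handing the model the full alert context up front, so its answer reflects the actual environment rather than a generic scenario.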

This capability becomes particularly powerful when security teams can look at their environment holistically and analyze all of the data that's available.

Seeing the Full Picture

Before, it was standard to observe a single system for normal behavior, or perhaps, more importantly, abnormal behavior. Now, it's possible to look across multiple systems and configurations — including how they're interacting together — to deliver a much more detailed picture of what's happening across the environment. As a result, professionals can have a much deeper, contextual understanding of the unfolding situation and make better, more informed decisions.

Additionally, generative AI doesn't just help security professionals make better decisions; it also helps them make faster decisions, with less manual effort.

Today, there's a lot of grunt work involved in gaining visibility across the technology stack and digital footprint within the organization, pulling data together, and trying to figure out what's happening. Given the scale and complexity of today's technology environments and the volumes of data involved, it's historically been impossible to provide a holistic security blanket or identify every single blind spot — and this is largely what bad actors are taking advantage of.

Generative AI not only helps aggregate all of this data, it also democratizes it. Security professionals can perform analysis across massive amounts of information in near real time and identify potential threats based on landscape changes they previously might have stumbled on only by accident. This alone can reduce the dwell time of any bad actors from days to just minutes — a significant advantage for the good guys.
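The kind of cross-environment signal described above can be sketched with a deliberately simple example: flag any system whose current event volume is a statistical outlier against its own historical baseline. This is a toy z-score check, not a production detector; the system names and thresholds are assumptions for illustration.

```python
# Illustrative sketch (not a production detector): flag systems whose
# event volume deviates sharply from their own recorded baseline.
from statistics import mean, stdev

def flag_anomalies(baseline: dict, current: dict, z_threshold: float = 3.0):
    """Return systems whose current event count is an outlier versus
    their historical counts (simple z-score test)."""
    flagged = []
    for system, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        observed = current.get(system, 0)
        if sigma > 0 and (observed - mu) / sigma > z_threshold:
            flagged.append(system)
    return flagged

baseline = {
    "auth-server": [120, 130, 125, 118, 122],
    "file-share": [40, 38, 45, 42, 41],
}
current = {"auth-server": 126, "file-share": 400}  # file-share spikes
print(flag_anomalies(baseline, current))  # prints ['file-share']
```

Real environments would feed far richer signals into far more capable models, but the principle is the same: a machine watching every system's baseline at once can surface in minutes a spike a human might only stumble on by accident.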

There's Cause for Optimism

As automobiles became more common in the early 1900s, it was customary for someone carrying a red flag to walk ahead of the car, warning other travelers that something new and unexpected was coming and urging them to be aware of their surroundings.

Obviously, society has long since acclimatized to having vehicles on the road. They've simply become part of the fabric of the world we live in, even as they've become increasingly sophisticated and powerful.

When it comes to AI, we're at a red-flag moment: We need to proceed mindfully and carefully. Whether it's cars or AI, there is always some risk involved. But just as we've added more enhanced security features to vehicles and increased regulations, we can do the same with AI.

Ultimately, there's cause for optimism here. The cat-and-mouse game between hackers and defenders will continue, as it always has. But in using AI, and generative AI in particular, as a way to strengthen their overall security posture and fortify their defenses, the good guys will be able to take their game up a notch and improve their ability to keep the bad guys where they belong: in check.

About the Author(s)

Mandy Andress

Chief Information Security Officer, Elastic

Mandy Andress is the chief information security officer at Elastic, a leader in search-powered solutions, and has more than 25 years of experience in information risk management and security. Mandy has a JD from Western New England University, a Master's in Management Information Systems from Texas A&M University, and a B.B.A. in Accounting from Texas A&M University. Mandy is a CISSP, CPA, and member of the Texas Bar.
