Why ChatGPT Isn't a Death Sentence for Cyber Defenders

Generative AI combined with user awareness training forms a security alliance that can keep organizations protected against ChatGPT-enabled attacks.

Jose Lopez, Principal Data Scientist, Mimecast

February 8, 2023


ChatGPT has taken the world by storm since its release in late November 2022, sparking legitimate concerns about its potential to amplify the severity and complexity of the cyber-threat landscape. The generative AI tool's meteoric rise marks the latest development in an ongoing cybersecurity arms race between good and evil, where attackers and defenders alike are constantly in search of the next breakthrough AI/ML technologies that can provide a competitive edge.

This time around, however, the stakes have been raised. With ChatGPT, social engineering has effectively been democratized: a dangerous tool is now widely available, enhancing a threat actor's ability to bypass stringent detection measures and cast wider nets across the hybrid attack surface.

Casting Wide Attack Nets

Here's why: Most social engineering campaigns are reliant upon generalized templates containing common keywords and text strings that security solutions are programmed to identify and then block. These campaigns, whether carried out via email or collaboration channels like Slack and Microsoft Teams, often take a spray-and-pray approach resulting in a low success rate.

But with generative AIs like ChatGPT, threat actors could theoretically leverage the system's large language model (LLM) to stray from universal formats, instead automating the creation of entirely unique phishing or spoofing emails with perfect grammar and natural speech patterns tailored to the individual target. This heightened sophistication makes the average email-borne attack appear far more credible, which in turn makes it far harder to detect and far more likely that recipients will click a hidden malware link.

However, let's be clear: ChatGPT isn't the death sentence for cyber defenders that some have made it out to be. Rather, it's the latest development in a continuous cycle of evolving threat actor tactics, techniques, and procedures (TTPs) that can be analyzed, addressed, and alleviated. After all, this isn't the first time we've seen generative AIs exploited for malicious intent; what separates ChatGPT from the technologies that came before it is its simplicity of use and free access. With OpenAI likely to move to subscription-based models requiring user authentication, coupled with enhanced protections, defending against ChatGPT attacks will ultimately come down to one key variable: fighting fire with fire.

Beating ChatGPT at Its Own Game

Security operations teams must leverage their own AI-powered large language models (LLMs) to combat ChatGPT-driven social engineering. Consider it the first and last line of defense, empowering human analysts to improve detection efficiency, streamline workflows, and automate response actions. For example, an LLM integrated within the right enterprise security solution can be trained to detect the highly sophisticated social engineering templates ChatGPT generates. Within seconds of the LLM identifying and categorizing a suspicious pattern, the solution flags it as an anomaly, notifies a human analyst with prescribed corrective actions, and then shares that threat intelligence in real time across the organization's security ecosystem.
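To make the workflow concrete, here is a minimal, purely illustrative sketch of that detect-flag-notify loop. Everything in it is an assumption for illustration: `llm_phishing_score` is a toy phrase-matching stand-in for a real LLM-based detector, and the threshold and "corrective action" string are invented placeholders, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-in for an LLM scorer; a real deployment would call a
# fine-tuned detection model here instead of matching phrases.
SUSPICIOUS_PHRASES = (
    "verify your account",
    "urgent wire transfer",
    "reset your password",
)

def llm_phishing_score(text: str) -> float:
    """Return the fraction of suspicious phrases found in the text (0.0-1.0)."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
    return hits / len(SUSPICIOUS_PHRASES)

@dataclass
class Alert:
    message_id: str
    score: float
    action: str  # prescribed corrective action for the human analyst

def triage(message_id: str, body: str, threshold: float = 0.3) -> Optional[Alert]:
    """Flag a message as an anomaly when its score crosses the threshold."""
    score = llm_phishing_score(body)
    if score >= threshold:
        # In a real pipeline this alert would also be shared across the
        # organization's security ecosystem as threat intelligence.
        return Alert(message_id, score, "quarantine and notify analyst")
    return None

flagged = triage("msg-001", "URGENT wire transfer needed; verify your account")
benign = triage("msg-002", "Lunch at noon on Thursday?")
```

In this sketch a flagged message produces an `Alert` carrying the score and a prescribed action, while benign mail passes through untouched; the real differentiator in production is the quality of the scoring model, not the plumbing around it.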

These benefits are why AI/ML adoption across cybersecurity has accelerated in recent years. In IBM's 2022 "Cost of a Data Breach" report, companies that leveraged an AI-driven security solution contained breaches 28 days faster, on average, and reduced financial damages by more than $3 million. Meanwhile, 92% of respondents to Mimecast's 2022 "State of Email Security" report indicated they were already leveraging AI within their security architectures or planned to do so in the near future. Building on that progress with a stronger commitment to AI-driven LLMs should be an immediate focus, as it's the only way to keep pace with the velocity of ChatGPT attacks.

Iron Sharpens Iron

The applied use of AI-driven LLMs like ChatGPT can also enhance the efficiency of black-box, gray-box, and white-box penetration testing, all of which require significant time and manpower that strained IT teams lack amid widespread labor shortages. With time of the essence, LLMs offer an effective way to streamline pen-testing processes, automating the identification of optimal attack vectors and network gaps without relying on previous exploit models that often become outdated as the threat landscape evolves.

For example, within a simulated environment, a "bad" LLM can generate tailored email text to test the organization's social engineering defenses. If that text bypasses detection and reaches its intended target, the data can be repurposed to train another "good" LLM on how to identify similar patterns in real-world environments. This helps to effectively inform both red and blue teams on the intricacies of combating ChatGPT with generative AI, while also providing an accurate assessment of the organization's security posture that allows analysts to bridge vulnerability gaps before adversaries capitalize on them.
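The red-team/blue-team loop described above can be sketched as follows. This is a deliberately simplified illustration under stated assumptions: `bad_llm_generate` returns canned lures instead of calling a generative model, and `GoodDetector` is a phrase-list stand-in for a trainable detection LLM; the names and the retraining mechanics are invented for this example.

```python
# Simulated environment: a "bad" generator probes the defenses, and lures
# that bypass detection are fed back to train the "good" detector.

def bad_llm_generate(n: int) -> list:
    """Stand-in for a 'bad' LLM: produce phishing-style lures with varied wording."""
    lures = [
        "Please confirm your payroll details before Friday.",
        "Your mailbox is full; click here to expand storage.",
        "Invoice attached; remit payment to the new account.",
    ]
    return [lures[i % len(lures)] for i in range(n)]

class GoodDetector:
    """Stand-in for a 'good' LLM: flags text containing any known-bad pattern."""

    def __init__(self):
        self.known_bad = {"wire transfer"}  # initial, incomplete knowledge

    def detect(self, text: str) -> bool:
        lowered = text.lower()
        return any(pattern in lowered for pattern in self.known_bad)

    def retrain(self, missed: list) -> None:
        # Repurpose lures that bypassed detection as new training signal.
        for text in missed:
            self.known_bad.add(text.lower())

detector = GoodDetector()
lures = bad_llm_generate(3)

# Red team: which lures reach their target undetected?
missed = [lure for lure in lures if not detector.detect(lure)]

# Blue team: close the gap using the successful lures as training data.
detector.retrain(missed)
```

After one iteration of the loop, every lure that initially slipped through is caught; in a real exercise, the interesting output is the `missed` list itself, since it measures the organization's current security posture against novel, generated text.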

The Human Error Effect

It's important to remember that investing in best-of-breed solutions alone isn't a magic bullet for safeguarding organizations from sophisticated social engineering attacks. Amid the societal adoption of cloud-based hybrid work structures, human risk has emerged as a critical vulnerability of the modern enterprise. More than 95% of security breaches today, a majority of which result from social engineering attacks, involve some degree of human error. And with ChatGPT expected to increase the volume and velocity of such attacks, ensuring hybrid employees follow safe practices regardless of where they work should be considered nonnegotiable.

That reality heightens the importance of implementing user awareness training modules as a core component of an organization's security framework: employees who receive consistent user awareness training are five times more likely to identify and avoid malicious links. However, according to Forrester's 2022 report "Security Awareness and Training Solutions," many security leaders lack a deep understanding of how to build a culture of security awareness and fall back on static, one-size-fits-all employee training to measure engagement and influence behavior. This approach is largely ineffective. For training modules to resonate, they must be scalable and personalized, with engaging content and quizzes that align with employees' interests and learning styles.

Combining generative AI with well-executed user awareness training creates a robust security alliance that can keep organizations protected from ChatGPT-enabled attacks. Don't worry, cyber defenders: the sky isn't falling. Hope remains on the horizon.

About the Author(s)

Jose Lopez

Principal Data Scientist, Mimecast

Jose Lopez is the Principal Data Scientist at Mimecast. With 20 years of experience in the field, Jose is an expert in generative AI applied to cybersecurity, specializing in natural language processing and computer vision. He has designed and deployed language models at scale to detect attacks, and works with various teams within Mimecast to identify and solve problems where AI can be applied.

