From updating employee education and implementing stronger authentication protocols to monitoring corporate accounts and adopting a zero-trust model, companies can better prepare defenses against chatbot-augmented attacks.

Paul Trulove, CEO, SecureAuth

January 18, 2023

4 Min Read

On LinkedIn, Slack, Twitter, email, and text message, people are sharing examples created with ChatGPT, the new artificial intelligence (AI) chatbot from OpenAI.

Using ChatGPT, you can produce authentic-sounding dialogue for almost anything, from answering follow-up questions in an online chat to writing poetry. There are plenty of opportunities for enterprises to take advantage of the new chatbot technology, including help with enterprise support and customer interactions. Its ability to quickly generate content and, according to its developers, "answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests" opens many opportunities, along with some new challenges.

AI and machine learning (ML) are shifting quickly from niche, high-cost/high-reward corporate solutions toward more expansive implementations that can solve a range of enterprise challenges, particularly those focused on end users. Cybersecurity, for example, is now embracing AI-based approaches to provide greater protection with smaller security teams. One example is using AI/ML to perform dynamic, risk-based checks when someone tries to access sensitive applications or data. This replaces older, policy-based approaches, which can take significant time and human effort to develop and maintain, since in a larger company the individual access policies could span hundreds or even thousands of applications and data repositories.
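
As a rough illustration of the idea, here is a minimal sketch of a dynamic, risk-based access check. The signals, weights, and thresholds are illustrative assumptions, not any vendor's actual model; a production system would learn them from data rather than hard-code them.

```python
# Minimal sketch of a dynamic, risk-based access check.
# All signals, weights, and thresholds are hypothetical illustrations;
# a real system would derive them from a trained ML model.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    new_device: bool        # device not previously seen for this user
    unusual_location: bool  # geolocation outside the user's normal pattern
    off_hours: bool         # request outside typical working hours
    sensitive_resource: bool

def risk_score(req: AccessRequest) -> float:
    """Combine contextual signals into a single risk score in [0, 1]."""
    score = 0.0
    if req.new_device:
        score += 0.4
    if req.unusual_location:
        score += 0.3
    if req.off_hours:
        score += 0.1
    if req.sensitive_resource:
        score += 0.2
    return min(score, 1.0)

def decide(req: AccessRequest) -> str:
    """Map the score to an action instead of a static allow/deny policy."""
    score = risk_score(req)
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step-up"  # require additional authentication
    return "deny"

print(decide(AccessRequest("alice", new_device=True,
                           unusual_location=False, off_hours=False,
                           sensitive_resource=True)))  # -> "step-up"
```

The point of the sketch is the shape of the decision: rather than maintaining a hand-written policy per application, one scoring function evaluates context at request time and escalates to step-up authentication only when the risk warrants it.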

Using AI for Greater Efficiency

Interest in AI continues to explode as modern attacks require rapid detection of and response to anomalous user behavior, something an AI model can be trained to identify quickly but which takes significant human effort to replicate manually. The goal of AI is to increase efficiency and trust while reducing friction and improving the experience for typical users. Self-driving cars, for example, are designed to increase the safety and security of end users. In cybersecurity, specifically in the authentication of users (human and nonhuman), AI-based approaches are advancing rapidly, working alongside rule-based systems to improve the experience for end users.
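
The article doesn't name a specific algorithm for spotting anomalous behavior; as one stand-in illustration, an unsupervised model such as scikit-learn's IsolationForest can flag events that deviate from a learned baseline. The features and data below are fabricated for the example.

```python
# Minimal sketch: flagging anomalous user behavior with an
# unsupervised model. Features and data are made up for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

# Historical behavior per login event:
# [hour_of_day, failed_attempts, MB_downloaded]
history = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15],
    [9, 0, 10], [16, 1, 18], [10, 0, 9], [13, 0, 14],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

# A 3 a.m. login with repeated failures and a large download should
# stand out from the learned pattern; a routine event should not.
new_events = np.array([[10, 0, 11], [3, 6, 900]])
print(model.predict(new_events))  # 1 = normal, -1 = anomalous
```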

In an increasingly digital world, attackers can already sidestep binary identity-authentication checks such as location, device, and network verification. Inevitably, cybercriminals will also look at the new opportunities ChatGPT opens up. AI tools that interact in a conversational way can be leveraged by cybercriminals to launch convincing phishing campaigns or account takeovers. In the past, it may have been easier to spot a phishing attack thanks to the poor grammar, unusual phrasing, and frequent spelling errors so common in phishing emails. That is quickly changing with tools like ChatGPT, which use natural language processing to create the initial message and then generate realistic responses to a target user's questions without any of those telltale markers.

Rethink Security Approaches

To prepare for chatbot-augmented attacks, organizations need to rethink security approaches to mitigate potential threats. Five measures they can take include:

  1. Updating employee education regarding the risks of phishing. Employees who understand how ChatGPT can be leveraged are likely to be more cautious about interacting with chatbots and other AI-supported solutions, and thereby avoid falling victim to these types of attacks.

  2. Implementing strong authentication protocols to make it more difficult for attackers to gain access to accounts. Attackers already know how to take advantage of users tired of reauthenticating through multifactor authentication (MFA) tools, so leveraging AI/ML to authenticate users through digital fingerprint matching and avoiding passwords altogether can help increase security while reducing MFA fatigue.

  3. Adopting a zero-trust model. By only granting access after verification and ensuring least-privileged access, security leaders can create an environment where even if an attacker leverages ChatGPT to get around unsuspecting users, the cybercriminals will still have to verify their identity and, even then, will only have access to limited resources. Limiting everyone, not only developers but also leadership (including technical leadership), to the least access needed to perform their jobs effectively may initially meet with resistance, but it will make it harder for an attacker to profit from any unauthorized access.

  4. Monitoring activity on corporate accounts and using tools, including spam filters, behavioral analysis, and keyword filtering, to identify and block malicious messages. Your employees cannot fall victim to a phishing attack, no matter how sophisticated its language, if they never see the message. Quarantining messages based on the behavior and relationship of the correspondents, rather than on specific keywords, is more likely to catch malicious actors (see the sketch after this list).

  5. Leveraging AI. Attackers are already leveraging AI, and ChatGPT is just one more way they will use it to gain access to your organization's environments. Organizations must take advantage of the capabilities offered by AI/ML to improve cybersecurity and respond faster to threats and potential breaches.
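
As a simple illustration of measure No. 4, here is a sketch that quarantines a message based on the correspondents' relationship rather than on keywords. The signals, weights, and contact profile are hypothetical; real mail-security tools build these profiles from observed communication graphs.

```python
# Minimal sketch of relationship-based message screening.
# The signals and the contact profile are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    display_name: str
    in_reply_to_known_thread: bool

# A toy "relationship" profile: addresses this mailbox has corresponded
# with before, and the display names they normally use.
known_contacts = {"pat@partner.example": "Pat Jones"}

def should_quarantine(msg: Message) -> bool:
    first_time_sender = msg.sender not in known_contacts
    # A known contact suddenly using a different display name is a
    # common impersonation or account-takeover signal.
    name_mismatch = (not first_time_sender
                     and known_contacts[msg.sender] != msg.display_name)
    # Unsolicited mail from an unknown correspondent is held for review,
    # no matter how fluent its language is.
    return name_mismatch or (first_time_sender
                             and not msg.in_reply_to_known_thread)

print(should_quarantine(Message("ceo@look-alike.example",
                                "The CEO", False)))  # True -> quarantine
```

Because the check looks at who is writing and in what context, it holds up even when the message body is grammatically flawless, which is exactly the gap that chatbot-written phishing exploits.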

Account takeovers via social engineering, attacks leveraging passwords from data breaches, and phishing will become more frequent and more successful, despite our best defenses, given that ChatGPT can provide authentic-sounding responses to a target's inquiries. By adopting these five approaches, organizations can help reduce the risks to themselves and their employees when chatbots and other AI tools are used for malicious purposes.

About the Author(s)

Paul Trulove

CEO, SecureAuth

Paul Trulove has 15+ years of IAM experience in senior leadership roles, and as CEO of SecureAuth, he sets the company's vision and strategy. Previously, Paul was CPO at SailPoint Technologies, which he joined in 2007 as head of product, driving the product strategy, road map, and messaging for its market-leading identity management portfolio. He played a key role in taking SailPoint from identity pioneer to successful IPO.

Prior to SailPoint, Paul gained extensive experience in formulating innovative product strategies, launching products in early-stage ventures, and growing products into category leaders at tech companies including Newgistics, Sabre, and Pervasive Software.
