The innovation that ChatGPT and other LLMs demonstrate is a good thing, but safeguards and other security frameworks must keep pace.


As innovation in artificial intelligence (AI) continues apace, 2024 will be a crucial time for organizations and governing bodies to establish security standards, protocols, and other guardrails to prevent AI from getting ahead of them, security experts warn.

Large language models (LLMs), powered by sophisticated algorithms and massive data sets, demonstrate remarkable language understanding and humanlike conversational capabilities. One of the most sophisticated of these platforms to date is OpenAI's GPT-4, which boasts advanced reasoning and problem-solving capabilities and powers the company's ChatGPT bot. And the company, in partnership with Microsoft, has started work on GPT-5, which CEO Sam Altman said will go much further — to the point of possessing "superintelligence."

These models represent enormous potential for significant productivity and efficiency gains for organizations, but experts agree that the time has come for the industry as a whole to address the inherent security risks posed by their development and deployment. Indeed, recent research by Writerbuddy AI, which offers an AI-based content-writing tool, found that ChatGPT already has logged 14 billion visits and counting.

As organizations march toward progress in AI, it "should be coupled with rigorous ethical considerations and risk assessments," says Gal Ringel, CEO of AI-based privacy and security firm MineOS.

Is AI an Existential Threat?

Concerns around security for the next generation of AI started percolating in March, with an open letter signed by nearly 34,000 top technologists that called for a halt to the development of generative AI systems more powerful than OpenAI's GPT-4. The letter cited the "profound risks" to society that the technology represents and the "out-of-control race by AI labs to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control."

Despite those dystopian fears, most security experts aren't that concerned about a doomsday scenario in which machines become smarter than humans and take over the world.

"The open letter noted valid concerns about the rapid advancement and potential applications of AI in a broad, 'is this good for humanity' sense," says Matt Wilson, director of sales engineering at cybersecurity firm Netrix. "While impressive in certain scenarios, the public versions of AI tools don't appear all that threatening."

What is concerning is that AI advancements and adoption are moving too quickly for the risks to be managed properly, researchers note. "We cannot put the lid back on Pandora's box," observes Patrick Harr, CEO of AI security provider SlashNext.

Moreover, merely "attempting to stop the rate of innovation in the space will not help to mitigate" the risks it presents, which must be addressed separately, observes Marcus Fowler, CEO of AI security firm DarkTrace Federal. That doesn't mean AI development should continue unchecked, he says. On the contrary, the rate of risk assessment and implementing appropriate safeguards should match the rate at which LLMs are being trained and developed.

"AI technology is evolving quickly, so governments and the organizations using AI must also accelerate discussions around AI safety," Fowler explains.

Generative AI Risks

There are several widely recognized risks of generative AI that demand consideration and that will only worsen as future generations of the technology grow smarter. Fortunately for humans, none of them so far poses a science-fiction doomsday scenario in which AI conspires to destroy its creators.

Instead, they include far more familiar threats, such as data leaks, potentially of business-sensitive info; misuse for malicious activity; and inaccurate outputs that can mislead or confuse users, ultimately resulting in negative business consequences.

Because LLMs require access to vast amounts of data to provide accurate and contextually relevant outputs, sensitive information can be inadvertently revealed or misused.

"The main risk is employees feeding it with business-sensitive information when asking it to write a plan or rephrase emails or business decks containing the company's proprietary information," Ringel notes.

From a cyberattack perspective, threat actors already have found myriad ways to weaponize ChatGPT and other AI systems. One way has been to use the models to craft sophisticated business email compromise (BEC) and other phishing attacks, which depend on convincingly personalized, socially engineered messages to succeed.

"With malware, ChatGPT enables cybercriminals to make infinite code variations to stay one step ahead of the malware detection engines," Harr says.

AI hallucinations also pose a significant security threat and allow malicious actors to arm LLM-based technology like ChatGPT in a unique way. An AI hallucination is a plausible-sounding response from the AI that is inadequate, biased, or flat-out untrue. "Fictional or other unwanted responses can steer organizations into faulty decision-making, processes, and misleading communications," warns Avivah Litan, a Gartner vice president.

Threat actors also can use these hallucinations to poison LLMs and "generate specific misinformation in response to a question," observes Michael Rinehart, vice president of AI at data security provider Securiti. "This is extensible to vulnerable source-code generation and, possibly, to chat models capable of directing users of a site to unsafe actions."

Attackers can even go so far as to publish malicious versions of software packages that an LLM might recommend to a software developer who believes they're a legitimate fix to a problem. In this way, attackers can further weaponize AI to mount supply chain attacks.
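One minimal defense against this kind of package hallucination is to vet any dependency an LLM suggests against an internal allowlist before it is ever installed. The sketch below is a hypothetical illustration, not a product recommendation; the allowlist contents and function names are assumptions standing in for an organization's real approved-package registry.

```python
# Hypothetical sketch: vet package names an LLM suggests before installing them.
# APPROVED_PACKAGES is an illustrative stand-in for an internal registry of
# dependencies that have already been reviewed for provenance and maintenance.
APPROVED_PACKAGES = {"requests", "numpy", "pandas"}

def vet_suggested_packages(suggested, approved=APPROVED_PACKAGES):
    """Split LLM-suggested package names into approved and unvetted lists.

    Anything not on the allowlist should be checked manually (registry age,
    download counts, maintainer reputation) before installation, since
    attackers register lookalike names that models sometimes hallucinate.
    """
    names = {name.strip().lower() for name in suggested}
    return sorted(names & approved), sorted(names - approved)

ok, needs_review = vet_suggested_packages(["requests", "reqeusts-toolbelt2"])
```

Here the typo-squatted `reqeusts-toolbelt2` lands in `needs_review` rather than going straight to `pip install`, which is the whole point of the gate.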

The Way Forward

Managing these risks will require measured and collective action before AI innovation outruns the industry's ability to control it, experts note. But they also have ideas about how to address AI's problems.

Harr believes in a "fight AI with AI" strategy, in which "advancements in security solutions and strategies to thwart risks fueled by AI must develop at an equal or greater pace."

"Cybersecurity protection needs to leverage AI to successfully battle cyber threats using AI technology," he adds. "In comparison, legacy security technology doesn't stand a chance against these attacks."

However, organizations also should take a measured approach to adopting AI — including AI-based security solutions — lest they introduce more risks into their environment, Netrix's Wilson cautions.

"Understand what AI is, and isn't," he advises. "Challenge vendors that claim to employ AI to describe what it does, how it enhances their solution, and why that matters for your organization."

Securiti's Rinehart offers a two-tiered approach to phasing AI into an environment: deploy focused solutions first, then put guardrails in place immediately, before exposing the organization to unnecessary risk.

"First adopt application-specific models, potentially augmented by knowledge bases, which are tailored to provide value in specific use cases," he says. "Then … implement a monitoring system to safeguard these models by scrutinizing messages to and from them for privacy and security issues."
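The kind of monitoring layer Rinehart describes can be as simple as screening every message bound for a model for patterns that look like sensitive data. The following is a minimal sketch under assumed requirements; the pattern names, regexes, and `guarded_prompt` function are illustrative inventions, and a production guardrail would use far more robust detection.

```python
import re

# Hypothetical sketch of a guardrail that scrutinizes messages sent to an
# LLM, blocking prompts that appear to contain sensitive data before they
# leave the organization's boundary.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_message(text):
    """Return the names of sensitive-data patterns found in a message."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def guarded_prompt(text):
    """Raise instead of forwarding a prompt that appears to leak data."""
    findings = screen_message(text)
    if findings:
        raise ValueError(f"Prompt blocked; possible sensitive data: {findings}")
    return text
```

The same `screen_message` check can be run on model responses as well, covering both directions of the traffic Rinehart suggests monitoring.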

Experts also recommend setting up security policies and procedures around AI before it's deployed, rather than as an afterthought, to mitigate risk. Organizations can even appoint a dedicated AI risk officer or task force to oversee compliance.

Outside of the enterprise, the industry as a whole also must take steps to set up security standards and practices around AI that everyone developing and using the technology can adopt — something that will require collective action by both the public and private sector on a global scale, DarkTrace Federal's Fowler says.

He cites guidelines for building secure AI systems published collaboratively by the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) as an example of the type of efforts that should accompany the continued evolution of AI.

"In essence," Securiti's Rinehart says, "the year 2024 will witness a rapid adaptation of both traditional security and cutting-edge AI techniques toward safeguarding users and data in this emerging generative AI era."

About the Author(s)

Elizabeth Montalbano, Contributing Writer

Elizabeth Montalbano is a freelance writer, journalist, and therapeutic writing mentor with more than 25 years of professional experience. Her areas of expertise include technology, business, and culture. Elizabeth previously lived and worked as a full-time journalist in Phoenix, San Francisco, and New York City; she currently resides in a village on the southwest coast of Portugal. In her free time, she enjoys surfing, hiking with her dogs, traveling, playing music, yoga, and cooking.
