Researchers explore a love-hate relationship with AI tools like ChatGPT, which can be used to both attack and defend more efficiently.

Generative artificial intelligence (AI) tools are already being used to penetrate systems, and the damage will get worse, panelists said last week at Nvidia's GPU Technology Conference. But those same tools, enhanced with standard zero-trust practices, can counteract such attacks.

"We're definitely seeing generative AI being used to produce content that would make it more likely for an unwitting partner to click on a link ... [and] do something that they shouldn't have to give access to the system," said Kathleen Fisher, director of DARPA's Information Innovation Office, during a breakout session at the conference.

A good chunk of AI infrastructure has been built on Nvidia's GPUs, which is why the company's virtual developer conference has become a watering hole for discussing AI development and techniques. Much of this year's show focused on how to deliver fast and ethical responses to AI queries.

Nvidia CEO Jensen Huang declared ChatGPT's introduction in November 2022 to be the "iPhone moment" for AI. However, there are also concerns about generative AI being used to write malware, craft convincing business email compromise (BEC) messages, and create deepfake video and audio.

"I think we are just seeing the tip of the iceberg of these kinds of attacks," Fisher said. "But given how easy that capability is to use and how it's only available as a service these days, we will see a lot more of that in the future."

AI Deserves Zero Trust

Perhaps the best-known generative AI system is OpenAI's ChatGPT, which uses a large language model to produce coherent answers to user queries. It can write reports and generate code. Other prominent efforts include Microsoft's Bing Chat (which is built on OpenAI's technology) and Google's Bard.

Organizations are eager to try ChatGPT and its competitors, but cybersecurity teams shouldn't be complacent, because AI widens the overall attack surface and gives hackers more opportunities to break in, said Kim Crider, managing director for AI Innovation for National Security and Defense at Deloitte, during the panel.

"We should approach our AI from the perspective of 'it is vulnerable, it has already some potential to be exploited against us,'" she said.

Companies need to take a zero-trust approach to AI models and to the data used to train them. Crider pointed to the concept of "model drift," in which bad data fed into an AI model can cause the system to behave unreliably.

A zero-trust approach helps put in place the investments, systems, and personnel needed for continuous verification and validation of AI models, both before and after they are put into use, Crider said.
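
Continuous validation can start with something as simple as periodically re-scoring a deployed model against a trusted, held-out dataset and raising an alert when performance drifts. The sketch below is a minimal illustration of that idea, assuming a scikit-learn-style model; the interface, data, and threshold are assumptions, not anything the panel described.

```python
# Minimal sketch of continuous model validation (illustrative assumptions only):
# periodically re-score a deployed model on trusted, held-out data and flag it
# when accuracy drops past a tolerance, which can indicate drift or poisoning.
from sklearn.metrics import accuracy_score

def validate_model(model, X_trusted, y_trusted, baseline_accuracy, drift_tolerance=0.05):
    """Return False (and alert) if accuracy on trusted data falls past the tolerance."""
    current = accuracy_score(y_trusted, model.predict(X_trusted))
    if baseline_accuracy - current > drift_tolerance:
        print(f"ALERT: accuracy fell from {baseline_accuracy:.3f} to {current:.3f}; review the model")
        return False
    return True
```

In practice, a check like this would run on a schedule, and a failing model would be quarantined rather than trusted by default.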

Department of Defense's AI Defenses

DARPA is testing AI cyber agents to enhance its digital defenses. The Cyber Agents for Security Testing and Learning Environments (CASTLE) program pits a defensive "blue" agent against a malicious "red" agent in a simulated environment. The blue agent takes defensive actions such as cutting off access from a suspicious source, for example an Internet address associated with domain fronting that is being used to stage an attack.

"The technology is using reinforcement learning, so you have a blue agent that is learning how to protect the system, and its reward is keeping up the mission readiness of the system," Fisher said.

Another defense program, Cyber-Hunting at Scale (CHASE), uses machine learning to analyze information from telemetry, data logs, and other sources to help track down security vulnerabilities in the Defense Department's IT infrastructure. In a retrospective experiment, CHASE found 13 security incidents about 21 days earlier than the standard technique.

"It was very smart about the data management that it then fed into other machine learning algorithms that were, in fact, much more able to identify the threats really quickly," Fisher said.

Finally, Fisher talked about Guaranteeing AI Robustness Against Deception (GARD), which uses machine learning to prevent data poisoning that could degrade the effectiveness of AI systems.
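
One common, generic defense against training-data poisoning is to screen samples before they ever reach the model, for example by dropping points that sit far from their own class centroid. The sketch below illustrates that general idea with assumed data and thresholds; it is not GARD itself.

```python
# Generic data-poisoning screen (not DARPA's GARD): drop training samples that
# are extreme outliers within their own class, since many poisoned points are
# mislabeled or out-of-distribution. The z-score cutoff is an assumption.
import numpy as np

def filter_suspected_poison(X, y, z_cutoff=3.0):
    keep = np.ones(len(X), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        z_scores = (dists - dists.mean()) / (dists.std() + 1e-9)
        keep[idx[z_scores > z_cutoff]] = False   # drop extreme per-class outliers
    return X[keep], y[keep]
```

More sophisticated defenses work on the trained model itself, but pre-training data hygiene like this is usually the first layer.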

Civilian Use of AI-Based Cybersecurity

The panelists also suggested supplementing standard security approaches, like running security drills to minimize phishing, with multiple security layers to make sure systems are safe.

Some AI-based defense tools are entering the commercial market. Microsoft introduced Security Copilot, an AI assistant that can help security teams triage threats by actively monitoring internal networks and comparing what it sees against Microsoft's existing security knowledge base.

Microsoft runs all of its AI operations on GPUs from Nvidia. That's partly to take advantage of Morpheus, Nvidia's AI-based cybersecurity framework, which can flag threats and abnormal activity by analyzing system logs and user log-in patterns.

AI provides many benefits, but a human needs to be at the wheel for security, Fisher said. She gave the example of an autonomous car, in which a human is ready to take over if something goes wrong with the car's AI system.

"In a way, that takes the psychology of people seriously in terms of what we can do right now to improve the cyber system while we're waiting for these more enhanced agents" that can defend against cyberweapons more quickly than people typing on keyboards can, Fisher said.

"I think 2023 is going to be a major turning point in the generative AI space," Deloitte's Crider said. "It terrifies the heck out of me."

About the Author

Agam Shah, Contributing Writer

Agam Shah has covered enterprise IT for more than a decade. Outside of machine learning, hardware, and chips, he's also interested in martial arts and Russia.
