How AI-Augmented Threat Intelligence Solves Security Shortfalls

Researchers explore how overburdened cyber analysts can improve their threat intelligence work by using ChatGPT-like large language models (LLMs).

Security operations and threat intelligence teams are chronically short-staffed, overwhelmed with data, and dealing with competing demands — all issues that large language model (LLM) systems can help remedy. But a lack of experience with the systems is holding many companies back from adopting the technology.

Organizations that implement LLMs will be able to better synthesize intelligence from raw data and deepen their threat-intelligence capabilities, but such programs need support from security leadership to be focused correctly. Teams should implement LLMs for solvable problems, and before they can do that, they need to evaluate the utility of LLMs in an organization's environment, says John Miller, head of Mandiant's intelligence analysis group.

"What we're aiming for is helping organizations navigate the uncertainty, because there aren't a lot of either success stories or failure stories yet," Miller says. "There aren't really answers yet that are based on routinely available experience, and we want to provide a framework for thinking about how to best look forward to those types of questions about the impact."

In a presentation at Black Hat USA in early August, titled "What Does an LLM-Powered Threat Intelligence Program Look Like?," Miller and Ron Graf, a data scientist on the intelligence-analytics team at Google Cloud's Mandiant, will demonstrate the areas where LLMs can augment security workers to speed up and deepen cybersecurity analysis.

Three Ingredients of Threat Intelligence

Security professionals who want to build a strong internal threat intelligence function for their organization need three components to do so successfully, Miller tells Dark Reading: data about the threats that are relevant; the capability to process and standardize that data so that it's useful; and the ability to interpret how that data relates to security concerns.

That's easier said than done, because threat intelligence teams — or individuals in charge of threat intelligence — are often overwhelmed with data or with requests from stakeholders. However, LLMs can help bridge the gap, allowing other groups in the organization to request data with natural-language queries and get the information back in non-technical language, he says. Common requests include trends in specific threat areas, such as ransomware, and questions about threats to specific markets. (A rough sketch of what such a natural-language interface could look like follows below.)
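
To make that concrete, here is a minimal sketch of a stakeholder's plain-English question being answered by a chat model grounded in the organization's own intelligence records. It assumes an OpenAI-compatible chat API; the fetch_threat_records() helper and the model name are hypothetical placeholders for whatever threat intelligence platform and model a team already uses.

```python
# Minimal sketch: natural-language Q&A over an organization's own intel records.
# Assumption: an OpenAI-compatible chat API with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()


def fetch_threat_records(topic: str, limit: int = 20) -> list[str]:
    """Hypothetical helper: replace with a real query against your TIP or data lake."""
    return [f"<intel record {i} about {topic}>" for i in range(1, 4)]


def answer_stakeholder_question(question: str, topic: str) -> str:
    records = "\n".join(fetch_threat_records(topic))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model could be used
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a threat intelligence assistant. Answer only from the "
                    "records provided, in plain non-technical language, and say so "
                    "if the records are insufficient."
                ),
            },
            {"role": "user", "content": f"Records:\n{records}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Example: a business unit asks about ransomware activity in its market.
    print(answer_stakeholder_question(
        "What ransomware trends should our retail division worry about this quarter?",
        topic="ransomware targeting retail",
    ))
```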

"Leaders who succeed in augmenting their threat intelligence with LLM-driven capabilities can basically plan for a higher return on investment from their threat intelligence function," Miller says. "What a leader can expect as they're thinking forward, and what their current intelligence function can do, is create higher capability with the same resourcing to be able to answer those questions."

AI Cannot Replace Human Analysts

Organizations that embrace LLMs and AI-augmented threat intelligence will be better able to transform and make use of enterprise security datasets that would otherwise go untapped. Yet there are pitfalls. Relying on LLMs to produce coherent threat analysis can save time, but it can also lead to "hallucinations" — a shortcoming of LLMs in which the system creates connections where there are none, or fabricates answers entirely, because it was trained on incorrect or missing data.

"If you're relying on the output of a model to make a decision about the security of your business, then you want to be able to confirm that someone has looked at it, with the ability to recognize if there are any fundamental errors," Google Cloud's Miller says. "You need to be able to make sure that you've got experts who are qualified, who can speak for the utility of the insight in answering those questions or making those decisions."

Such issues are not insurmountable, says Google Cloud's Graf. Organizations could chain competing models together to essentially do integrity checks and reduce the rate of hallucinations. In addition, asking questions in an optimized way — so-called "prompt engineering" — can lead to better answers, or at least ones that are most in tune with reality. A sketch of that chaining idea appears below.
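
As a rough illustration of the chaining Graf describes, the sketch below has one model draft an answer strictly from the source records, then a second "checker" pass list any claims the records do not support so an analyst knows where to dig. The prompts and model name are illustrative assumptions, not a configuration either researcher prescribes.

```python
# Minimal sketch of a two-pass "draft then verify" chain for reducing hallucinations.
# Assumption: an OpenAI-compatible chat API; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()


def _chat(system_prompt: str, user_prompt: str) -> str:
    """One round trip to an OpenAI-compatible chat endpoint."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content


def draft_analysis(records: str, question: str) -> str:
    # First pass: answer strictly from the supplied records.
    return _chat(
        "Answer strictly from the records provided and cite the record behind each statement.",
        f"Records:\n{records}\n\nQuestion: {question}",
    )


def verify_analysis(records: str, draft: str) -> str:
    # Second pass: act as a checker and flag claims the records do not support.
    return _chat(
        "You are a fact checker. List every claim in the draft that is not supported "
        "by the records, or reply 'NO UNSUPPORTED CLAIMS'.",
        f"Records:\n{records}\n\nDraft:\n{draft}",
    )


if __name__ == "__main__":
    records = "<record 1>\n<record 2>"
    draft = draft_analysis(records, "Which groups have targeted our sector recently?")
    print(verify_analysis(records, draft))
```

The second call here is simply a second opinion from the same model family; in practice, a different model or vendor could be swapped in to make the check more independent, and the checker's output is a pointer for a reviewer rather than a verdict.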

Keeping an AI paired with a human, however, is the best way, Graf says.

"It's our opinion that the best approach is just to include humans in the loop," he says. "And that's going to yield downstream performance improvements anyways, so the organizations is still reaping the benefits."

This augmentation approach has been gaining traction, as cybersecurity firms have joined other companies in exploring ways to transform their core capabilities with LLMs. In March, for example, Microsoft launched Security Copilot to help cybersecurity teams investigate breaches and hunt for threats. And in April, threat intelligence firm Recorded Future debuted an LLM-enhanced capability, finding that the system's ability to turn vast amounts of data or deep searches into a simple two- or three-sentence summary report for the analyst has saved its security professionals a significant amount of time.

"Fundamentally, threat intelligence, I think, is a 'Big Data' problem, and you need to have extensive visibility into all levels of the attack into the attacker, into the infrastructure, and into the people they target," says Jamie Zajac, vice president of product at Recorded Future, who says that AI allows humans to simply be more effective in that environment. "Once you have all this data, you have the problem of 'how do you actually synthesize this into something useful?', and we found that using our intelligence and using large language models ... started to save [our analysts] hours and hours of time."


About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
