
Microsoft Security Copilot Uses GPT-4 to Beef Up Security Incident Response

Microsoft's new AI assistant tool helps cybersecurity teams investigate security incidents and hunt for threats.

Fahmida Y. Rashid, Managing Editor, Features

March 29, 2023

4 Min Read
[Image: 3D rendering of a robot with binoculars on a blue background. Source: Kittipong Jirasukhanont via Alamy Stock Photo]

Microsoft has been leaning into its $10 billion investment in OpenAI by introducing AI assistants – all called Copilot – across its product portfolio. The latest is Microsoft Security Copilot, designed to help security teams investigate and respond to security incidents.

Defenders are struggling to keep up with a highly dynamic threat environment, and Microsoft Security Copilot aims to help them catch incidents they might otherwise miss, improve the quality of threat detection, and speed up response, says Chang Kawaguchi, vice president and AI security architect at Microsoft. Security Copilot uses both OpenAI's GPT-4 generative AI model and Microsoft's proprietary security-focused model to identify breaches, connect threat signals, and analyze data.

Security Copilot is intended to make “defenders’ lives better, make them more efficient, and make them more effective by bringing AI to this problem,” Kawaguchi says.

Security Copilot will ingest and make sense of huge amounts of security data – including the 65 trillion security signals Microsoft pulls in every day and all the data collected by the Microsoft products the organization is using, such as Microsoft Sentinel, Defender, Entra, Priva, Purview, and Intune. Analysts can use the tool to summarize incidents, analyze vulnerabilities, and look up information on common vulnerabilities and exposures (CVEs).

Analysts and incident response teams will be able to type what they are trying to understand into a text prompt – “/ask about” – and Security Copilot will return responses based on what it knows of the organization’s data. This way, security teams will be able to see connections between various parts of a security incident, such as a suspicious email, a malicious software file, or the different parts of the system that had been compromised, says Kawaguchi. The queries could be general, such as an explanation of a vulnerability, or specific to the organization’s environment, such as looking in the logs for signs that a particular Exchange flaw had been exploited. And because Security Copilot uses GPT-4, it can respond to natural language questions.
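Security Copilot is in private preview with no public API, so as a rough illustration only, here is a minimal Python sketch of what a prompt-driven investigation flow like the one Kawaguchi describes could look like. The MockSecurityCopilot client, its ask() method, and the returned evidence are hypothetical stand-ins, not Microsoft's interface.

```python
# Hypothetical sketch only: Security Copilot's private preview has no public
# API, so this mocks what a prompt-driven investigation loop could look like.
from dataclasses import dataclass, field


@dataclass
class CopilotResponse:
    summary: str
    evidence: list[str] = field(default_factory=list)


class MockSecurityCopilot:
    """Stand-in for the real assistant; returns canned analysis."""

    def ask(self, prompt: str) -> CopilotResponse:
        # A real backend would run the prompt through GPT-4 plus Microsoft's
        # security model against the organization's own signal data.
        return CopilotResponse(
            summary=f"Analysis for: {prompt!r}",
            evidence=["sign-in anomaly on host WS-042", "suspicious inbox rule"],
        )


copilot = MockSecurityCopilot()
result = copilot.ask("Which hosts show signs of the recent Exchange exploit?")
print(result.summary)
for item in result.evidence:
    print(" -", item)
```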

The analyst can see summaries of what happened and then follow prompts from Security Copilot to drill deeper into the investigation. All these steps can be saved in a "pinboard," which can be shared with other members of the security team as well as stakeholders and senior executives. All completed tasks are saved and readily accessible, and an automatically generated summary updates as new tasks are completed.
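As a loose illustration of the pinboard idea – pinned investigation steps plus a running, auto-updating summary – here is a minimal Python sketch. The Pinboard class and its methods are invented for this example and are not Microsoft's implementation.

```python
# Hypothetical sketch of the "pinboard" concept: saved investigation steps
# plus a summary that regenerates as tasks complete. Names are illustrative.
from dataclasses import dataclass, field


@dataclass
class Pinboard:
    pins: list[str] = field(default_factory=list)

    def pin(self, step: str) -> None:
        """Save a completed investigation step so teammates can review it."""
        self.pins.append(step)

    def summary(self) -> str:
        """Auto-generated overview of everything pinned so far."""
        return f"{len(self.pins)} steps recorded: " + "; ".join(self.pins)


board = Pinboard()
board.pin("Summarized initial phishing alert")
board.pin("Traced malicious attachment to host WS-042")
print(board.summary())  # shareable with stakeholders and executives
```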

“This is what makes this experience more of a notebook than a chatbot experience,” says Kawaguchi, noting that the tool can also create PowerPoint presentations based on the investigation that the security team can use to share the details of the incident afterward.

Security Copilot is not designed to replace human analysts, but to provide them with the information they need to move quickly and at scale during the course of an investigation, Kawaguchi says. Threat hunters can also use the tool to determine whether an organization is susceptible to known vulnerabilities and exploits by examining each asset in the environment, according to the company.
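The per-asset susceptibility sweep described above boils down to matching an asset inventory against known-vulnerable software versions. A minimal Python sketch of that matching logic follows; the inventory records and vulnerable-version data are fabricated for illustration, not drawn from Security Copilot.

```python
# Illustrative only: checking each asset's installed software against known
# vulnerable versions, the kind of per-asset sweep the article describes.
# The inventory and vulnerable-version data here are made up.

vulnerable_versions = {"ExampleServer": {"1.2.0", "1.2.1"}}  # hypothetical
inventory = [
    {"host": "WS-042", "software": "ExampleServer", "version": "1.2.1"},
    {"host": "WS-043", "software": "ExampleServer", "version": "1.3.0"},
]

exposed = [
    asset["host"]
    for asset in inventory
    if asset["version"] in vulnerable_versions.get(asset["software"], set())
]
print("Potentially susceptible hosts:", exposed)  # -> ['WS-042']
```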

Forrester Senior Analyst Allie Mellen said in an email that Security Copilot marks the first time a product has focused on using AI to improve investigation and response; previously, AI in security tools tended to focus on detection.

In a demo, Kawaguchi showed how Security Copilot can look at a cyberattack and figure out how the malware got into the network and infected the victim's machine. Security Copilot can perform log analysis, display alert summaries, and generate graphs and other visualizations to show the sequence of events in the incident. The tool can also suggest remediation steps and guidance.
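One building block of that kind of log analysis is ordering raw events into an incident timeline. Here is a minimal Python sketch of that step, using fabricated log entries; it illustrates the general idea, not Security Copilot's internals.

```python
# Hypothetical sketch of the log-analysis step: sorting raw events into an
# incident timeline, the kind of sequence Security Copilot visualizes.
from datetime import datetime

events = [  # fabricated sample log entries
    ("2023-03-20T14:02:00", "Malicious attachment opened on WS-042"),
    ("2023-03-20T13:58:00", "Phishing email delivered to user inbox"),
    ("2023-03-20T14:05:00", "Outbound connection to unknown C2 address"),
]

timeline = sorted(events, key=lambda e: datetime.fromisoformat(e[0]))
for ts, description in timeline:
    print(ts, "-", description)
```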

If the information provided looks incorrect, or "off target," there is a way to provide feedback immediately so that the model can learn. Security Copilot will continually improve as it learns from the organization's own data, as well as Microsoft's extensive security research and visibility into the global threat landscape. However, the organization's data will never be used to train the main Security Copilot algorithm, Kawaguchi says.

Microsoft has been flexing its AI clout in recent weeks, going beyond Copilot in GitHub, which helps developers write code. A few weeks ago, the company announced Microsoft 365 Copilot, which lets users do things like put together a PowerPoint presentation and write up articles in Word. Copilot in Dynamics 365 helps sales and customer success representatives engage with customers and prospects by sending natural-sounding messages rather than relying on premade boilerplate.

Security Copilot is currently in private preview; Kawaguchi declined to specify when it will be generally available, noting that the timeline depends on how organizations wind up using the tool. There are also long-term plans to extend Security Copilot to ingest information from other security vendors' products.

“Security Copilot is poised to become the connective tissue for all Microsoft security products and, importantly, will integrate with third-party products as well,” Forrester’s Mellen said.

“We think that AI has the opportunity to change the game for almost all things that security folks do. We're starting with the investigation flow because this is the one where we think there's a massive amount of opportunity for efficiency and effectiveness,” Kawaguchi says.

About the Author

Fahmida Y. Rashid

Managing Editor, Features, Dark Reading

As Dark Reading’s managing editor for features, Fahmida Y. Rashid focuses on stories that provide security professionals with the information they need to do their jobs. She has spent over a decade analyzing news events and demystifying security technology for IT professionals and business managers. Prior to specializing in information security, Fahmida wrote about enterprise IT, especially networking, open source, and core internet infrastructure. Before becoming a journalist, she spent over 10 years as an IT professional -- and has experience as a network administrator, software developer, management consultant, and product manager. Her work has appeared in various business and tech trade publications, including VentureBeat, CSO Online, InfoWorld, eWEEK, CRN, PC Magazine, and Tom’s Guide.

