Microsoft’s ‘Security Copilot’ Unleashes ChatGPT on Breaches

The new tool aims to deliver the network insights and coordination that “AI” security systems have long promised.
Broken chain with padlock on red backdrop
Photograph: t_kimura/Getty Images

For years now, “artificial intelligence” has been a hot buzzword in the cybersecurity industry, promising tools that spot suspicious behavior on a network, quickly figure out what's going on, and guide incident response if there's an intrusion. The most credible and useful of services, though, have actually been machine learning algorithms trained to spot characteristics of malware and other dubious network activity. Now, as generative AI tools proliferate, Microsoft says it has finally built a service for defenders that's worthy of all the hype.

Two weeks ago, the company launched Microsoft 365 Copilot, which builds on a partnership with OpenAI along with Microsoft's own work on large language models. The company is now rolling out Security Copilot, a sort of security field notebook that integrates system data and network monitoring from security tools like Microsoft Sentinel and Defender and even third-party services. 

Security Copilot can surface alerts, map out in both words and charts what may be going on within a network, and provide steps for a potential investigation. As a human user works with Copilot to map out a possible security incident, the platform tracks history and generates summaries, so if colleagues get added to the project, they can quickly come up to speed and see what's been done so far. The system will also automatically produce slides and other presentation tools about an investigation to help security teams communicate the facts of a situation to people outside their department, and particularly executives who may not have security experience but need to stay informed.   

“Over the past few years, what we’ve seen is this absolute escalation in the frequency of attacks, in the sophistication of attacks, as well as in the intensity of attacks,” says Vasu Jakkal, Microsoft’s corporate vice president of security. “And there is not a lot of time for a defender to contain the escalation of an attack. The balance is right now shifted in the direction of attackers.”


Jakkal says that while machine learning security tools have been effective in specific domains, like monitoring email or activity on individual devices—known as endpoint security—Security Copilot brings all of those separate streams together and extrapolates a bigger picture. “With Security Copilot you can catch what others may have missed because it forms that connective tissue,” she says.

Security Copilot is largely powered by OpenAI's GPT-4, but Microsoft emphasizes that it also integrates a proprietary, security-specific model of Microsoft's own. The system tracks everything that's done during an investigation. The resulting record can be audited, and the materials it produces for distribution can all be edited for accuracy and clarity. If something Copilot is suggesting during an investigation is wrong or irrelevant, users can click the “Off Target” button to further train the system.

The platform offers access controls so particular projects can be shared with certain colleagues and not others, which is especially important for investigating possible insider threats. And Security Copilot provides a sort of backstop for 24/7 monitoring: even if someone with a specific skill set isn't working on a given shift or a given day, the system can offer basic analysis and suggestions to help plug gaps. For example, if a team wants to quickly analyze a script or software binary that may be malicious, Security Copilot can start that work and contextualize how the software has been behaving and what its goals may be.

Microsoft emphasizes that customer data is not shared with others and is “not used to train or enrich foundation AI models.” Microsoft does pride itself, though, on using “65 trillion daily signals” from its massive customer base around the world to inform its threat detection and defense products. But Jakkal and her colleague, Chang Kawaguchi, Microsoft's vice president and AI security architect, emphasize that Security Copilot is subject to the same data-sharing restrictions and regulations as any of the security products it integrates with. So if you already use Microsoft Sentinel or Defender, Security Copilot must comply with the privacy policies of those services.

Kawaguchi says that Security Copilot has been built to be as flexible and open-ended as possible, and that customer reactions will inform future feature additions and improvements. The system's usefulness will ultimately come down to how insightful and accurate it can be about each customer’s network and the threats they face. But Kawaguchi says that the most important thing is for defenders to start benefiting from generative AI as quickly as possible.

As he puts it: “We need to equip defenders with AI given that attackers are going to use it regardless of what we do.”