Jakkal says that while machine learning security tools have been effective in specific domains, like monitoring email or activity on individual devices, known as endpoint security, Security Copilot brings all of those separate streams together and extrapolates a bigger picture. “With Security Copilot you can catch what others may have missed because it forms that connective tissue,” she says.

Security Copilot is largely powered by OpenAI’s GPT-4, but Microsoft emphasizes that it also integrates a proprietary, security-specific Microsoft model. The system tracks everything that’s done during an investigation. The resulting record can be audited, and the materials it produces for distribution can all be edited for accuracy and clarity. If something Copilot suggests during an investigation is wrong or irrelevant, users can click the “Off Target” button to further train the system.

The system offers access controls, so particular projects can be shared with certain colleagues and not others, which is especially important for investigating possible insider threats. And Security Copilot serves as a sort of backstop for 24/7 monitoring. That way, even if someone with a specific skill set isn’t working on a given shift or a given day, the system can offer basic analysis and suggestions to help plug gaps. For example, if a team wants to quickly analyze a script or software binary that may be malicious, Security Copilot can start that work and contextualize how the software has been behaving and what its goals may be.

Microsoft emphasizes that customer data is not shared with others and is “not used to train or enrich foundation AI models.” Microsoft does pride itself, though, on using “65 trillion daily signals” from its massive customer base around the world to inform its threat detection and defense products. But Jakkal and her colleague Chang Kawaguchi, a Microsoft vice president and AI security architect, emphasize that Security Copilot is subject to the same data-sharing restrictions and regulations as any of the security products it integrates with. So if you already use Microsoft Sentinel or Defender, Security Copilot must comply with the privacy policies of those services.

Kawaguchi says that Security Copilot has been built to be as flexible and open-ended as possible, and that customer reactions will inform future feature additions and improvements. The system’s usefulness will ultimately come down to how insightful and accurate it can be about each customer’s network and the threats they face. But Kawaguchi says that the most important thing is for defenders to start benefiting from generative AI as quickly as possible.

As he puts it, “We need to equip defenders with AI given that attackers are going to use it regardless of what we do.”
