Google’s cloud unit has launched a platform integrating its threat intelligence and cybersecurity operations services with generative artificial intelligence.
The company, owned by Alphabet Inc., said Monday that it has combined existing services including its Mandiant cyber intelligence unit and Chronicle security operations platform with its Vertex AI infrastructure, and an AI model named Sec-PaLM, to create the Google Cloud Security AI Workbench.
The goal is to allow analysts to upload potentially harmful code to Sec-PaLM for analysis, receive breach alerts from Mandiant, and use an AI chat feature to interact with Google’s library of historical security data through Chronicle. This data includes information gathered from protecting Google’s own systems and those of Google Cloud customers, plus Mandiant’s data and other information gathered from widely used products such as Google’s Chrome browser.
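Google hasn’t published a programmatic interface for the Workbench, but the upload-analyze-ask workflow described above can be sketched in outline. The snippet below is a minimal illustration in Python; the base URL, endpoints, and field names are all invented for this example and do not reflect Google’s actual API.

```python
import requests  # third-party HTTP library

# NOTE: every URL, endpoint, and field below is hypothetical, invented
# to illustrate the workflow the article describes.
BASE_URL = "https://security-workbench.example.com/v1"
API_KEY = "YOUR_API_KEY"  # placeholder credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1. Upload a potentially harmful sample for analysis.
with open("suspicious_sample.bin", "rb") as f:
    upload = requests.post(
        f"{BASE_URL}/samples",
        headers=HEADERS,
        files={"file": f},
    )
sample_id = upload.json()["id"]

# 2. Ask a plain-language question about the sample; the model is meant
#    to answer without requiring a specialized query vocabulary.
answer = requests.post(
    f"{BASE_URL}/chat",
    headers=HEADERS,
    json={
        "sample_id": sample_id,
        "question": "How could this code be used to breach a system?",
    },
)
print(answer.json()["explanation"])
```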
The generative AI, developed by Google’s DeepMind unit, allows users to have conversations with the platform without having to learn specialized vocabularies, said Sunil Potti, vice president and general manager of security at Google Cloud. The AI will look at sample malware, determine ways hackers could breach a system, and produce explanations that can be read and understood quickly, he said.
“We do a lot of work around security for preserving our consumer space, as well as our enterprise customers, so we thought, can we do something in the world of generative AI-based applicability, but do it in a way that could be more than just a product?” Mr. Potti said.
The platform is also designed to be extensible, he said, which will allow other firms to plug in their data and help train the model. Consulting firm Accenture PLC has signed on as Google’s first partner, and Mr. Potti said he expects to add more over the summer.
Generative AI applications have garnered attention in recent months, from image engines that let users generate artwork to more powerful programs such as ChatGPT, which can produce working computer code, write essays, summarize large amounts of data and generate other text-based output.
These platforms, which typically respond to plain-text prompts from users who might not have any technical knowledge, have stirred controversy. Artists say some amount to intellectual property theft, and others worry they could threaten jobs performed by humans. ChatGPT has been temporarily banned in Italy while regulators study potential harms to privacy.
In cybersecurity, researchers have warned that these platforms could enable new waves of cybercrime by creating persuasive phishing emails that read as if they were written by humans, or by generating code for malware.
Security chiefs tend to be more skeptical about the immediate threat from these platforms. Code generated by AI isn’t at the level an expert human coder could produce, said Justin Shattuck, chief information security officer at cyber-insurance business Resilience. What’s more, he said, outputs are often unreliable and need to be thoroughly checked by humans.
“Simply put, I don’t trust it, yet,” Mr. Shattuck said.
Mr. Potti acknowledged that generative AI technology is still viewed with some mistrust by professionals. Google’s platform is still in the “curation” stage, he said, and will continue to improve as it learns from data. For now, he said, it will focus on what he termed low-risk, high-reward jobs, such as analyzing threat intelligence and writing server rules, which can be audited by humans.
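The “audited by humans” step lends itself to a simple gating pattern. The sketch below, with names invented for illustration, shows one way a team might hold AI-drafted rules in a review queue until an analyst signs off; it reflects the workflow Mr. Potti described, not any published Google code.

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop gate: AI-drafted rules wait in a
# review queue and deploy only after a human analyst approves them.
@dataclass
class DraftRule:
    name: str
    body: str                    # rule text, e.g. a firewall directive
    approved: bool = False
    reviewer: str | None = None

class ReviewQueue:
    def __init__(self) -> None:
        self.pending: list[DraftRule] = []
        self.deployed: list[DraftRule] = []

    def submit(self, rule: DraftRule) -> None:
        """AI output enters the queue; nothing ships automatically."""
        self.pending.append(rule)

    def approve(self, name: str, reviewer: str) -> None:
        """A human audits the named rule and releases it for deployment."""
        for rule in list(self.pending):
            if rule.name == name:
                rule.approved = True
                rule.reviewer = reviewer
                self.pending.remove(rule)
                self.deployed.append(rule)

# Usage: the model drafts a rule, a human reviews it, then it deploys.
queue = ReviewQueue()
queue.submit(DraftRule("block_suspicious_ips", "deny from 203.0.113.0/24"))
queue.approve("block_suspicious_ips", reviewer="analyst@example.com")
```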
But because Google’s platform was trained on Google’s security data and engineered specifically for cybersecurity, it is more effective than a general-purpose chatbot, he said.
“The easy answer is to slap on what we call a chat or a conversational interface, which is useful in itself. But that would be falling short,” Mr. Potti said.
Write to James Rundle at [email protected]
Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.