The ChatGPT AI bot has spurred speculation about how hackers might use it and similar tools to attack faster and more effectively, though so far the most damaging exploits have been demonstrated only in laboratory settings.

In its current form, the ChatGPT bot from OpenAI, an artificial-intelligence startup backed by billions of dollars from Microsoft Corp., is mainly trained to digest and generate text. For security chiefs, that means bot-written phishing emails might be more convincing than, for example, messages from a hacker whose first language isn’t English. 

Today’s ChatGPT is too unpredictable and susceptible to errors to be a reliable weapon itself, said Dustin Childs, head of threat awareness at Trend Micro Inc.’s Zero Day Initiative, the cybersecurity company’s software vulnerability-hunting program. “We’re years away from AI finding vulnerabilities and doing exploits all on its own,” Mr. Childs said.

Still, that won’t always be the case, he said. 

Two security researchers from cybersecurity company Claroty Ltd. said ChatGPT helped them win the Zero Day Initiative’s hacking contest in Miami last month.

Noam Moshe, a vulnerability researcher at Claroty, said the approach he and his partner took shows how a determined hacker can employ an AI bot. Generative AI—algorithms that create realistic text or images built on the training data they have consumed—can supplement hackers’ know-how, he said.

The goal of the three-day event, known as Pwn2Own, was to disrupt, break into and take over Internet of Things and industrial systems. Before arriving, contestants chose targets from Pwn2Own’s list, and then prepared tactics.  

Mr. Moshe and his partner found several potential weak points in their selected systems. They used ChatGPT to help write code that chained the bugs together, he said, saving hours of manual development. No single bug would have allowed the team to get very far, but exploiting them in sequence would. At the contest, Mr. Moshe and his partner succeeded all 10 times they tried, winning $123,000.

“A vulnerability on its own isn’t interesting, but when we look at the bigger picture and collect vulnerabilities, we can rebuild the chain to take over the system,” he said.  

OpenAI and other companies with generative AI bots are adding controls and filters to prevent abuse, such as the generation of racist or sexist outputs.

Some bad actors will likely try to get around any cybersecurity boundaries the bots are taught, said Christopher Whyte, an assistant professor of cybersecurity and homeland security at Virginia Commonwealth University. 

Rather than instructing a bot directly to write code that takes data from a computer without the user’s knowledge, a hacker could try to trick it into writing malicious code by formulating the request without obvious triggers, Mr. Whyte said.

It is similar to when a scammer uses persuasion to trick an office worker into revealing credentials or wiring money to fraudulent accounts, he said. “You steer the conversation to get the target to bypass controls,” he said.

Write to Kim S. Nash at [email protected]

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.

This post first appeared on wsj.com
