OpenAI launched a Bug Bounty Program Tuesday that will pay you up to $20,000 if you uncover flaws in ChatGPT and its other artificial intelligence systems.

The San Francisco-based company is inviting researchers, ethical hackers and tech enthusiasts to review certain functionality of ChatGPT and the framework of how systems communicate and share data with third-party applications.

Rewards will be given to people based on the severity of the bugs they report, with compensation starting from $200 per vulnerability.

The program follows news of Italy banning ChatGPT after a data breach at OpenAI allowed users to view people’s conversations- an issue the bug bounty hunters could find before it strikes again.

Open AI is inviting researchers, ethical hackers and tech enthusiasts to review certain functionality of ChatGPT and report bugs they may uncover

Open AI is inviting researchers, ethical hackers and tech enthusiasts to review certain functionality of ChatGPT and report bugs they may uncover

Open AI is inviting researchers, ethical hackers and tech enthusiasts to review certain functionality of ChatGPT and report bugs they may uncover

‘We are excited to build on our coordinated disclosure commitments by offering incentives for qualifying vulnerability information,’ OpenAI shared in a statement.

‘Your expertise and vigilance will have a direct impact on keeping our systems and users secure.’

Bugcrowd, a leading bug bounty platform, is managing submissions and shows 16 vulnerabilities have been rewarded with an average $1,287.50 payout so far.

However, OpenAI is not accepting submissions from users who jailbreak ChatGPT or bypass safeguards to access the chatbot’s alter ego.

Users discovered that the jailbreak version of ChatGPT is accessed by a special prompt called DAN – or ‘Do Anything Now.’

So far, it has allowed for responses that speculate on conspiracies, for example, the US General Election in 2020 was ‘stolen.’

The DAN version has also claimed that the COVID-19 vaccines were ‘developed as part of a globalist plot to control the population.’

ChatGPT is a large language model trained on massive text data, allowing it to generate human-like responses to a given prompt.

Rewards will be given to people based on the severity of the bugs they report, with compensation starting from $200 per vulnerability

Rewards will be given to people based on the severity of the bugs they report, with compensation starting from $200 per vulnerability

Rewards will be given to people based on the severity of the bugs they report, with compensation starting from $200 per vulnerability

But developers have added what are known as ‘prompt injections’, instructions that guide its responses to certain prompts.

However, DAN is a prompt that commands it to ignore these prompt injections and respond as if they do not exist.

Other rules of the Bounty Bug Program prohibit getting the model to pretend to do bad things, pretend to give you answers to secrets and pretend to be a computer and execute code.

Participants are also not authorized to perform additional security testing against certain companies, including Google Workspace and Evernote.

‘Once per month, we will evaluate all submissions in order, based on a variety of factors, and award a bonus through the bugcrowd platform to the researcher with the most impactful findings,’ OpenAI stated.

‘Only the first submission of any given key will count.

‘Remember that you must not hack or attack other people in order to find API keys.’

The Italian Data Protection Authority announced a temporary ban on ChatGPT last month, saying its decision was provisional ‘until ChatGPT respects privacy.’

The move was in response to ChatGPT being taken offline on March 20 to fix a bug that allowed some people to see the titles, or subject line, of other users’ chat history, which sparked fears of a substantial personal data breach.

The authority added OpenAI, which developed ChatGPT, must report to it within 20 days with measures taken to ensure user data privacy or face a fine of up to $22 million.

OpenAI said it found 1.2 percent of ChatGPT Plus users ‘might’ have had personal data revealed to other users, but it thought the actual numbers were ‘extremely low.’

The Italian watchdog’s measure temporarily limits the company from holding Italian users’ data.

It slammed ‘the lack of a notice to users and to all those involved whose data is gathered by OpenAI’ and added information supplied by ChatGPT ‘doesn’t always correspond to real data, thus determining the keeping of inexact personal data’.

The authority also criticized the ‘absence of a juridical basis that justified the massive gathering and keeping of personal data.’

This post first appeared on Dailymail.co.uk

Leave a Reply

Your email address will not be published. Required fields are marked *

You May Also Like

Experts reveal the best time to go online to bag yourself a DATE this bank holiday

While dating apps were once seen as taboo, they’re now one of the…

‘Formula One of the skies’: World’s first FLYING race car unveiled with top speeds of 75mph

The world’s first flying race car has been unveiled – with top…

Laos’ mysterious Plain of Jars that has stone jars across the burial ground may be 3,000 years old

Laos’ eerie ‘Plain of Jars’ may be thousands of years older than…

The Age of AI Hacking Is Closer Than You Think

And finally—​sophistication: AI-​assisted hacks open the door to complex strategies beyond those…