ChatGPT’s creator has confirmed that a bug in the system has allowed some users to snoop on other people’s chat histories.

OpenAI CEO Sam Altman confirmed last night that the company was experiencing a ‘significant issue’ that threatened the privacy of conversations on its platform.

The revelations came after several social media users shared ChatGPT conversations online that they had not taken part in.

As a result, users were blocked from viewing any chat history between 8am and 5pm (GMT) yesterday.

Mr Altman said: ‘We had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and we have just finished validating. A small percentage of users were able to see the titles of other users’ conversation history.’
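Mr Altman did not say which library was at fault or how the flaw worked. Purely as an illustration, and not a description of OpenAI’s actual code, the short Python sketch below shows one common class of bug that behaves this way: shared state, such as a cache or a pooled connection, that is not scoped to a single user, so data prepared for one account is served to another. Every name and value in it is invented for the example.

# Purely illustrative: a toy 'chat history' lookup whose cache is keyed on the
# request path alone, omitting the user ID. Any shared-state bug of this shape
# can serve one user's data (such as chat titles) to another user.
# This is NOT OpenAI's code or the actual library involved.

CHAT_TITLES = {
    "alice": ["Girl Chases Butterflies", "Books on human behaviour"],
    "bob": ["Boy Survives Solo Adventure"],
}

cache = {}  # shared across all users


def get_history(user_id: str, path: str = "/conversations") -> list[str]:
    # BUG: the cache key should include user_id, but it does not.
    if path in cache:
        return cache[path]
    titles = CHAT_TITLES[user_id]
    cache[path] = titles
    return titles


if __name__ == "__main__":
    print(get_history("alice"))  # Alice sees her own titles
    print(get_history("bob"))    # Bob is served Alice's cached titles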

On Monday it was confirmed that a ‘small percentage’ of ChatGPT users were able to view other people’s chat histories

ChatGPT fast facts – what you need to know 

  • It’s a chatbot built on a large language model which can output human-like text and understand complex queries
  • It launched on November 30, 2022
  • By January 2023, it had 100 million users – faster than TikTok or Instagram
  • The company behind it is OpenAI
  • OpenAI secured a $10 billion investment from Microsoft
  • Other ‘big tech’ companies have launched rivals of their own, such as Google’s Bard

OpenAI, the company behind ChatGPT, was founded in Silicon Valley in 2015 by a group of American investors including current CEO Sam Altman.

It is a large language model that has been trained on a massive amount of text data, allowing it to generate responses to a given prompt.

People across the world have used the platform to write human-like poems, texts and various other written works. 

However, this week a ‘small percentage’ of users found chat titles in their own conversation history that were not theirs.

On Monday, one person on Twitter warned others to ‘be careful’ with the chatbot after it showed them other people’s conversation topics.

An image of their list showed a number of titles including ‘Girl Chases Butterflies’, ‘Books on human behaviour’ and ‘Boy Survives Solo Adventure’, but it was unclear which of these were not theirs.   

They said: ‘If you use #ChatGPT be careful! There’s a risk of your chats being shared to other users! 

‘Today I was presented another user’s chat history. I couldn’t see contents, but could see their recent chats’ titles.’ 

OpenAI CEO Sam Altman confirmed ChatGPT experienced a ‘significant’ issue yesterday

Users were blocked from viewing any chat history between 8am and 5pm (GMT) yesterday

One person on Twitter warned others to ‘be careful’ of the chatbot which had shown them other people’s conversation topics

During the incident, the user added that they faced numerous network connectivity errors as well as ‘unable to load history’ errors.

According to the BBC, another user claimed they could see conversations written in Mandarin, as well as one titled ‘Chinese Socialism Development’.

Following this, ChatGPT functions were temporarily disabled as the company worked to fix the issue.

But this is not the first privacy concern to be raised about the online language model.

Last month, JP Morgan Chase joined companies such as Amazon and Accenture in restricting use of the AI chatbot among its roughly 250,000 staff over concerns about data privacy.

One of the major shared concerns was that data entered into the chatbot could be used by ChatGPT’s developers to enhance its algorithms, or that sensitive information could be accessed by engineers.

ChatGPT’s privacy policy states that it may use personal data surrounding ‘use of the services’ to ‘develop new programs and services’.

However, the policy also states that this personal information may be de-identified or aggregated before such analysis takes place.
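The policy does not spell out how that is done. As a purely hypothetical illustration, the Python sketch below shows one common approach: direct identifiers are replaced with salted hashes and records are reduced to aggregate counts before analysis. It is not a description of OpenAI’s actual pipeline, and every name and value in it is invented for the example.

import hashlib
from collections import Counter

# Hypothetical illustration only: replace direct identifiers with salted
# hashes (pseudonymisation), then keep only aggregate statistics.

SECRET_SALT = b"rotate-me-regularly"

def pseudonymise(user_id: str) -> str:
    """Replace a raw user ID with a salted hash."""
    return hashlib.sha256(SECRET_SALT + user_id.encode()).hexdigest()[:12]

raw_usage_log = [
    {"user_id": "alice@example.com", "feature": "chat"},
    {"user_id": "bob@example.com", "feature": "chat"},
    {"user_id": "alice@example.com", "feature": "export"},
]

# De-identify, then aggregate to per-feature counts with no identifiers at all.
deidentified = [{**row, "user_id": pseudonymise(row["user_id"])} for row in raw_usage_log]
feature_counts = Counter(row["feature"] for row in deidentified)

print(feature_counts)  # Counter({'chat': 2, 'export': 1})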

What is OpenAI’s chatbot ChatGPT and what is it used for?

OpenAI states that their ChatGPT model, trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF), can simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests.

Initial development involved human AI trainers providing the model with conversations in which they played both sides – the user and an AI assistant. The version of the bot available for public testing attempts to understand questions posed by users and responds with in-depth answers resembling human-written text in a conversational format.
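The reinforcement step is easier to see in miniature. The toy Python sketch below is a heavily simplified, hypothetical stand-in for that process: the ‘policy’ is just a softmax over four canned replies, the learned reward model is replaced by a hand-written scoring function, and a REINFORCE-style update nudges the policy towards higher-scoring answers. The real system fine-tunes a large neural network against a reward model learned from human preference rankings.

import numpy as np

# A deliberately tiny, hypothetical sketch of the reinforcement step in RLHF.
# The 'policy' is a softmax over four canned replies to the question
# 'What is the capital of France?', and the reward model is a stand-in.

rng = np.random.default_rng(0)
replies = ["I don't know.", "42!!!", "Paris is the capital of France.", "Ask someone else."]
logits = np.zeros(len(replies))  # policy parameters

def reward_model(reply: str) -> float:
    """Stand-in for a learned reward model: prefers the correct, helpful answer."""
    return 1.0 if "Paris" in reply else -0.2

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

learning_rate = 0.5
for step in range(200):
    probs = softmax(logits)
    action = rng.choice(len(replies), p=probs)   # sample a reply
    r = reward_model(replies[action])            # score it
    grad = -probs                                # REINFORCE gradient of log-probability
    grad[action] += 1.0
    logits += learning_rate * r * grad           # reinforce high-reward replies

print(replies[int(np.argmax(softmax(logits)))])  # -> "Paris is the capital of France."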

A tool like ChatGPT could be used in real-world applications such as digital marketing, online content creation, answering customer service queries or as some users have found, even to help debug code.
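For the debugging use case, a minimal sketch of how a developer might call such a model is shown below. It assumes the openai Python package with its 1.x interface and an API key in the OPENAI_API_KEY environment variable; model names and interfaces change over time, so treat it as a sketch rather than a definitive recipe.

# Minimal sketch of the 'help debug code' use case via OpenAI's API.
# Assumes the openai Python package (1.x interface) and OPENAI_API_KEY set.
from openai import OpenAI

buggy_snippet = """
def average(numbers):
    return sum(numbers) / len(numbers) + 1   # why is this off by one?
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful code reviewer."},
        {"role": "user", "content": f"Find the bug in this function:\n{buggy_snippet}"},
    ],
)
print(response.choices[0].message.content)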

The bot can respond to a large range of questions while imitating human speaking styles.


As with many AI-driven innovations, ChatGPT does not come without misgivings. OpenAI has acknowledged the tool’s tendency to respond with ‘plausible-sounding but incorrect or nonsensical answers’, an issue it considers challenging to fix.

AI technology can also perpetuate societal biases around race, gender and culture. Tech giants including Alphabet Inc’s Google and Amazon.com have previously acknowledged that some of their experimental AI projects were ‘ethically dicey’ and had limitations. At several companies, humans had to step in to fix problems the AI created.

Despite these concerns, AI research remains attractive. Venture capital investment in AI development and operations companies rose last year to nearly $13 billion, and $6 billion had poured in through October this year, according to data from PitchBook, a Seattle company tracking financings.

This post first appeared on Dailymail.co.uk
