OpenAI confirms ChatGPT bug leaked users’ personal data

OpenAI has provided more information about the ChatGPT bug that alarmed its users. A few days ago it emerged that a flaw in the ChatGPT software forced OpenAI to take the service offline.

The bug in question leaked conversation histories across accounts: where a logged-in user’s own previous chats should have appeared, the chat titles of other users were displayed instead.

However, what at first appeared to be a mistake without serious consequences has turned into a bigger headache. As OpenAI reported in recent days, personal user data was also exposed, including payment information that could be exploited for scams and phishing.

What did the ChatGPT failure consist of?

According to the first reports, a bug caused the leakage of different users’ chat histories. It affected approximately 1.2% of ChatGPT Plus subscribers, those who pay $20 per month to use the platform.

In response, the company took ChatGPT offline until the cause of the problem was found. ChatGPT was later re-enabled, but chat histories remained temporarily unavailable. OpenAI confirmed that the culprit was a bug in an open-source library it uses. The problem is that not only history data was leaked, but also sensitive information such as credit card details and personal data. This occurred before ChatGPT was taken offline, and the data was viewable for approximately nine hours.
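To make the failure mode concrete: bugs of this kind typically involve a pooled connection that is abandoned mid-request, so the next user who reuses the connection reads a reply that was meant for someone else. The sketch below is a toy illustration of that pattern, not OpenAI's actual code; the class and variable names are invented, and the "server" is simulated with an in-process queue.

```python
import queue


class FakeConnection:
    """Toy pipelined connection: replies queue up in send order.

    If a caller sends a request but never reads the reply (e.g. the
    request is cancelled), the stale reply stays on the wire and is
    read by whoever uses the connection next.
    """

    def __init__(self):
        self._replies = queue.Queue()

    def send_request(self, key):
        # The "server" answers every request, even abandoned ones.
        self._replies.put(f"data-for:{key}")

    def read_reply(self):
        return self._replies.get_nowait()


conn = FakeConnection()  # one pooled connection shared across users

# User A's request is sent, then cancelled before its reply is read.
conn.send_request("user_a_history")

# User B reuses the same connection from the pool.
conn.send_request("user_b_history")
reply_for_b = conn.read_reply()  # stale reply intended for user A
print(reply_for_b)  # → data-for:user_a_history
```

The fix in real client libraries is usually to discard (rather than reuse) any connection whose previous request was interrupted before the response was fully consumed.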

What was the data leaked by the ChatGPT bug?

According to OpenAI’s statement, the leaked data included some subscribers’ first and last names, email addresses, the expiration date and last four digits of the credit card on file, and the billing address associated with the account. This confirms that the leak was more serious than initially reported, although the company clarified that full card numbers were never exposed.

It was also reported that this personal information was not visible to all users: it could be accessed only if a user logged in and opened the subscription page during the incident. The other avenue of exposure was confirmation emails that the company sent to the wrong users, though recipients would still have had to take specific actions, because the information was not directly visible.

OpenAI also said it has contacted the users affected by the data breach and is taking the necessary measures to prevent a similar event from happening again. Additional user verification checks were added at login, and the company says it has hardened its infrastructure to better protect the personal information of everyone who uses the platform.

The popularity of artificial intelligence bots keeps growing, with millions of people using the OpenAI platform and entering all kinds of information daily. This ChatGPT failure is understandably a cause for concern, both because of the information that was exposed and because of the number of hours it took the company to act.
