OpenAI CEO 'feels awful' after ChatGPT leaks conversations, payment info

Delayed mea culpa isn't a good look for a biz with 'open' in the name

Updated OpenAI CEO Sam Altman feels "awful" about ChatGPT leaking some users' chat histories on Monday, and blamed an open source library bug for the snafu.

In a couple of tweets, Altman admitted the flaw, which allowed some users to see snippets of others' conversations with the question-and-response bot: not the full contents, but recent titles.

"We had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and we have just finished validating," Altman said.

"A small percentage of users were able to see the titles of other users' conversation history. We feel awful about this."

Because of the buggy code, ChatGPT users won't be able to access most of their March 20 conversations, he added.  

OpenAI also plans to follow up with a technical postmortem about the privacy breach, according to Altman. The formerly non-profit biz did not respond to The Register's inquiries about which open-source library contained the buggy code, and how many users were affected.

There's no word yet on when the postmortem will be published, either.

While users are understandably peeved about the conversation leaks, Kaspersky's lead data scientist told The Register that ChatGPT users should read the small print — and forget any illusion of privacy.

"ChatGPT warns on login that 'conversations may be reviewed by our AI trainers,'" Vlad Tushkanov said, noting that the web demo and the API for businesses use different interfaces. "So from the very beginning the users should have had zero expectation of privacy when using the ChatGPT web demo."

Kaspersky has the following advice, he added: "Treat any interaction with a chatbot (or any other service, for that matter) as a conversation with a complete stranger. You don't know where the content will end up, so refrain from revealing any personal or sensitive information about yourself or other people."

Meanwhile, look at the new toys!

On Thursday OpenAI announced the rollout of ChatGPT plugins, which connect the chatbot to third-party apps and allow it to do things like order food via Instacart on a user's behalf or book a flight on Expedia.

The plugins also allow ChatGPT to access real-time information, like stock prices and sports scores, or company documents stored on your device — if you trust the chatbot with those.

"You can install plugins to help with a wide variety of tasks," Altman tweeted. "We are excited to see what developers create!"

No doubt, the data thieves are, too. ®

Updated to add on March 24

OpenAI has emitted its technical postmortem report into the bug, which we're told was within an open-source Redis client library called redis-py. It may indeed be this programming error.

Essentially, OpenAI uses Redis to cache user data. The biz made a change to its systems at 0100 PT on March 20 that caused a spike in activity, and thanks to the client library fault, the cache started coughing up the wrong information. The data leak persisted from then until 1000 PT that day.

"This bug only appeared in the Asyncio redis-py client for Redis Cluster, and has now been fixed," OpenAI wrote.

It turns out more than just snippets of chat titles were leaked via this bug: some of people's subscription payment details were exposed to others, as possibly were their opening messages to the chatbot.

Part of ChatGPT's website features a log of one's conversations with the bot, and it was in that history sidebar that snatches of other people's chats were showing up.

OpenAI said:

We took ChatGPT offline earlier this week due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history. It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.

Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window. In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time.

That payment info was leaked via subscription confirmation emails going to the wrong people on March 20, or by users looking at their subscription payment details on the ChatGPT website and seeing a stranger's records during the time the bug was active. We've also heard that users were able to see others' names and email addresses on their own pages.

"We have reached out to notify affected users that their payment information may have been exposed. We are confident that there is no ongoing risk to users’ data," OpenAI added.
