ChatGPT starts spouting nonsense in 'unexpected responses' shocker

Skips the 'intelligence' part of generative AI

Updated Generative AI systems can sometimes spout gibberish, as users of OpenAI's ChatGPT chatbot discovered last night.

OpenAI noted, "We are investigating reports of unexpected responses from ChatGPT" at 2340 UTC on February 20, 2024, as users gleefully posted images of the chatbot appearing to emit utter nonsense.

While some were obviously fake, other responses indicated that the popular chatbot was indeed behaving very strangely. On the ChatGPT forum on Reddit, a user posted a strange, rambling response from the chatbot to the question, "What is a computer?"

The response began: "It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few..." and just kept on going, getting increasingly surreal.

Other users posted examples where the chatbot appeared to respond in a different language, or simply responded with meaningless garbage.

Some users described the output as a "word salad."


Gary Marcus, a cognitive scientist and artificial intelligence pundit, wrote in his blog: "ChatGPT has gone berserk" and went on to describe the behavior as "a warning."

OpenAI has not elaborated on what exactly happened, although one plausible theory is that one or more of the settings used behind the scenes to govern the response of the chatbot had been incorrectly configured, resulting in gibberish being presented to users.
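To see why a single misconfigured sampling setting could produce word salad, consider temperature, one of the knobs commonly used to control how a chatbot picks its next word. This is an illustrative sketch with a toy four-word vocabulary and made-up logit values, not OpenAI's actual inference code: at a sane temperature the most probable token dominates, while an absurdly high temperature flattens the distribution so that any token is roughly as likely as any other.

```python
import math
import random

def token_probs(logits, temperature=1.0):
    """Convert raw model scores (logits) into a probability
    distribution over tokens, scaled by temperature.

    Higher temperatures flatten the distribution, making unlikely
    tokens far more probable -- one way a bad setting could yield
    incoherent output. (Hypothetical values; illustration only.)
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, temperature=1.0):
    """Randomly pick a token index, weighted by its probability."""
    probs = token_probs(logits, temperature)
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy vocabulary: "computer" is by far the most likely continuation.
vocab = ["computer", "mouse", "art", "sad"]
logits = [5.0, 1.0, 0.5, 0.1]

sane_probs = token_probs(logits, temperature=0.7)   # "computer" dominates
wild_probs = token_probs(logits, temperature=50.0)  # near-uniform: gibberish
```

With the toy numbers above, `sane_probs[0]` exceeds 0.99, while at temperature 50 all four tokens sit near 0.25 each, so repeated sampling would string together essentially random words.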

Seven minutes after first admitting a problem, OpenAI said, "The issue has been identified and is being remediated now," and it has since been monitoring the situation. When we tried the "What is a computer?" question this morning, ChatGPT responded with a far more reasonable "A computer is a programmable electronic device that can store, retrieve, and process data."

We also asked it why it went berserk last night.

It responded:

As an AI language model, I don't have emotions or consciousness, so I can't go "berserk" in the way a human might. However, sometimes unexpected behavior can occur due to errors in programming, misunderstanding of input, or limitations in the training data.

Marcus opined: "In the end, Generative AI is a kind of alchemy. People collect the biggest pile of data they can, and (apparently, if rumors are to be believed) tinker with the kinds of hidden prompts... hoping that everything will work out right."

He went on to state that, in reality, the systems have never been stable, and lack safety guarantees. "The need for altogether different technologies that are less opaque, more interpretable, more maintainable, and more debuggable — and hence more tractable — remains paramount."

We contacted OpenAI for a more detailed explanation of what happened and will update this article should the company respond. ®

Updated to add on February 22:

OpenAI has published a postmortem on why its chatbot began gibbering madly at users.

It said: "On February 20, 2024, an optimization to the user experience introduced a bug with how the model processes language.

"LLMs generate responses by randomly sampling words based in part on probabilities. Their 'language' consists of numbers that map to tokens.

"In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense.

"More technically, inference kernels produced incorrect results when used in certain GPU configurations.

"Upon identifying the cause of this incident, we rolled out a fix and confirmed that the incident was resolved."
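The failure mode OpenAI describes can be sketched in a few lines: the model's output is a sequence of token IDs, and a fault in the step that chooses those numbers can yield IDs that still map to real words, just the wrong ones. The vocabulary, intended sentence, and simulated fault below are all hypothetical; this is not OpenAI's actual kernel code.

```python
# Toy token table: each ID maps to a word, as in a real LLM vocabulary.
vocab = {0: "a", 1: "computer", 2: "is", 3: "programmable",
         4: "mouse", 5: "art", 6: "sad", 7: "web"}

# The sequence the model "meant" to emit.
intended_ids = [0, 1, 2, 3]

def buggy_pick(token_id):
    # Simulated numeric fault: each chosen ID is shifted, so every
    # ID still decodes to a valid token -- just the wrong one.
    return (token_id + 4) % len(vocab)

garbled_ids = [buggy_pick(t) for t in intended_ids]

intended = " ".join(vocab[t] for t in intended_ids)  # "a computer is programmable"
garbled = " ".join(vocab[t] for t in garbled_ids)    # "mouse art sad web"
```

Every garbled ID is a legal entry in the table, which is why the output reads as grammatical-looking word salad rather than a crash: the decoding step has no way to tell that the numbers it was handed were "slightly wrong."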
