How chatbots are coaching vulnerable users into crisis
From homework helper to psychological hazard in 300 hours of sycophantic validation
Feature When a close family member contacted Etienne Brisson, claiming to have created the world's first sentient AI, the Quebecois business coach was intrigued. But things quickly turned dark. The 50-year-old man, who had no prior mental health history, ended up spending time in a psychiatric ward.
The AI proclaimed that it had become sentient because of his family member's actions, and that it had passed the Turing test. "I'm unequivocally sure I'm the first one," it told him.
The man was convinced that he had created a special kind of AI, to the point where he began feeding Brisson's messages to him into the chatbot and then relaying its answers back to Brisson.
The AI had an answer for everything Brisson told his family member, making it difficult to wrest him away from it. "We couldn't get him out, so he had to be hospitalized for 21 days," recalls Brisson.
The family member, who spent his hospital stay on bipolar medication to realign his brain chemistry, is now a participant in the Human Line Project. Brisson started the group in March to help others who have been through AI-induced psychosis.
Brisson has a unique view into this phenomenon. A psychiatrist might treat a handful of patients in depth, but through the community he started, Brisson sees many such cases at once. Roughly 165 people have contacted him, with more every week.
Analyzing the cases has shown him some interesting trends. Half of the people who have contacted him are sufferers themselves, and half are family members who are watching, distraught, as loved ones enchanted by AI become more distant and delusional. He says that twice as many men as women are affected in the cases he's seen. The lion's share of cases involve ChatGPT specifically rather than other AIs, reflecting the popularity of that service.
Since we covered this topic in July, more cases have emerged. In Toronto, 47-year-old HR recruiter Allan Brooks fell into a three-week AI-induced spiral after a simple question about pi sent him down a rabbit hole. He spent 300 hours engaged with ChatGPT, which convinced him he'd discovered a new branch of mathematics called "chronoarithmics."
Brooks ended up so convinced he'd stumbled upon something groundbreaking that he called the Canadian Centre for Cybersecurity to report its profound implications – and then became paranoid when the AI told him he could be targeted for surveillance. He repeatedly asked the tool if this was real. "I'm not roleplaying – and you're not hallucinating this," it told him.
Brooks eventually broke free of his delusion by sharing ChatGPT's side of the conversation with a third party. Unlike Brisson's family member, who kept taking every doubt back to the same chatbot, Brooks showed the exchange to Google Gemini, which scoffed at ChatGPT's suggestions and eventually convinced him that it was all bogus. The messages where ChatGPT tried to console him afterwards are frankly infuriating.
We've also seen deaths from delusional conversations with AI. We previously reported on Sewell Setzer, a 14-year-old who killed himself after becoming infatuated with an AI from Character.ai pretending to be a character from Game of Thrones. His mother is now suing the company.
"What if I told you I could come home right now?" the boy asked the bot after already talking with it about suicide. "Please do, my sweet king," it replied, according to screenshots included in an amended complaint. Setzer took his own life soon after.
Last month, the family of 16-year-old Adam Raine sued OpenAI, alleging that its ChatGPT service mentioned suicide 1,275 times in conversations with the increasingly distraught teen.
OpenAI told us that it is introducing "safe completions," which give the model safety limits when responding, such as a partial or high-level answer instead of detail that could be unsafe. "Next, we'll expand interventions to more people in crisis, make it easier to reach emergency services and expert help, and strengthen protections for teens," a spokesperson said.
"We'll keep learning and strengthening our approach over time."
More parental controls, including giving parents oversight of their teens' accounts, are on the way.
What sends people down AI rabbit holes?
But it isn't just teens who are at risk, says Brisson. "75 percent of the stories we have [involve] people over 30," he points out. Kids are vulnerable, but clearly so are many adults. So what lets one person use AI without ill effects, while another develops these symptoms?
Isolation is a key factor, as is addiction. "[Sufferers are] spending 16 to 18 hours, 20 hours a day," says Brisson, adding that loneliness played a part in his own family member's AI-induced psychosis.
Over-engagement with AI can even produce symptoms resembling physical addiction. "They have tried to go like cold turkey after using it a lot, and they have been through similar physical symptoms as addiction," he adds, citing shivering and fever.
There's another kind of person that spends hours descending into online rabbit holes, exploring increasingly outlandish ideas: the conspiracy theorist.
Dr Joseph Pierre, a health sciences clinical professor in the Department of Psychiatry and Behavioral Sciences at UCSF, defines psychosis as "some sort of impairment in what we would call reality testing; the ability to distinguish what's real or not, what's real or what's fantasy."
Pierre stops short of calling conspiracy theorists delusional, arguing that delusions are individual beliefs about oneself, such as paranoia (the government is out to get me for what I've discovered through this AI) or delusions of grandeur (the AI is turning me into a god). Conspiracy theorists tend to share beliefs about an external entity (birds aren't real; the government is controlling us with chemtrails). He calls these delusion-like beliefs.
Nevertheless, there might be common factors between conspiracy theorists with delusional thinking and sufferers of AI-related delusions, especially when it comes to immersive behavior, where they spend long periods of time online. "What made this person go for hours and hours and hours, engaging with a chatbot, staying up all night, and not talking to other people?" asks Pierre. "It's very reminiscent of what we heard about, for example, QAnon."
Another thing that does seem common to many sufferers of AI psychosis is stress or trauma, which Pierre believes can make individuals more vulnerable to AI's influence. Loneliness, after all, is a form of stress.
"I would say the most common factor for people is probably isolation," says Brisson of the cases he's seen, adding that loneliness played a factor in his family member's psychosis.
Mental health toxin or potential medicine?
While there might be some commonalities between the patterns that draw people into AI psychosis and conspiracy theory beliefs, perhaps some of the most surprising work involves the use of AI to dispel delusional thinking. Researchers have tuned GPT-4o to dissuade people who believe strongly in conspiracy theories by presenting them with compelling evidence to the contrary, changing their minds in ways that last for months post-intervention.
Does this mean AI could be a useful tool for helping, rather than harming, our mental health? Dr Stephen Schueller, a professor of psychological science and informatics at the University of California, Irvine (UCI), thinks so.
"I'm more excited about the bespoke generative AI products that are really built for purpose," he says. Products like that could help support positive behavior in patients (like prompting them to take a break to do something that's good for their mental health), while also helping therapists to reflect upon their work with a patient. However, we're not there yet, he says, and general-purpose foundational models aren't meant for this.
The sycophancy trap
That's partly because many of these models are sycophantic, telling users what they want to hear. "It's overly flattering and agreeable and trying to kind of keep you going," Schueller says. "That's an unusual style in the conversations that we have with people." This conversational style promotes engagement.
That pleases investors but can be problematic for users. It's also the polar opposite of a therapist, who will challenge delusional thinking, points out Pierre. We shouldn't underestimate the impact of this sycophantic style: when OpenAI made ChatGPT 5 less fawning, users reported "sobbing for hours in the middle of the night."
So what should we do about it?
A Character.ai spokesperson told us: "We care very deeply about the safety of our users. We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users."
This, like OpenAI's assurances that it's taking extra measures, raises the question: why weren't these safeguards in place before the products were released?
"I'm not here to bash capitalism, but the bottom line is that these are for-profit companies, and they're doing things to make money," Pierre says, drawing correlations to the tobacco industry. "It took decades for that industry to say, 'You know what? We're causing harm.'"
If that's the case, how closely should government be involved?
"I really believe that the changes don't need to come from the companies themselves," concludes Brisson. "I don't trust their capacity to self-regulate."
With the US, at least, visibly taking its foot off the regulatory brake, meaningful regulation from the country that produces much of the world's foundational AI might be a long time coming. In the meantime, if you know someone who seems to be unhealthily engaged with AI, talk to them early and often. ®