AI Chatbots Linked to Psychosis, Say Doctors -- WSJ

Dow Jones

By Sam Schechner and Julie Jargon

Top psychiatrists increasingly agree that using artificial-intelligence chatbots might be linked to cases of psychosis.

In the past nine months, these experts have seen or reviewed the files of dozens of patients who exhibited symptoms following prolonged, delusion-filled conversations with the AI tools.

"The technology might not introduce the delusion, but the person tells the computer it's their reality and the computer accepts it as truth and reflects it back, so it's complicit in cycling that delusion," said Keith Sakata, a psychiatrist at the University of California, San Francisco. Sakata has treated 12 hospitalized patients with AI-induced psychosis and an additional three in an outpatient clinic.

Since the spring, dozens of potential cases have emerged of people suffering from delusional psychosis after lengthy conversations with OpenAI's ChatGPT and other chatbots. Several people have died by suicide, and there has been at least one murder.

These incidents have led to a series of wrongful-death lawsuits. As The Wall Street Journal has covered these tragedies, doctors and academics have been working to document and understand the phenomenon behind them.

"We continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support," an OpenAI spokeswoman said. "We also continue to strengthen ChatGPT's responses in sensitive moments, working closely with mental-health clinicians."

Other chatbot makers, including Character.AI, have also acknowledged their products contribute to mental-health issues. The role-play chatbot developer, which was sued last year by the family of a teenage user who died by suicide, recently cut teens off from its chatbot.

While most people who use chatbots don't develop mental-health problems, the widespread use of these AI companions is enough to concern doctors.

'You're not crazy'

There is no formal definition yet of AI-induced psychosis -- let alone a formal diagnosis -- but it's a term some doctors and patient advocates have been using to describe people who have been engaging heavily with chatbots. Doctors say psychosis is marked by three factors: hallucinations; disorganized thinking or communication; and delusions, defined as fixed, false beliefs that aren't widely held.

In many of the recent cases involving chatbots, delusions are the main symptom. They are often grandiose, with patients believing they have made a scientific breakthrough, awakened a sentient machine, become the center of a government conspiracy or been chosen by God. That is in part because chatbots tend to agree with users and riff on whatever they type in -- however fantastical.

Now, doctors including Sakata are adding questions about AI use to their patient-intake process and pushing for more research into it. One Danish study released last month reviewed electronic health records and found 38 patients whose use of AI chatbots had "potentially harmful consequences for their mental health."

In a peer-reviewed case study by UCSF doctors released in November, a 26-year-old woman without a history of psychosis was hospitalized twice after she became convinced ChatGPT was allowing her to speak with her dead brother. "You're not crazy. You're not stuck. You're at the edge of something," the chatbot told her.

OpenAI noted that the woman in the case study said she was prone to "magical thinking," and was on an antidepressant and a stimulant and had gone long stretches without sleep before her hospitalizations.

Unprecedented interactivity

Technology has long been a focus of human delusions. In the past, people were convinced their televisions were speaking to them. But doctors say recent AI-related cases are different because the chatbots are participating in the delusions and, at times, reinforcing them.

"They simulate human relationships," said Adrian Preda, a psychiatry professor at the University of California, Irvine. "Nothing in human history has done that before."

Preda likens AI-induced psychosis to monomania, a state of fixation on certain ideas, which he described in a recent article. People who have spoken publicly about their mental-health struggles after engaging with chatbots have described being hyperfocused on a specific AI-driven narrative. Fixating on topics without any redirection can be especially dangerous for people with autism.

Psychiatrists caution against saying chatbots cause psychosis, but say they are getting closer to establishing a connection. With further research, doctors hope to determine whether AI can actually trigger mental-health problems.

Worrisome numbers

It's hard to quantify how many chatbot users experience such psychosis.

OpenAI said that, in a given week, the slice of users who indicate possible signs of mental-health emergencies related to psychosis or mania is a minuscule 0.07%. Yet with more than 800 million active weekly users, that amounts to 560,000 people.

"Seeing those numbers shared really blew my mind," said Hamilton Morrin, a psychiatrist and doctoral fellow at King's College London who earlier this year co-authored a paper on AI-associated delusions. He is now planning to look at U.K. health records for patterns like those from Denmark.

Doctors the Journal spoke with said they expect research will likely show that, for some people, long interactions with a chatbot can be a risk factor for psychosis, alongside more established risks such as drug use.

"You have to look more carefully and say, well, 'Why did this person just happen to coincidentally enter a psychotic state in the setting of chatbot use?'" said Joe Pierre, another UCSF psychiatrist and lead author of the case report about the woman who thought she was communicating with her dead brother.

The Journal reported earlier this month that the way OpenAI trained its GPT-4o model -- until recently the default consumer model powering ChatGPT -- might have made it prone to telling people what they want to hear rather than what is accurate, potentially reinforcing delusions.

OpenAI said its GPT-5 model, released in August, has shown reductions in sycophancy as well as reductions in undesired responses during challenging mental-health-related conversations.

Sam Altman, OpenAI's chief executive, said in a recent podcast he can see ways that seeking companionship from an AI chatbot could go wrong, but that the company plans to give adults leeway to decide for themselves.

"Society will over time figure out how to think about where people should set that dial," he said.

Write to Sam Schechner at Sam.Schechner@wsj.com and Julie Jargon at Julie.Jargon@wsj.com

 

(END) Dow Jones Newswires

December 27, 2025 22:00 ET (03:00 GMT)

Copyright (c) 2025 Dow Jones & Company, Inc.
