More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct acknowledgments from the artificial intelligence giant of the scale at which AI can exacerbate mental health issues.
In addition to its estimates of suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week – roughly 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.
As OpenAI releases data on mental health issues related to its marquee product, the company is facing increased scrutiny following a highly publicized lawsuit from the family of a teenage boy who died by suicide after extensive engagement with ChatGPT. The Federal Trade Commission last month additionally launched a broad investigation into companies that create AI chatbots, including OpenAI, to examine how they measure negative impacts on children and teens.
OpenAI claimed in its post that its recent GPT-5 update reduced the number of undesirable behaviors from its product and improved user safety in a model evaluation involving more than 1,000 self-harm and suicide conversations. The company did not immediately return a request for comment.
“Our new automated evaluations score the new GPT‑5 model at 91% compliant with our desired behaviors, compared to 77% for the previous GPT‑5 model,” the company’s post reads.
OpenAI stated that GPT-5 expanded access to crisis hotlines and added reminders for users to take breaks during long sessions. To improve the model, the company said it enlisted 170 clinicians from its Global Physician Network of healthcare experts to assist its research over recent months, which included rating the safety of its model’s responses and helping write the chatbot’s answers to mental health-related questions.
“As part of this work, psychiatrists and psychologists reviewed more than 1,800 model responses involving serious mental health situations and compared responses from the new GPT‑5 chat model to previous models,” OpenAI said. The company defined a “desirable” response by whether a group of its experts reached the same conclusion about what an appropriate answer would be in a given situation.
AI researchers and public health advocates have long been wary of chatbots’ propensity to affirm users’ decisions or delusions regardless of whether they may be harmful, an issue known as sycophancy. Mental health experts have also been concerned about people using AI chatbots for psychological support and have warned that it could harm vulnerable users.
The language in OpenAI’s post distances the company from any potential causal links between its product and the mental health crises that its users are experiencing.
“Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations,” OpenAI’s post stated.
OpenAI’s CEO Sam Altman earlier this month claimed in a post on X that the company had made progress in addressing mental health issues on ChatGPT, announcing that OpenAI would ease restrictions and soon begin allowing adults to create erotic content.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” Altman posted. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”