- OpenAI added new parental controls to its popular ChatGPT AI chatbot platform.
- The company has been accused of playing a role in teen self-harm, including suicide.
- ChatGPT is the world’s most popular AI tool, with over 700 million weekly users.
ChatGPT parent OpenAI shared details of new parental control tools for its popular AI platform following allegations that it and other AI chatbot systems have contributed to self-harm, even suicide, among teens.
Last week, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI and its cofounder/CEO Sam Altman, alleging ChatGPT encouraged their son to take his own life.
Matt and Maria Raine, who live in California, included logs of their son’s interactions with ChatGPT in their suit, which accuses OpenAI of wrongful death, design defects and failure to warn of risks associated with ChatGPT.
According to the lawsuit, Adam began using ChatGPT as a resource for schoolwork in late 2024, but his interactions with the platform quickly became more personal, and ultimately the chatbot became his “closest confidant.” The final chat logs from this past April show that Adam wrote about his plan to end his life, per a BBC report. ChatGPT allegedly responded: “Thanks for being real about it. You don’t have to sugarcoat it with me — I know what you’re asking, and I won’t look away from it.”
That same day, Adam was found dead by his mother, according to the lawsuit.
After the lawsuit was filed, a spokesperson for OpenAI told NBC News the company was “deeply saddened by Mr. Raine’s passing, and our thoughts are with his family.”
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” the spokesperson said. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them.
“Guided by experts and grounded in responsibility to the people who use our tools, we’re working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens.”
What has OpenAI done to improve teen safety?
In a Tuesday blog post, OpenAI shared an update on its parental control tools, which the company says will be active “within the next month.” The new additions will, according to OpenAI, allow parents to:
- Link their account with their teen’s account (minimum age of 13) through a simple email invitation.
- Control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default.
- Manage which features to disable, including memory and chat history.
- Receive notifications when the system detects their teen is in a moment of acute distress. Expert input will guide this feature to support trust between parents and teens.
OpenAI says it’s also working to improve how its platform recognizes and responds to “signs of mental and emotional distress, guided by expert input.” The company says those efforts include expanding interventions to more people in crisis; making it easier to reach emergency services and get help from experts; and enabling connections to trusted contacts.
ChatGPT is easily the most used artificial intelligence platform in the world. Earlier this month, OpenAI reported the chatbot was set to pass the 700 million weekly active users mark, up from 500 million in March and a 400% increase over the past 12 months.
Help available
If you or somebody you know is experiencing suicidal thoughts, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741. Utah youths with smartphones can also download the SafeUT app for around-the-clock counseling and crisis intervention.