OpenAI announced plans on Tuesday to relax restrictions on its ChatGPT chatbot, including allowing erotic content for verified adult users as part of what the company calls a “treat adult users like adults” principle.
OpenAI’s plan includes the release of an updated version of ChatGPT that will allow users to customize their AI assistant’s personality, including options for more human-like responses, heavy emoji use, or friend-like behavior. The most significant change will come in December, when OpenAI plans to roll out more comprehensive age-gating that would permit erotic content for adults who have verified their ages. OpenAI did not immediately provide details on its age verification methods or additional safeguards planned for adult content.
The company launched a dedicated ChatGPT experience for under-18 users in September, with automatic redirection to age-appropriate content that blocks graphic and sexual material.
It also said it was developing behavior-based age prediction technology that estimates whether a user is over or under 18 based on how they interact with ChatGPT.
In a post on X, Sam Altman, the CEO of OpenAI, said that stricter guardrails on conversational AI to address mental health concerns had made its chatbot “less useful/enjoyable to many users who had no mental health problems”.
The stricter safety controls came after Adam Raine, a California teenager, died by suicide earlier this year; his parents filed a lawsuit in August claiming ChatGPT provided him with specific advice on how to kill himself. Just two months later, Altman said the company had “been able to mitigate the serious mental health issues”.
The US Federal Trade Commission had also launched an inquiry into several tech companies, including OpenAI, over the potential negative effects of AI chatbots on children and teenagers.
“Given the seriousness of the issue we wanted to get this right,” Altman said on Tuesday, arguing that OpenAI’s new safety tools now allow the company to ease restrictions while still addressing serious mental health risks.