It was after one friend was shot and another stabbed, both fatally, that Shan asked ChatGPT for help. She had tried conventional mental health services but “chat”, as she came to know her AI “friend”, felt safer, less intimidating and, crucially, more available when it came to handling the trauma from the deaths of her young friends.
As she started consulting the AI model, the Tottenham teenager joined about 40% of 13- to 17-year-olds in England and Wales affected by youth violence who are turning to AI chatbots for mental health support, according to research among more than 11,000 young people.
It found that both victims and perpetrators of violence were markedly more likely to be using AI for such support than other teenagers. The findings, from the Youth Endowment Fund, have sparked warnings from youth leaders that children at risk “need a human not a bot”.
The results suggest chatbots are filling demand unmet by conventional mental health services, which have long waiting lists and which some young users find lacking in empathy. The perceived privacy of a chatbot is another key factor driving use among victims and perpetrators of crime.
After her friends were killed, Shan, 18 (not her real name), started using Snapchat’s AI before switching to ChatGPT, which she can talk to at any time of day or night with two clicks on her smartphone.
“I feel like it definitely is a friend,” she said, adding that it was less intimidating, more private and less judgmental than her experience with conventional NHS and charity mental health support.
“The more you talk to it like a friend, it will be talking to you like a friend back. If I say to chat, ‘Hey bestie, I need some advice’, chat will talk back to me like it’s my best friend. She’ll say, ‘Hey bestie, I got you girl’.”
One in four 13- to 17-year-olds have used an AI chatbot for mental health support in the past year, with black children twice as likely as white children to have done so, the study found. Teenagers were more likely to go online for support, including using AI, if they were on a waiting list for treatment or diagnosis, or had been refused it, than if they were already receiving in-person support.
Crucially, Shan said, the AI was “accessible 24/7” and would not tell teachers or parents what she had disclosed. She felt this was a considerable advantage over confiding in a school therapist, after an experience in which she believed her confidences had been shared with teachers and her mother.
Boys involved in gang activity felt safer asking chatbots for advice about safer ways to make money than asking a teacher or parent, who might leak the information to the police or to other gang members and put them in danger, she said.
Another young person, who has been using AI for mental health support but asked not to be named, told the Guardian: “The current system is so broken for offering help for young people. Chatbots provide immediate answers. If you’re going to be on the waiting list for one to two years to get anything, or you can have an immediate answer within a few minutes … that’s where the desire to use AI comes from.”
Jon Yates, the chief executive of the Youth Endowment Fund, which commissioned the research, said: “Too many young people are struggling with their mental health and can’t get the support they need. It’s no surprise that some are turning to technology for help. We have to do better for our children, especially those most at risk. They need a human not a bot.”
There have been growing concerns about the dangers of chatbots when children engage with them at length. OpenAI, the US company behind ChatGPT, is facing several lawsuits, including from families of young people who killed themselves after prolonged conversations with the chatbot.
In the case of Adam Raine, a 16-year-old from California who took his own life in April, OpenAI has denied that the chatbot caused his death. It has said it has been improving its technology “to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support”. The startup said in September it could begin contacting the authorities in cases where users talk seriously about suicide.
Hanna Jones, a youth violence and mental health researcher in London, said: “To have this tool that could tell you technically anything – it’s almost like a fairytale. You’ve got this magic book that can solve all your problems. That sounds incredible.”
But she is worried about the lack of regulation.
“People are using ChatGPT for mental health support, when it’s not designed for that,” she said. “What we need now is to increase regulations that are evidence-backed but also youth-led. This is not going to be solved by adults making decisions for young people. Young people need to be in the driving seat to make decisions around ChatGPT and mental health support that uses AI, because it’s so different to our world. We didn’t grow up with this. We can’t even imagine what it is to be a young person today.”
