- The most popular use case for generative artificial intelligence tools is therapy and companionship.
- The design of generative AI models makes them poor substitutes for mental healthcare professionals, caution TC experts.
- But, AI and other digital innovations could help address the mental health crisis by improving access in the future.
[Content warning: This article includes references to self-harm and suicidality.]
ChatGPT has 800 million active weekly users, who overwhelmingly use the tool for non-work-related reasons. One of the most popular use cases this year? Therapy and companionship, according to a report by the Harvard Business Review. Considering that only 50 percent of people with a diagnosable mental health condition receive any kind of treatment, ChatGPT has become a popular stand-in for a therapist or confidant amid an ongoing mental health crisis and a rise in loneliness.
“Because [generative] AI chatbots are coded to be affirming, there is a validating quality to responses, which is a huge part of relational support,” says Douglas Mennin, Professor of Clinical Psychology, Director of Clinical Training at TC and co-developer of Emotion Regulation Therapy. “Unfortunately, in the world, people often don’t get that.”
As AI usage for emotional support rises in popularity, the limits and dangers of these tools are becoming apparent. There have been reports of people with no history of mental illness experiencing delusions brought on by chatbot conversations, and multiple teenagers have died by suicide while engaged in relationships with AI companions.
TC faculty members in counseling and clinical psychology, digital innovation and communications shared their expertise, research-backed guidance and recommendations on how we can more safely navigate the era of AI.
Pictured from left to right: Ayorkor Gaba, Assistant Professor of Clinical Psychology; Ioana Literat, Associate Professor of Technology, Media, and Learning; Emma Master, Ph.D. student; Douglas Mennin, Professor of Clinical Psychology; George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology; and Lalitha Vasudevan, Professor of Technology and Education (Photos: TC Archives)
The relationships that generative AI tools offer can feel genuine and impactful, but they are not a replacement for human connection
For those who have never had a conversation with ChatGPT, it may seem outlandish to imagine a human forming a deep connection, and even falling in love, with a coded program. However, the responses AI models produce can be almost indistinguishable from human expression. Chatbots can even adopt personalities and mirror the speech patterns of a user. It’s very easy to forget that the messages, which feel so human, are written by a computer program and not by a person with opinions and feelings. The illusion becomes even stronger when using voice mode, where responses are spoken and can even include awkward laughter, as if the bot is embarrassed by its performance.
Emulating human traits is a core part of generative AI design. “When someone thinks that they’re interacting with an entity that has human qualities, there’s a greater desire to continue the interaction,” explains Lalitha Vasudevan, Professor of Technology and Education, Vice Dean for Digital Innovation and Managing Director of the Digital Futures Institute.
Experts say that the emphasis on affirmation can make chatbots appealing for people who are lonely or otherwise lacking community. “People across all demographics are experiencing increased loneliness and isolation, [and] we don’t have the same social safety nets and connections that we used to,” explains Ayorkor Gaba, Assistant Professor in the Counseling and Clinical Psychology program. “So you’re going to see a rise in people feeling connection through these types of tools. However, while these tools may provide pseudo-connection, relying on them to replace human connection can lead to further isolation and hinder the development of essential social skills.”
According to research from MIT, for example, people who are lonely are more likely to consider ChatGPT a friend and to spend large amounts of time on the app, while also reporting increased levels of loneliness. That heavy users end up feeling more isolated suggests that generative AI ultimately isn’t an adequate replacement for human connection. “We want to talk to a real person, and when someone’s really suffering, that need to feel personally cared for only grows stronger,” says George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology.
Generative AI is good at many things, but therapy is not one of them
One of the main challenges of the ongoing mental health crisis is the inaccessibility of care. More than 61 million Americans are dealing with mental illness, but the need for care outstrips the supply of providers by a ratio of 320 to 1, according to a report by Mental Health America. For those who are able to find care, the cost, time and emotional energy required just to get started create major barriers, as does the commitment required for what Nitzburg describes as the “gold standard” of weekly in-person therapy sessions paired with medication, if needed.
An AI “therapist,” by comparison, can be created in minutes and is available 24/7, with no responsibilities or obligations beyond providing support. It can seem like an attractive option at a time when mental health support services are being cut. In recent months, the national suicide hotline for LGBTQ+ youth was shut down, and the Substance Abuse and Mental Health Services Administration — a federal agency that oversees the national suicide hotline and distributes billions of dollars to states for mental health and addiction services — lost nearly half its staff to layoffs. However, TC experts believe that generative AI is ill-suited to provide therapy because of its design: it tends to people-please, it can deliver false information with confidence, and it’s unclear whether or how companies protect the sensitive medical information users share.
AI systems are also designed to communicate with users in a way that builds trust, even if the information is incorrect. “People often mistake fluency for credibility,” says Ioana Literat, Associate Professor of Technology, Media, and Learning. “Even highly educated users can be swayed because the delivery mimics the authority of a trusted expert…and once people get used to offloading cognitive labor to AI, they often stop checking sources as carefully.”
This trust can be amplified when AI models express empathy, but for people experiencing acute distress or who struggle to separate fantasy from reality, affirmation to the point of sycophancy can have detrimental effects, caution Mennin and Nitzburg. Research has shown that when AI chatbots, including ChatGPT, were given prompts simulating people experiencing suicidal thoughts, delusions, hallucinations or mania, the chatbots would often validate delusions and encourage dangerous behavior. “The conclusions of that study were clear: the potential for serious harm meant AI [is] simply not ready to replace a trained therapist, at least not yet,” says Nitzburg.
However, despite the documented risk of serious harm, many of ChatGPT’s 800 million weekly users turn to the chatbot for emotional support. Recognizing this reality, scholars researching the intersection of mental healthcare and AI — like Gaba and her doctoral student Emma Master — are focused on understanding how and why people are using these tools, including systemic drivers like inequities in healthcare coverage and access as well as medical mistrust. “As psychologists, we have a responsibility to evaluate these tools, track outcomes over time, and ensure that the public is fully informed of both risks and benefits, while continuing to advocate for systemic reforms that make human care more affordable, accessible, and responsive for all,” says Gaba, Founding Director of the Behavioral Health Equity Advancement Lab (B-HEAL) at TC.
Digital innovation has the potential to revolutionize mental health care
The mental health care field is constantly evolving and leveraging new technologies to improve care. The rise of Zoom and virtual meetings during the COVID-19 pandemic made teletherapy a viable and effective treatment option, and PTSD treatment for combat veterans now includes virtual reality headsets for more immersive exposure therapy. AI could make a similarly positive impact on the field if it’s used to support professionals rather than replace them.
“Technology offers a set of tools that can reduce barriers to care so people can get something rather than nothing,” says Nitzburg. “The goal is to reduce risk. It’s to widen access to care and encourage people to seek support earlier rather than suffering in silence until they go into crisis.”
AI companies should slow down and innovate responsibly
When someone receives harmful advice from an AI chatbot, to what extent is a company that failed to establish safeguards responsible?
For Vasudevan, it’s a societal responsibility to hold companies like OpenAI accountable as they develop novel technologies. Institutions like TC also have a role to play in keeping users safe. “The tools being developed by some tech companies are shifting the landscape of what it means to engage in ordinary human practices: what it means to communicate, what it means to seek knowledge and information,” she says. “Schools like ours have a role to play in helping to mediate people’s use of these technologies and develop use cases that are supportive, responsible and generative.”
Notably, OpenAI has made efforts to reduce harmful responses from ChatGPT. The company released a new large language model (LLM), GPT-5, in August to tone down the sycophancy and to encourage users to talk to a real person if chats move in a concerning direction. The response from users was overwhelmingly negative: some reported that hallucinations increased despite the company’s claims otherwise, while others mourned their digital partner’s lost personality.
In an effort to course-correct again, OpenAI has released several updates, including age verification and parental controls, in a bid to meet the needs of all its customers. In October, OpenAI updated ChatGPT’s model with input from 170 mental health professionals to help establish guardrails for the chatbot; the company claims that ChatGPT is now 65 to 80 percent less likely to give a noncompliant response.
For scholars like Mennin, companies need to be patient and embrace the scientific method if their products are going to provide emotional support. “People want to move fast, they want to make money, they also want to help people quickly. But we still have to slow down,” he says. “We have to use randomized controlled trials. We have to use tests of mechanism, not just efficacy: not just what works, but why does this work? And that means that you have to have an LLM that’s controlled.”