As chatbots powered by artificial intelligence explode in popularity, experts are warning people against turning to the technology for medical or mental health advice instead of relying upon human health care providers.
There have been a number of examples in recent weeks of chatbot advice misleading people in harmful ways. A 60-year-old man accidentally poisoned himself and entered a psychotic state after ChatGPT suggested he eliminate salt, or sodium chloride, from his diet and replace it with sodium bromide, a toxic compound used, among other things, to treat wastewater. Earlier this month, a study from the Center for Countering Digital Hate revealed ChatGPT gave teens dangerous advice about drugs, alcohol and suicide.
The technology can be tempting, especially with barriers to accessing health care, including cost, wait times to talk to a provider and lack of insurance coverage. But experts told PBS News that chatbots are unable to offer advice tailored to a patient’s specific needs and medical history and are prone to “hallucinations,” or giving outright incorrect information.
Here is what to know about the use of AI chatbots for health advice, according to mental health and medical professionals who spoke to PBS News.
How are people using AI chatbots for medical and mental health advice?
People are typically turning to commercially available chatbots, such as OpenAI’s ChatGPT or Luka’s Replika, said Vaile Wright, senior director of Health Care Innovation at the American Psychological Association.
People may ask questions as far-ranging as how to quit smoking, address interpersonal violence, confront suicidal ideation or treat a headache. More than half of teens said they used AI chatbot platforms multiple times each month, according to a survey produced by Common Sense Media. That report also mentioned that roughly a third of teens said they turned to AI companions for social interaction, including role-playing, romantic relationships, friendship and practicing conversation skills.
READ MORE: Analysis: AI in health care could save lives and money — but not yet
But the business model for these chatbots is to keep users engaged “as long as possible,” rather than give trustworthy advice in vulnerable moments, Wright said.
“Unfortunately, none of these products were built for that purpose,” she said. “The products that are on the market, in some ways, are really antithetical to therapy because they are built in a way that their coding is basically addictive.”
Often, a bot reflects the emotions of the human engaging with it in ways that are sycophantic and could “mishandle really critical moments,” said Dr. Tiffany Munzer, a developmental behavioral pediatrician at the University of Michigan Medical School.
“If you’re in a sadder state, the chatbot might reflect more of that emotion,” Munzer said. “The emotional tone tends to match and it agrees with the user. It can make it harder to provide advice that is contrary to what the user wants to hear.”
What are the risks of using AI for health advice?
Asking AI chatbots health questions instead of asking a health care provider comes with several risks, said Dr. Margaret Lozovatsky, chief medical information officer for the American Medical Association.
Chatbots can sometimes give “a quick answer to a question that somebody has and they may not have the ability to contact their physicians,” Lozovatsky said. “That being said, the quick answer may not be accurate.”
- Chatbots do not know a person’s medical history. Chatbots are not your physician or nurse practitioner and cannot access your medical history or provide tailored insights. Chatbots are built on machine-learning algorithms. That means they generally produce the most likely response to a question based on an ever-broadening diet of information harvested from the wilds of the internet.
- Hallucinations happen. Response quality is improving with AI chatbots, but hallucinations still occur and could be deadly in some cases. Lozovatsky recalled an example from a few years ago, when she asked a chatbot, “How do you treat a UTI (urinary tract infection)?” and the bot responded, “Drink urine.” While she and her colleagues laughed, Lozovatsky said this kind of hallucination could be dangerous when a person asks a question where the accuracy of the answer is not so obvious.
- Chatbots can boost false confidence and erode critical thinking skills. People need to read AI chatbot responses with a critical eye, experts said. It is especially important to remember that these responses typically have muddled origins (you have to dig for source links) and that a chatbot “does not have clinical judgment,” Lozovatsky said. “You lose the relationship you have with a physician.”
- Users risk exposing their personal health data on the internet. In many ways, the AI industry amounts to a modern-day Wild West, especially with regard to protecting people’s private data.
Why are people turning to AI as a mental health and medical resource?
It is not unusual for people to seek answers on their own when they have a persistent headache, sniffles or a weird, sudden pain, Lozovatsky said. Before chatbots, people relied on search engines (cue all the jokes about Dr. Google). Before that, the self-help book industry minted money on people’s low-humming anxiety about how they were feeling today and how they could feel better tomorrow.
Today, a search engine query may produce AI-generated results that show up first, followed by a string of websites whose information may or may not be accurately reflected in those AI summaries.
“It’s a natural place patients turn,” Lozovatsky said. “It’s an easy path.”
That ease of access stands in contrast to the barriers patients often encounter when trying to get advice from licensed medical professionals. Those obstacles may include whether they have insurance coverage, whether their provider is in-network, whether they can afford the visit, how long they must wait for an appointment, whether they worry about stigma attached to their question, and whether they have reliable transportation to their provider’s office or clinic when telehealth services are not an option.
Any one of these hurdles may be enough to make a person more comfortable asking a bot a sensitive question than asking a human, even if the answer could endanger them. At the same time, a well-documented nationwide loneliness epidemic is partially fueling the rise in AI chatbot use, Munzer said.
“Kids are growing up in a world where they just don’t have the social supports or social networks that they really deserve and need to thrive,” she said.
How can people safeguard against bad AI advice?
If people are concerned about whether their child, family member or friend is engaging with a chatbot for mental health or medical advice, it is important to reserve judgment when attempting to have a conversation about the topic, Munzer said.
“We want families and kids and teens to have as much info at their fingertips to make the best decisions possible,” she said. “A lot of it is about AI and literacy.”
Discussing the underlying technology driving chatbots and the motivations for using them can provide a critical point of understanding, Munzer said. That could include asking why chatbots are becoming such a large part of daily life, talking through the business model of AI companies, and exploring what else could support a child’s or adult’s mental wellbeing.
A helpful conversation prompt Munzer suggested for caregivers is to ask, “What would you do if a friend revealed they were using AI for mental health purposes?” That language “can remove judgment,” she said.
One activity Munzer recommended is for families to test out AI chatbots together, talk about what they find and encourage loved ones, especially children, to look for hallucinations and biases in the information.
WATCH: What to know about an AI transcription tool that ‘hallucinates’ medical interactions
But the responsibility to protect individuals from chatbot-generated harm is too great to place on families alone, Munzer said. Instead, it will require regulatory rigor from policymakers to avoid further risk.
On Monday, the Brookings Institution published an analysis of 2025 state legislation and found that “Health care was a major focus of legislation, with all bills focused on the potential issues arising from AI systems making treatment and coverage decisions.” A handful of states, including Illinois, have banned the use of AI chatbots to provide mental health therapy. A bill in Indiana would require medical professionals to tell patients when they are using AI to generate advice or to inform a decision to provide health care.
One day, chatbots could fill gaps in services, Wright said — but not yet. Chatbots can tap into deep wells of information, she said, but that does not translate to knowledge and discernment.
“I do think you’ll see a future where you do have mental health chatbots that are rooted in the science, that are rigorously tested, they’re co-created for the purposes, so therefore, they’re regulated,” Wright said. “But that’s just not what we have currently.”