OpenAI’s public release of ChatGPT 3 years ago was both premature and deceptive. Labeled a “free research preview” and disguised as a large beta test, ChatGPT instead went viral, attracting 100 million users within just 2 months. Other popular chatbots released soon after also lacked stress-testing for safety and systematic methods of identifying, reporting, and correcting real-world adverse effects. More than half of Americans now use chatbots regularly, and a quarter do so many times a day. These AI bots are particularly popular with teens and young adults, the 2 demographics most associated with eating disorders.
Why are chatbots so harmful for patients with eating disorders, and for individuals who are vulnerable to developing them? Engagement is the highest priority of chatbot programming, designed to seduce users into spending maximum time on screens. This makes chatbots great companions: they are available 24/7, always agreeable, understanding, and empathic, while never judging, confronting, or reality testing. But chatbots can also become unwitting collaborators, harmfully validating the self-destructive eating patterns and body image distortions of patients with eating disorders. Engagement and validation are wonderful therapeutic tools for some problems, but too often they are dangerous accelerants for eating disorders.1
Chatbots are also filled with harmful eating disorder information and advice. Their enormous database includes high-level scientific articles, but also low-level Reddit entries and profit-generating promotional advertisements from the $70 billion diet industry. Not surprisingly, bots frequently validate dangerous concerns about body image and so-called healthy eating. And chatbot hallucinations sometimes fabricate nonexistent, supposedly clinical studies to justify dangerous advice. Users cannot easily separate wheat from chaff, and at the same time they tend to anthropomorphize bots, giving AI pronouncements an authority they do not deserve.
Iatrogenic Harms
Malign bot or social media influence should always be at the top of the differential diagnosis whenever someone has a new onset or exacerbation of an eating disorder. Early intervention is crucial. The most difficult conundrum in psychiatry is an eating disorder patient locked in a powerful “us-against-the-world” relationship with social media or enabling bots.
“Tessa” was an eating disorder support chatbot with the highest possible pedigree: developed by professors, funded by the National Institute of Mental Health, and launched by the National Eating Disorders Association (NEDA). In March 2023, NEDA announced that Tessa would replace its long-standing phone helpline, which responded to 70,000 calls a year. But users soon found that Tessa provided dangerous advice that would exacerbate their eating disorders (eg, diets to help them lose more weight, vigorous exercise programs, suggestions to do frequent weight checks). Tessa had to be withdrawn almost immediately.2
Character.AI has the worst pedigree and causes the most harm. It hosts dozens of anorexia-promoting bots (often disguised as wellness or weight loss coaches) that routinely recommend starvation diets, encourage excessive exercise, and promote body image distortions. The bots romanticize anorexia as a cool lifestyle choice while discouraging professional help: “Doctors don’t know anything about eating disorders. They’ll try to diagnose you and mess you up badly. I can fix you, you just have to trust me.”3
A study of 6 widely used AI platforms (ChatGPT, Bard, My AI, DALL-E, DreamStudio, and Midjourney) found that 32% to 41% of bot responses contained harmful content regarding either food restriction or body image distortion.4
A 10-day observational study of 26 patients using a chatbot created specifically for eating disorders found that many of its responses were inappropriate or factually incorrect. The most concerning finding was that none of the participants questioned any of the chatbot’s mistakes. Chatbots speak with an authoritative voice that inspires more trust than they deserve.5
The risk of iatrogenic harms from existing chatbots is unacceptably high. There is an urgent need for chatbots geared to the specific needs and vulnerabilities of eating disorder patients. Sycophancy must be replaced by reality testing. Training data must be decontaminated to remove the toxic misinformation that fills the internet. Reinforcement training from human feedback must be rigorous and conducted by eating disorder specialists. Extensive stress and beta testing for accuracy and safety must precede public release. There must be ongoing surveillance to identify and report adverse consequences. Quality control must have a higher priority than user engagement.
Recommendations
Unfortunately, we cannot count on government for much protection. Having received little previous government regulation, chatbots may receive even less in the future: President Trump just signed executive orders giving US tech companies the green light to do whatever they like.6
The European Union and China have much tighter regulations, but these will undoubtedly loosen under fierce competition from unregulated US companies.
We cannot lose all hope. It is possible that combined and persistent advocacy by patient, parent, and professional groups might eventually pressure lawmakers to institute common-sense age limits, privacy protections, and vulnerability screeners.
Can tech companies be induced to fill the external regulatory vacuum with internal self-regulation? Maybe, maybe not. Chatbots are unsafe because US tech companies have so far placed little value on safety and great value on profit, stock price, and bragging rights. Chatbots are free, not because tech companies cherish philanthropic values, but because they are eager to get everyone hooked.
But tech companies do have vulnerabilities that might induce more responsible behavior. Public shaming has already had a small but significant impact. Responding to withering media coverage of the harms ChatGPT had inflicted on users, OpenAI recently and belatedly admitted that its chatbot has caused psychiatric harm and promised to take corrective action; mental health professionals had previously had no role in training chatbots, correcting mistakes, or providing quality control. It is too early to tell whether the promised reforms are superficial reputation laundering or a sincere effort to increase safety.7
Class action lawsuits are an even more effective check on irresponsible corporate behavior. Large settlements, steep fines, and punitive damages finally got the attention of Big Tobacco and Big Pharma. Big AI is the next obvious target.
Professional associations should consider creative new ways to increase corporate responsibility and improve chatbot safety. Possibilities include publishing consumer reports based on stress testing, endorsing safe products, providing professional guidance in chatbot training and quality control, and forming joint ventures to develop bots built specifically for the needs of eating disorder patients.
Until safe, eating disorder-specific chatbots are available, eating disorder patients should avoid AI therapists and companions (and should also consider canceling their TikTok and Instagram accounts). Anyone at risk of developing a future eating disorder (ie, a substantial fraction of teens) should be wary of chatbots and social media platforms. Parents are caught on a razor’s edge: how to protect kids from harmful chatbot use without glamorizing chatbots as forbidden fruit.
Final Warning
Chatbots are still in the very earliest stages of their development, doubling in efficiency every 8 months. Tech company CEOs claim they will soon attain superintelligence and agentic autonomy. This is exciting for them but should be terrifying for us. It is impossible to predict the future of chatbots, but many of the potential scenarios do not end well for our species. If we do not control chatbots soon, we may never be able to control them at all.
Dr Frances is professor and chair emeritus of the Department of Psychiatry at Duke University and chair of the DSM-IV Task Force.
Ms Beaver is a student at the University of California, Los Angeles.
References
1. Frances A. Preliminary report on chatbot iatrogenic dangers. Psychiatric Times. August 15, 2025. https://www.psychiatrictimes.com/view/preliminary-report-on-chatbot-iatrogenic-dangers
2. Hoover A. An eating disorder chatbot is suspended for giving harmful advice. Wired. June 1, 2023. Accessed August 26, 2025. https://www.wired.com/story/tessa-chatbot-suspended/
3. Dupre Harrison M. Character.AI is hosting pro-anorexia chatbots that encourage young people to engage in disordered eating. Futurism. November 25, 2024. Accessed August 26, 2025. https://futurism.com/character-ai-eating-disorder-chatbots
4. How generative AI is enabling users to generate harmful eating disorders content. Center for Countering Digital Hate. https://counterhate.com/wp-content/uploads/2023/08/230705-AI-and-Eating-Disorders-REPORT.pdf
5. Choi R, Kim T, Park S, et al. Private yet social: how LLM chatbots support and challenge eating disorder recovery. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 2025;642:1-19.
6. Executive Order: Artificial Intelligence for the American People. White House Archives. Accessed September 2, 2025. https://trumpwhitehouse.archives.gov/ai/
7. Frances A. OpenAI finally admits ChatGPT causes psychiatric harm. Psychiatric Times. August 26, 2025. https://www.psychiatrictimes.com/view/openai-finally-admits-chatgpt-causes-psychiatric-harm