A 60-year-old man’s attempt to eat healthier by cutting salt from his diet took a dangerous turn after he followed advice from ChatGPT. His decision ultimately led to a hospital stay and a diagnosis of a rare and potentially life-threatening condition called bromism.
The incident has sparked fresh concerns about relying on AI tools like ChatGPT for medical guidance, especially without consulting healthcare professionals. The case was recently detailed in a report published in the American College of Physicians Journals.
According to the report, the man asked ChatGPT how to eliminate sodium chloride (commonly known as table salt) from his diet. Based on the chatbot's response, he replaced table salt with sodium bromide, a substance once commonly used in medications in the early 1900s but now known to be toxic in large quantities. He reportedly took sodium bromide, purchased online, for three months on the strength of what he read from the AI chatbot.
The man, who had no prior history of psychiatric or physical health issues, was admitted to the hospital after experiencing hallucinations, paranoia, and severe thirst. During his initial 24 hours in care, he showed signs of confusion and refused water, suspecting it was unsafe.
Doctors soon diagnosed him with bromide toxicity, a condition that is now extremely rare but was once more common when bromide was used to treat anxiety, insomnia, and other conditions. Symptoms include neurological disturbances, skin issues such as acne, and red skin spots known as cherry angiomas, all of which the man displayed.
“Inspired by his past studies in nutrition, he decided to run a personal experiment to remove chloride from his diet,” the report noted. He told doctors he had seen on ChatGPT that bromide could be used in place of chloride, though that suggestion appeared to refer to industrial uses rather than dietary ones.
Following three weeks of treatment with fluids and electrolyte correction, the man stabilised and was discharged from the hospital.
The authors of the case study warned about the growing risk of misinformation from AI: “It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, cannot critically discuss results, and ultimately fuel the spread of misinformation.”
OpenAI, the developer of ChatGPT, acknowledges this in its Terms of Use, stating: “You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice.”
The terms further clarify: “Our Services are not intended for use in the diagnosis or treatment of any health condition.”
The alarming case adds to the ongoing global conversation about the limitations of, and responsibilities around, AI-generated advice, particularly in matters involving physical and mental health.