A man was hospitalized with severe physical and psychiatric symptoms after replacing table salt with sodium bromide in his diet, advice he said he received from ChatGPT, according to a case study published in the Annals of Internal Medicine.
Experts have strongly cautioned against taking medical advice from artificial intelligence-powered chatbots.
“These are language prediction tools — they lack common sense and will give rise to terrible results if the human user does not apply their own common sense when deciding what to ask these systems and whether to heed their recommendations,” said Dr. Jacob Glanville, according to Fox 32 Chicago.
What’s happening?
A 60-year-old man, concerned about the potential negative health effects of chloride, was looking for ways to remove sodium chloride, the chemical name for table salt, from his diet entirely.
“Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet,” the case study’s authors wrote. “For three months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning.”
The “personal experiment” landed the man, who had “no past psychiatric or medical history,” in the emergency room, where he told staff he believed his neighbor was poisoning him.
“In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability,” the authors said.
With treatment, the man’s symptoms gradually improved to the point where he could explain to doctors what had happened.
Why does bad medical advice from AI matter?
The situation highlights the risks of seeking medical advice, or other highly specialized knowledge, from AI chatbots such as ChatGPT. As AI-powered tools become more widely used, incidents like the one described in the case study are likely to occur more often.
“Thus, it is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,” the case study’s authors warned.
They encouraged medical professionals to consider the public’s increasingly widespread reliance on AI tools “when screening for where their patients are consuming health information.”
What’s being done about AI misinformation?
Unless and until governments enact regulatory guardrails constraining what kinds of advice and information AI can and cannot dole out to people, individuals will be left to rely on their own common sense, as Glanville recommended.
However, when it comes to complex, scientifically dense information that requires specialized knowledge and training to properly understand, it is questionable how far “common sense” can go.
The subject of the case study had received some specialized academic training in nutrition. Apparently, that was not enough for him to recognize that sodium bromide is not a safe substitute for table salt.
Consequently, the best way to protect yourself and your family from the harmful effects of AI misinformation is to limit reliance on AI to narrow, well-defined tasks and to treat any AI-provided advice or data with a healthy dose of skepticism.
To take things a step further, you can use your voice and reach out to your elected representatives to tell them that you are in favor of regulations to rein in AI-generated misinformation.