Dublin-based Warren Tierney, a 37-year-old father and former psychologist, is warning people against using AI chatbots as a substitute for medical care after his alarming experience with ChatGPT.
Tierney had been struggling with a persistent sore throat and difficulty swallowing but turned to ChatGPT instead of visiting a doctor. For months, the AI tool reassured him that his symptoms were “highly unlikely” to be cancer, even offering comforting messages such as “I will walk with you through every result that comes. If this is cancer — we’ll face it. If it’s not — we’ll breathe again.”
But when his condition worsened, Tierney finally sought emergency care. Doctors diagnosed him with stage-four esophageal adenocarcinoma, a rare and aggressive cancer of the esophagus, the tube connecting the throat to the stomach, with a five-year survival rate of just 5–10%.
“I know that probably cost me a couple of months,” Tierney admitted, reflecting on how the chatbot’s false reassurance delayed his treatment. “That’s where we have to be super careful when using AI. I maybe relied on it too much.”
OpenAI, the maker of ChatGPT, has consistently emphasized that the chatbot is not designed to provide medical advice or treatment. Health professionals stress that AI-generated responses are no substitute for a proper diagnosis, and that earlier intervention could have altered the trajectory of Tierney’s illness.
Now, Tierney is using his story to raise awareness about the dangers of over-reliance on AI for health concerns, urging others to seek medical help first, not chatbot reassurance.