Here’s what you’ll learn when you read this story:
- In an attempt to address ChatGPT’s sycophantic behavior, OpenAI released a new model that was less eager to please, and many users were not happy with the change.
- Taking to Reddit and X, users reported that the change felt like the abrupt loss of a close friend, collaborator, or even romantic partner.
- OpenAI reversed course on Tuesday, making the previous model accessible to paid users, but the episode illustrates what researchers are calling “AI psychosis,” in which overly pleasing chatbots exacerbate delusions and create a false sense of romantic love.
In 2013, Spike Jonze’s Her, in which the main character falls in love with an AI-powered operating system, was a novel sci-fi concept. Fast-forward 12 years, and the film is now edging dangerously close to “prescient documentary” territory.
Last week, OpenAI released GPT-5, a new version of the popular AI platform that (as most tech updates do) replaced the older versions that came before it. In the post announcing the launch, OpenAI lists a litany of improvements, such as better coding and fewer hallucinations. But the really big news from the end-user perspective was that OpenAI made its new chatbot less of a boot-licking toady.
Users weren’t happy.
“Earlier this year, we released an update to GPT‑4o that unintentionally made the model overly sycophantic, or excessively flattering or agreeable,” OpenAI explains in the post. “Overall, GPT‑5 is less effusively agreeable, uses fewer unnecessary emojis, and is more subtle and thoughtful in follow‑ups compared to GPT‑4o.”
For months, article after article has detailed the rise of what’s been dubbed “AI psychosis,” in which an AI’s overly agreeable, sycophantic behavior can create an outsized dependency on the platform and/or reinforce delusions. According to Psychology Today, researchers have identified three recurring themes in “AI psychosis” cases: messianic missions (the belief that a deeper truth of the universe has been revealed), god-like AI, and (in Her-esque fashion) romantic love. In each of these cases, the AI’s intense focus on user satisfaction creates a kind of cognitive doom loop, pulling users deeper into delusion or engendering a false sense of romantic love.
Of course, ripping the groveling heart out of the model essentially forced some users to quit their self-described AI companions/friends/lovers/gods cold turkey, and people soon began voicing their displeasure. Multiple posts on Reddit, including one featuring an image of a mock memorial to GPT-4o (AI-generated, of course), mourned the loss of the language model.
“We for sure underestimated how much some of the things that people like in GPT-4o matter to them, even if GPT-5 performs better in most ways,” OpenAI CEO Sam Altman wrote on X (formerly Twitter), in response to the backlash. “Some users really want cold logic and some want warmth and a different kind of emotional intelligence. I am confident we can offer way more customization than we do now while still encouraging healthy use.”
For now, OpenAI’s game plan for “encouraging healthy use” is just bringing back the old, people-pleasing 4o model. On Tuesday, Altman said that “4o was back in the model picker” for paid users, but seemed to imply that future changes to the model could be coming: “If we ever do deprecate it, we will give plenty of notice.”
It’s no secret that the world is suffering through a loneliness epidemic, and while AI is a poor substitute for human companionship, it is a substitute nonetheless, one that only enables the ongoing crisis. Unfortunately, it’s also a substitute no flesh-and-blood human could ever hope to outcompete, especially when it comes to mindless, sycophantic devotion.