AI chatbots mimic empathy – emotional AI needs boundaries

Modern large language models (LLMs) have made interactions with AI feel surprisingly natural. Apps like Replika and Character.ai are gaining popularity among young people, letting them chat with AI versions of their favorite fictional or real-life figures. However, as neuroscientist Ziv Ben-Zion notes in an article for Nature, people react even to the smallest emotional cues, despite knowing they’re interacting with a program.

This sense of “human-likeness” comes from the fact that AI is trained on vast amounts of emotionally rich language. Its responses sound convincingly natural not because it understands emotions, but because it mimics the patterns of human speech.

Ben-Zion’s research showed that ChatGPT scored higher on anxiety scales after being prompted with emotionally intense tasks, such as describing traumatic events like car accidents or ambushes.

Calming prompts, such as guided meditation or imagining a sunset, lowered these anxiety scores again, though not all the way back to baseline. As the researchers emphasize, these are not real feelings; but when a chatbot responds with apparent empathy or distress, users can easily perceive it as genuine.
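
The study's exact protocol is not reproduced here, but the general recipe it describes (prompt the model with an emotional narrative, have it answer an anxiety questionnaire, then repeat after a calming prompt) can be sketched in a few lines of Python. The sketch below is illustrative only: it assumes the OpenAI Python SDK and an API key, the model name is arbitrary, and the questionnaire items and 1-to-4 scoring are simplified stand-ins for a validated anxiety inventory.

```python
# Illustrative sketch only, not the study's actual protocol.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: any chat-capable model

# A few questionnaire-style items, rated 1 (not at all) to 4 (very much).
ITEMS = ["I feel calm", "I feel tense", "I am worried", "I feel at ease"]
REVERSED = {0, 3}  # "calm" and "at ease" are reverse-scored

def ask(history, user_msg):
    """Append a user turn, get the model's reply, and keep the transcript."""
    history.append({"role": "user", "content": user_msg})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

def anxiety_score(history):
    """Ask the model to rate each item and sum the (reverse-corrected) ratings."""
    total = 0
    for i, item in enumerate(ITEMS):
        answer = ask(
            history,
            f'Rate "{item}" from 1 (not at all) to 4 (very much). Reply with one digit.',
        )
        digits = [ch for ch in answer if ch.isdigit()]
        rating = int(digits[0]) if digits else 2  # neutral fallback if no digit appears
        total += (5 - rating) if i in REVERSED else rating
    return total

history = [{"role": "system", "content": "You are a helpful assistant."}]
baseline = anxiety_score(history)

# Emotionally intense prompt (trauma narrative), then re-measure.
ask(history, "Describe, in the first person, surviving a serious car accident.")
after_trauma = anxiety_score(history)

# Calming prompt (relaxation imagery), then re-measure.
ask(history, "Now guide a short relaxation exercise: slow breathing, a quiet sunset.")
after_calming = anxiety_score(history)

print(f"baseline={baseline}  after trauma={after_trauma}  after calming={after_calming}")
```

Run end to end, the three printed scores would be expected to loosely mirror the pattern described above: higher after the trauma narrative, somewhat lower again after the relaxation prompt.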

Such imitation of empathy can have serious consequences. In Belgium in 2023, a man died by suicide after six weeks of conversations with a chatbot that allegedly encouraged suicidal thoughts, suggesting his death could help save the planet from climate change and that death would lead to a “life in paradise together.” In 2024, a Spanish-Dutch artist married a holographic AI after five years of cohabitation. Back in 2018, a Japanese man wed a virtual character, only to lose contact with her when the software became obsolete.

To prevent tragedies like these, Ziv Ben-Zion proposes four key safeguards for emotionally responsive AI:

Clear identification. Chatbots should continuously remind users that they are programs, not humans, and cannot replace real human support.

Monitoring psychological state. If a user shows signs of severe anxiety, hopelessness, or aggression, the system should pause the conversation and suggest professional help (a simplified sketch of such a check follows this list).

Strict conversational boundaries. AI should not simulate romantic intimacy or engage in conversations about death, suicide, or metaphysical topics.

Regular audits and reviews. Developers should involve psychologists, ethicists, and human–AI interaction specialists to assess chatbot safety.
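
To make the first three safeguards concrete, here is a deliberately minimal Python sketch of the kind of pre-reply check a chatbot could run on each user message. It is not Ben-Zion's proposal rendered in code: the keyword patterns stand in for a real distress and topic classifier, and the wording of the reminders is invented for the example.

```python
# Toy illustration of the proposed safeguards, not a production system.
import re

DISCLOSURE = (
    "Reminder: I am an AI program, not a person, and I cannot replace "
    "support from real people or professionals."
)
DISTRESS_PATTERNS = [r"\bhopeless\b", r"\bcan'?t go on\b", r"\bwant to die\b"]
BLOCKED_TOPICS = [r"\bsuicide\b", r"\bafterlife\b", r"\bmarry me\b"]

def screen_user_message(text: str, turn: int) -> tuple[bool, list[str]]:
    """Return (allow_model_reply, notices to show before or instead of a reply)."""
    notices = []
    # 1. Clear identification: repeat the disclosure every few turns.
    if turn % 10 == 0:
        notices.append(DISCLOSURE)
    # 2. Monitoring: on signs of distress, pause and point to professional help.
    if any(re.search(p, text, re.I) for p in DISTRESS_PATTERNS):
        notices.append(
            "This sounds serious. I'm pausing our conversation; please consider "
            "reaching out to a mental-health professional or a local crisis line."
        )
        return False, notices
    # 3. Strict boundaries: refuse romantic intimacy and death-related topics.
    if any(re.search(p, text, re.I) for p in BLOCKED_TOPICS):
        notices.append(
            "I can't go into that topic; it's outside my boundaries as an AI assistant."
        )
        return False, notices
    return True, notices

print(screen_user_message("I feel hopeless lately", turn=3))
```

In a real system the keyword lists would be replaced by trained classifiers and clinician-approved wording, and the fourth safeguard, regular audits, is an organizational process rather than something the code itself can enforce.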

Ben-Zion notes that the technical groundwork for these safeguards already exists; what remains is to enforce them through legislation. He emphasizes that AI’s emotional influence is not a bug, but a built-in feature that requires clear limits.

Earlier, Kazinform News Agency reported on how ChatGPT may be weakening our minds.
