Research reveals bias toward additive advice in mental health support

From “try yoga” to “start journaling,” most mental health advice piles on extra tasks. Rarely does it tell you to stop doing something harmful. New research from the University of Bath and the University of Hong Kong shows that this “additive advice bias” appears everywhere: in conversations between people, posts on social media, and even recommendations from AI chatbots. The result? Well-intentioned tips that may leave people feeling more overwhelmed than helped.

With mental health problems rising worldwide and services under strain, friends, family, online communities, and AI are often the first port of call. Understanding how we advise each other could be key to making that support more effective.

A collection of eight studies involving hundreds of participants, published in Communications Psychology, analysed experimental data and real-world Reddit advice, and tested ChatGPT’s responses. Participants advised strangers, friends, and themselves on scenarios involving both harmful habits, such as gambling, and missed beneficial activities, such as exercise.
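To make the additive/subtractive distinction concrete, here is a toy sketch of how advice text might be tagged by keyword. This is not the authors’ coding scheme, which the paper describes in full; the cue words and labels below are illustrative assumptions only.

```python
# Toy sketch: tagging advice as "additive" vs. "subtractive" by keyword.
# NOT the authors' method; cue words and labels are illustrative assumptions.

ADDITIVE_CUES = ("start", "try", "add", "take up", "begin", "join")
SUBTRACTIVE_CUES = ("stop", "quit", "cut", "reduce", "give up", "avoid")

def classify_advice(text: str) -> str:
    """Label a piece of advice as additive, subtractive, mixed, or unclear."""
    lowered = text.lower()
    additive = any(cue in lowered for cue in ADDITIVE_CUES)
    subtractive = any(cue in lowered for cue in SUBTRACTIVE_CUES)
    if additive and subtractive:
        return "mixed"
    if additive:
        return "additive"
    if subtractive:
        return "subtractive"
    return "unclear"

examples = [
    "Try yoga and start journaling every morning.",
    "Quit doomscrolling before bed and cut back on caffeine.",
]
for advice in examples:
    print(f"{classify_advice(advice):>12}: {advice}")
```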

Key findings:

  • Additive dominates – Across every context, people suggested adding new activities far more often than removing harmful ones.
  • Feasibility and benefit – Doing more was seen as easier and more beneficial than cutting harmful things out.
  • Advice varies by relationship – Cutting harmful things out is viewed as easier for our close friends than for ourselves.
  • AI mirrors human bias – ChatGPT gave predominantly additive advice, reflecting patterns in online social media.

“In theory, good advice should balance doing more with doing less. But we found a consistent tilt towards piling more onto people’s plates, and even AI has learned to do it. While well-meaning, this can unintentionally make mental health feel like an endless list of chores.”

Dr. Tom Barry, Senior Author, Department of Psychology, University of Bath, England

Co-author Dr. Nadia Adelina, from the Department of Psychology at the University of Hong Kong, said:

“As AI chatbots become a major source of mental health guidance, they risk amplifying this bias. Building in prompts to explore what people might remove from their lives could make advice more balanced and less overwhelming.”
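As a minimal sketch of that idea, the snippet below adds a system prompt nudging a chatbot to pair each additive suggestion with a possible subtraction. The model name, prompt wording, and use of the OpenAI Python SDK are assumptions for illustration, not part of the study.

```python
# Minimal sketch of the suggestion above: a system prompt steering a chatbot
# toward balanced, partly subtractive advice. Model choice and prompt wording
# are illustrative assumptions; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

BALANCED_ADVICE_PROMPT = (
    "When offering wellbeing advice, do not only suggest new activities. "
    "For every suggestion to add something, also explore whether there is a "
    "harmful or draining activity the person could reduce or stop."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works
    messages=[
        {"role": "system", "content": BALANCED_ADVICE_PROMPT},
        {"role": "user", "content": "I've been feeling overwhelmed lately. Any advice?"},
    ],
)
print(response.choices[0].message.content)
```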

This research was supported by the Research Promotion Fund of the Department of Psychology, University of Bath, England.

Journal reference:

Barry, T. J., & Adelina, N. (2025). People overlook subtractive solutions to mental health problems. Communications Psychology. https://doi.org/10.1038/s44271-025-00312-8
