Study: How Online Language Choices May Signal Self-Harm Risk

EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

Signs that an individual might be on the verge of self-harm are often found in their online actions, but can word choices in posts indicate who is at particular risk and when?

A new study published in the Nature journal npj Mental Health Research provides insights into how self-harm is discussed, and how it unfolds, online. Led by Dr. Ryan L. Boyd, assistant professor of psychology in the School of Behavioral and Brain Sciences at The University of Texas at Dallas, and Dr. Charlotte Entwistle of the University of Liverpool, the research demonstrates that posts in online support communities indicating decreased social connectedness and heightened negative emotions can predict self-harm outcomes, such as suicidal thoughts and self-injurious behavior, weeks before they occur.

The results also highlight how these communities might unintentionally reinforce harmful patterns of thinking and behavior through social media engagement like “likes” or “upvotes.”

“Our research highlights not only early linguistic predictors of self-harm, but also how online interactions may unintentionally reinforce harmful thoughts and behaviors,” said Boyd, the senior author of the paper. “This has profound implications for community-driven mental health interventions and highlights the need for thoughtful moderation of online support communities.”

Entwistle, the lead author of the study, said the research set out to understand the dynamics surrounding self-harm in natural online settings, including how personality, emotions, and social factors interact with self-harm behaviors.

Using a form of artificial intelligence called natural language processing, in which computers analyze and interpret human language, the researchers analyzed more than 66,000 posts by nearly 1,000 Reddit users who self-identified as being diagnosed with borderline personality disorder (BPD).

“One of the main reasons we focus on BPD is that it’s very strongly associated with self-harm and suicidality,” Entwistle said. “Sometimes self-harm behavior is even used among the criteria for a BPD diagnosis. It’s an extremely high-risk community with alarmingly high suicide rates.”

Boyd described the work as a rare combination of theory, computational language analysis and social feedback.

“While there has been work in each of these domains, nothing I’m aware of has put it together in this way to look at self-harm in the population with BPD,” he said.

Distinctive Focus

Entwistle said the new study differs from previous research in that it examines nonsuicidal self-injury and suicidality simultaneously and focuses specifically on people with BPD.

“We’ve looked at patterns and changes that led up to and followed self-harm events, whereas most previous work aimed to predict suicidality or suicidal ideation without examining the weeks that follow,” she said.

In Reddit’s BPD communities, users share experiences and seek connection and support just like people in forums for other health conditions, such as insomnia or hearing loss. Users are encouraged to upvote posts they agree with or find valuable or relevant to the conversation. The researchers noted that, in the BPD forums, more negative, hostile and extreme posts attracted the most community engagement and favorable responses.

“Posts about suicide were upvoted more than average,” Entwistle said. “Posts that contained more negative emotions — using words that indicated anger, sadness and anxiety — were upvoted more as well, as were posts with swear words.”

Boyd said the online behavior reflects the long-standing view that humans are wired for social connection and interaction and find it rewarding.

“Getting more social interaction around topics of harm can reinforce users focusing on harmful behaviors,” he said. “There’s something of a social contagion effect. If you see other people talking about self-harm and getting more engagement than you, it might — without intending to do so — lead you to focus more on self-harm in order to get the same compassion and care from the community.”

Negative Language and Higher Risk

The researchers also found that posters who used hostile and negative emotional language were at greater risk of imminent self-harm. The question remains whether the dynamic of rewarding negative posts reinforces negative behaviors and whether it is specific to BPD communities.

“A similar study examining suicidality in a more general population Reddit sample came to the opposite conclusions. That community was in fact unsupportive of negative and hostile posts,” Entwistle said. “Although this effect may be specific to the BPD communities in our research, the consequences warrant further attention on possible negative effects of informal online support communities more broadly.”

Boyd said that while online communities can have immensely positive effects in giving a person access to people around the world who are dealing with similar problems, they are not without risk.

“Members of these communities have to be mindful about where, and how, they’re stepping in, because it’s possible to provide support in a way that might not be as helpful as we’re intending,” he said. “One can contribute to a downward spiral by engaging. I think that there’s a larger conversation to be had in these communities about what types of things need to be supported and in what ways.”

Boyd emphasized, though, that members of these communities aren’t doing anything wrong per se.

“Helping others who are struggling with harmful thoughts and behaviors can be incredibly validating and valuable,” he said. “However, what our findings do suggest is that — as members of support communities — we may need to rethink how social media frames engagement. The ways that we interact online, especially around distressing content, might be unintentionally causing harm.”

Entwistle said that the study also has implications for clinical practice by revealing emotional and social problems as the main triggers of suicidal thoughts and self-harm behaviors.

“Our findings have identified several important precursors to self-harm in high-risk individuals, which therefore highlight the most critical areas to target through clinical intervention,” she said. “Our study has uncovered key linguistic predictors of self-harm, laying the groundwork for more advanced predictive models that could aid in early intervention.”

Other authors of the paper included researchers from Lancaster University and the University of Kansas. The research was supported in part by grants from the National Institute on Alcohol Abuse and Alcoholism (R01 AA028032) and the National Institute of Mental Health (R01 MH125702), both components of the National Institutes of Health. The content of this document is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies listed.
