Flagging misinformation on social media reduces engagement, study finds

The analysis also revealed how labeling appears to elicit different results for different content.

For example, labels had a large effect when the problematic content contained altered media. “Seeing altered media is a situation where a warning label can succinctly explain ‘this photo isn’t real. This never happened,’ and that can have a large effect,” Ugander said.

By contrast, when labels were added to content with outdated information, the effect was more modest.

The study also shed light on how misleading posts work their way through online social networks. Warning labels were more effective when attached to content from accounts that readers themselves didn't follow. The overall result, the researchers said, is that the spread of fact-checked content was still broad, but less "viral."

“When misinformation gets labeled, it stops going as deep,” Ugander said. “It’s like a bush that grows wider, but not higher.”

Ugander added that academic research into online misinformation is of vital importance to modern society: “These platforms have a huge impact on how we communicate and lead our lives,” he said. “Adding warning labels isn’t the whole solution, but it should be viewed as an important tool in fighting the spread of misinformation.”

The corresponding author of the study is Martin Saveski of the University of Washington. Co-authors are Isaac Slaughter of the University of Washington and Axel Peytavin of Stanford University.

The research was supported in part by a University of Washington Information School Strategic Research Fund Award, cloud computing credits from Google, and an Army Research Office Multidisciplinary University Research Initiative award.
