Many of Instagram’s teen-safety tools ‘simply are not working’, study finds

Numerous safety features that Meta says it has implemented over the years to protect young users on Instagram do not work well or, in some cases, do not exist, according to a report from child-safety advocacy groups that was corroborated by researchers at Northeastern University in Boston, in the United States.

The study, which Meta disputed as erroneous and misleading, comes amid renewed pressure on tech companies to protect children and other vulnerable users of their social media platforms.

Of the 47 safety features tested, the groups judged only eight to be fully effective. The rest were either flawed or “no longer available or were substantially ineffective”, the report stated.

Features meant to stop young users from finding self-harm-related content by blocking certain search terms were easily circumvented, the researchers reported.

Anti-bullying message filters also failed to activate, even when prompted with the same harassing phrases Meta had used in a press release promoting them. And a feature meant to steer teens away from bingeing on self-harm-related content never triggered, the researchers found.
