Many of the safety features Meta has said it implemented over the years to protect young users on Instagram do not work well or, in some cases, do not exist, according to a report from child safety advocacy groups corroborated by researchers at Northeastern University in Boston, in the United States.
The study, which Meta disputed as misleading, comes amid renewed pressure on tech companies to protect children and other vulnerable users of their social media platforms.
Of 47 safety features tested, the groups judged only eight to be completely effective. The rest were flawed, “no longer available or were substantially ineffective”, the report stated.
Features meant to prevent young users from finding content related to self-harm by blocking search terms were easily circumvented, the researchers reported.