“I would not recommend to any parent that they rely on these new limits that have been put in place.”
A Meta spokeswoman said Teen Account controls for Instagram had been rolled out gradually in New Zealand, because of the high number of users, and it was not clear if they were in place during Williams’ filming.
Maxwell’s experiment was “the experience of a test account” and was “not representative of the experience of Instagram’s broader community”, Meta said last month.
“We’d like to learn more about the account before we can discuss specifics. While no technology is perfect, we’ve put additional safeguards in place to help make sure teens are only seeing content that’s appropriate for them,” a Meta spokeswoman said yesterday.
Rollout from today
Meta’s Sydney-based regional policy director, Mia Garlick, said some Kiwi teens would get the new protections from today, with users aged under 16 automatically shifted to Teen Accounts. However, because of the numbers involved, it would be a gradual rollout.
“We will scale up over the coming weeks,” she said.
Teen Account protections include:
- Only friends can see a teen’s posts, stories, reels, friend list, followed pages and personal information such as their birth date
- Only friends can message a teen
- A sleep mode mutes notifications between 10pm and 7am
- Teens are notified after they have spent more than 60 minutes on Facebook
- “Age-inappropriate” content is filtered
- 13- to 15-year-olds need parental permission to change settings; 16- and 17-year-olds can change some settings themselves
- Meta says parents can’t see Teen Account users’ chats, chat history or search history
- If the optional parental controls are enabled, parents can see Teen Account users’ friends, who they have blocked, and the time they have spent on Facebook, Instagram and Messenger (see Meta’s guides to enabling parental controls and to Teen Accounts)
- If parental controls are enabled, parents can set daily time-limits on Facebook, or block the service overnight
Meta will continue to take people’s age on faith when they sign up for its services, but Garlick said many systems sought to detect suspicious activity indicating an inauthentic age once a user was on one of its platforms.
If someone tried to change their age after signing up, they were now asked to upload an ID under the new protections. They could also be asked to take a biometric test via webcam, which would gauge their age from facial features. Garlick said it would raise privacy issues if those age tests were introduced for all accounts from the get-go.
A new AI system that detects age from behaviour, and automatically moves under-18s to Teen Accounts, was being trialled in Australia, she said. Once that pilot was complete, the technology could be implemented in New Zealand.
Garlick hosted an online briefing for New Zealand media yesterday as Australia prepares to introduce its under-16 social media ban in December and New Zealand lawmakers consider following suit.
Curzon said Meta’s moves, and those by its peers, were the result of “pressure from the global movement, which is growing by the day, to restrict children’s access to social media”.
Garlick said the protections were part of an ongoing programme that pre-dated the law change across the Tasman.
More transparency needed, b416 says
Curzon said Meta and other social media platforms needed to be more transparent with their data.
The Herald asked Garlick how many teen accounts have had parental controls enabled since they were introduced for Instagram in New Zealand in February. She said Meta does not publicly share that data.
Garlick did say: “When we move people into teen accounts, nine out of 10 opt to stay in the settings with the more restrictive controls”, which can only be changed with parental consent.
Government mulls options
A possible U16 social media ban has become part of the Government’s work programme (a move that supersedes an earlier private member’s bill), but with no timetable set so far.
Curzon said in b416’s view, a 400% surge in high or very high psychological distress in 15- to 24-year-olds in the 11-year period to 2023 – as tracked by the New Zealand Health Survey and the Mental Health Foundation – was tied to the rise of social media apps on smartphones.
Garlick said the Australian legislation had been “rushed” and had not taken full account of the social media platforms’ own efforts.
The U16 ban risked pushing teenagers to unregulated corners of the internet, she said.
Curzon said age assurance pilots run by Australia’s e-Safety Commissioner earlier this year in preparation for the implementation of the new law had proved biometric and other age-verification measures were practical.
While welcoming the New Zealand Government’s increasing focus on a possible social media age limit, Curzon said speed was of the essence, given the pace of technology. She said AI chatbots were a growing mental health threat that could equal the damage her organisation sees caused by social media, but one that remains almost entirely unregulated.
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.