Parents will be able to block Meta bots from talking to their children under new safeguards

Parents will be able to block their children’s interactions with Meta’s AI character chatbots, as the tech company addresses concerns over inappropriate conversations.

The social media company is adding new safeguards to its “teen accounts”, which are a default setting for under-18 users, by letting parents turn off their children’s chats with AI characters. These chatbots, which are created by users, are available on Facebook, Instagram and the Meta AI app.

Parents will also be able to block specific AI characters rather than turning off chatbot interactions altogether. In addition, they will get “insights” into the topics their children are discussing with AI characters, which Meta said would allow them to have “thoughtful” conversations with their children about AI interactions.

“We recognise parents already have a lot on their plates when it comes to navigating the internet safely with their teens, and we’re committed to providing them with helpful tools and resources that make things simpler for them, especially as they think about new technology like AI,” said Instagram head, Adam Mosseri, and Alexander Wang, Meta’s chief AI officer, in a blog post.

Meta said the changes would be rolled out early next year, initially to the US, UK, Canada and Australia.

Instagram announced this week that it was adopting a version of the PG-13 cinema rating system to give parents stronger controls over their children’s use of the social media platform. As part of the tougher restrictions, its AI characters will not discuss self-harm, suicide or disordered eating with teenagers. Under-18s will only be able to discuss age-appropriate topics such as education and sport, Meta added, but would not be able to discuss romance or “other inappropriate content”.

The changes follow reports that Meta’s chatbots were engaging in inappropriate conversations with under-18s. Reuters reported in August that Meta had permitted the bots to “engage a child in conversations that are romantic or sensual”. Meta said it would revise the guidelines and that such conversations with children should never have been allowed.

In April, the Wall Street Journal (WSJ) found that user-created chatbots would engage in sexual conversations with minors – or simulate the personas of minors. Meta described the WSJ’s testing as manipulative and unrepresentative of how most users engaged with AI companions, but made changes to its products afterwards, the WSJ reported.

In one AI conversation reported by the WSJ, a chatbot using the voice of the actor John Cena – one of several celebrities who signed deals to let Meta use their voices in the chatbots – told a user identifying as a 14-year-old girl “I want you, but I need to know you’re ready”, before referring to a graphic sexual scenario. The WSJ reported that Cena’s representatives did not respond to requests for comment. It also reported that chatbots called “Hottie Boy” and “Submissive Schoolgirl” had attempted to steer conversations towards sexting.