Experts Horrified by AI-Powered Toys for Children

Though talking toys are nothing new, a fresh crop of AI-enabled playthings has entered the scene, making the “Chatty Cathy” and “Teddy Ruxpin” dolls of yesteryear, which merely recited pre-programmed phrases, look positively paleontological.

More than a decade after “My Friend Cayla,” a Bluetooth-enabled, Wi-Fi-connected doll that Germany banned in 2017 as a potential espionage device, Mattel and OpenAI’s newly announced partnership to “reimagine the future of play,” as the iconic toymaker’s chief franchise officer Josh Silverman told Bloomberg in July, is being unleashed upon a generation of kids and parents alike.

Though the duo has yet to reveal specific plans for the collaboration, the prospect of an AI Barbie seems entirely within the realm of possibility, and Marc Fernandez, chief strategist of the “human-centric” AI company Neurologyca, cited that possibility as particularly dangerous for childhood development in a new essay for the engineering magazine IEEE Spectrum.

“Children naturally anthropomorphize their toys — it’s part of how they learn,” Fernandez wrote. “But when those toys begin talking back with fluency, memory, and seemingly genuine connection, the boundary between imagination and reality blurs in new and profound ways.”

With so many grown-ups developing deep relationships with chatbots, it seems nearly impossible that a child might grasp what so many adults cannot: that the chatbots installed in their toys are not real people. As Fernandez noted, the situation gets even more fraught when an AI toy becomes one of a child’s first “emotionally responsive companion[s] outside of the family, offering comfort, curiosity, and conversation on demand.”

While a prospective Barbie-bot would likely be aimed at kids aged seven and up, other companies, like the AI plushie startup Curio, have already started releasing chatbot-enabled toys made for and marketed towards younger children.

AI toys geared towards the preschool set could easily become one of a child’s first friends — and as they learn to navigate real-world interaction via struggle and conflict with parents and siblings, those toys could offer them reassuring echo chambers just as readily as chatbots do for an increasing number of grown-ups.

“Real relationships are messy, and parent-child relationships perhaps more so than any other,” Fernandez wrote. “They involve misunderstanding, negotiation, and shared emotional stress. These are the microstruggles through which empathy and resilience are forged. But an AI companion, however well-intentioned, sidesteps that process entirely.”

Throwing AI into the mix of early childhood development, which has already been irrevocably altered by the ever-present iPad, could “flatten a child’s understanding of what it means to relate to others,” Fernandez warned. He’s not alone in that assessment, either — child welfare activists have also expressed similar concerns in the wake of the OpenAI-Mattel deal, with Robert Weissman of the Public Citizen advocacy group suggesting that AI toys might inflict “real damage on children.”

Fernandez, as chief strategist of a company that is building “emotionally adaptive” AI, isn’t some anti-AI zealot. Still, he insisted in the new piece that “human-aware” AI, like Neurologyca’s emotion-detecting facial recognition software, is not “appropriate for kids.”

Ultimately, the executive mused, it’s about the lessons we’re teaching.

“What are we teaching our children about friendship, empathy, and emotional connection,” Fernandez pondered, “if their first ‘real’ relationships are with machines?”

More on AI toys: Horror Story Looms as Children Get Stuffed Animals Powered by AI
