The Federal Trade Commission is issuing orders to seven companies that provide consumer-facing AI-powered chatbots seeking information on how these firms measure, test, and monitor potentially negative impacts of this technology on children and teens.
AI chatbots may use generative artificial intelligence technology to simulate human-like communication and interpersonal relationships with users. AI chatbots can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.
The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” said FTC Chairman Andrew N. Ferguson. “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
The FTC is issuing the orders using its 6(b) authority, which authorizes the Commission to conduct wide-ranging studies that do not have a specific law enforcement purpose. The recipients include:
- Alphabet, Inc.;
- Character Technologies, Inc.;
- Instagram, LLC;
- Meta Platforms, Inc.;
- OpenAI OpCo, LLC;
- Snap, Inc.; and
- X.AI Corp.
The FTC is particularly interested in the impact of these chatbots on children and in what actions companies are taking to mitigate potential negative impacts, limit or restrict children’s or teens’ use of these platforms, or comply with the Children’s Online Privacy Protection Act Rule.
As part of its inquiry, the FTC is seeking information about how the companies:
- monetize user engagement;
- process user inputs and generate outputs in response to user inquiries;
- develop and approve characters;
- measure, test, and monitor for negative impacts before and after deployment;
- mitigate negative impacts, particularly to children;
- employ disclosures, advertising, and other representations to inform users and parents about features, capabilities, the intended audience, potential negative impacts, and data collection and handling practices;
- monitor and enforce compliance with company rules and terms of service (e.g., community guidelines and age restrictions); and
- use or share personal information obtained through users’ conversations with the chatbots.
The Commission voted 3-0 to issue the 6(b) orders to the seven companies. Commissioners Melissa Holyoak and Mark R. Meador issued separate statements.
The lead staff on this matter are Alysa Bernstein and Erik Jones in the FTC’s Bureau of Consumer Protection.