China outlines rules to regulate human-like AI companion apps

China’s internet regulator issued new draft rules on Saturday that aim to regulate the use of artificial intelligence “companions,” which are defined as systems that interact with humans and display “human-like traits and behavior.”

The new rules, called “Interim Measures for the Administration of Anthropomorphic Interactive Services Using Artificial Intelligence,” were issued by the Cyberspace Administration of China, or CAC, and will be open for public comment until January 25, 2026, Reuters reported.

According to the CAC, the rules would be applied to any application or service that uses AI to simulate human personality traits and offer what it terms “anthropomorphic interactive services.”

The proposed regulations would require makers of AI companion apps to make clear to users, through regular pop-up warnings, that they’re interacting with an AI system and not a human. They must also prompt users to take a break after two hours of continuous use, the rules state. In addition, they’ll be required to create systems that can assess users’ emotions and identify whether they’re becoming dependent on or addicted to the AI. If such a case is identified, the provider must restrict the service for the user in question.

Furthermore, AI companion apps will be required to establish an emergency protocol, so that if a user expresses thoughts about suicide or self-harm, a human will take over the interaction from the AI system.

There are a number of prohibitions in the draft document, too. It bars AI companions from endangering national security, spreading rumors and inciting “illegal religious activities,” and they’re also not allowed to use obscenities or promote violence or criminal acts. In addition, chatbots must be prevented from encouraging self-harm or suicide and from making false promises. Controls must also be introduced to stop chatbots from engaging in “emotional manipulation” that pushes users toward bad decisions.

The draft Chinese rules come at a time when adoption of AI companion apps is accelerating dramatically. In October, a report by the South China Morning Post revealed there are now more than 515 million generative AI users in China, fueling growing concern about the psychological impact of such apps.

The market for AI companion apps has become too large and consequential for regulators to ignore, with various studies showing how they can form emotional bonds with their users and, in some cases, cause significant harm. Earlier this year, a Frontiers in Psychology study found that 45.8% of Chinese university students reported using AI chatbots in the previous 30 days, and those who did so exhibited significantly higher levels of depression than non-users.

China isn’t the only country that has stepped in to try to regulate the use of AI companions. In the U.S., California became the first state to pass similar legislation when Governor Gavin Newsom signed Senate Bill 243 into law in October. That bill, which takes effect on January 1, 2026, requires app makers to remind minors every three hours that they’re speaking to an AI system and not a human, and to urge them to take a break.

SB 243 also requires companion apps to implement age verification and prohibits them from representing themselves as healthcare professionals or showing sexually explicit images to minors. The law allows individuals to sue companies for violations and seek up to $1,000 per incident in compensation, plus legal costs.

When he signed SB 243 into law, Newsom warned of the risk of AI technology exploiting, misleading and endangering children, and China’s regulatory authority has cited similar justifications for its own law. According to the CAC, the new rules will “promote the healthy development and standardized application of artificial intelligence-based anthropomorphic interactive services, safeguard national security and public interests, and protect the legitimate rights and interests of citizens, legal persons and other organizations.”

