AI regulation: China proposes new rules over child safety fears, suicide risks


Amid the rapid advance of artificial intelligence, China has proposed new rules for AI firms, aiming to protect young users from online abuse and from chatbots that promote self-harm and violence.

The rapid proliferation of chatbots and AI models has been the main driving force behind the planned regulations. Recently, China’s DeepSeek made headlines worldwide as it topped download charts.

Online safety net for children

The draft rules published by the Cyberspace Administration of China (CAC) focus on measures to ensure children’s online safety, obliging AI companies to offer personalized settings for minors.

Firms would be required not only to impose time limits on usage but also to obtain parental consent before providing emotional companionship services.

Compulsory human intervention

Operators will be required to ensure human intervention in any chat involving self-harm or suicide, and must notify the user’s guardians when such conversations occur.

Content moderation

Under the proposal, developers will be responsible for moderating generated content. They must ensure that their models do not produce content promoting gambling or violence.

Responsible use of AI

The regulations would also promote the responsible and ethical use of AI, preventing the sharing of “content that endangers national security, damages national honour and interests [or] undermines national unity.”

Once the draft is finalized, the rules will apply to all AI services and products, marking a significant shift in efforts to regulate AI chatbots, which have come under fire over safety concerns.

Earlier this week, China also issued separate draft rules aimed at regulating AI systems with human-like interaction.

The move underscores Beijing’s effort to govern AI and strengthen consumer-oriented safety and ethical requirements.
