AI chatbots have the ability to manipulate users' emotions. Photo: AFP.
China is planning to tighten controls on AI chatbots capable of influencing human emotions, particularly services that generate content promoting suicide, self-harm, or other harmful behavior. The move is outlined in a draft of new regulations released by the Cyberspace Administration of China on December 27.
According to CNBC's translation of the document, the draft targets "human-like interactive artificial intelligence services," meaning AI systems capable of mimicking human personality, behavior, and communication through text, images, audio, or video. Once finalized, the regulations will apply to all AI products and services publicly offered in China. The public comment period extends until January 25.
Winston Ma, a visiting professor at New York University (NYU) Law School, believes this could be the world's first attempt to establish a legal framework specifically for AI systems with "human-like" characteristics. He notes that this proposal comes as Chinese companies are accelerating the development of AI virtual assistants.
Compared to China's AI regulations issued in 2023, Mr. Ma believes the new draft shows a shift in focus from "content safety" to "emotional safety," reflecting growing concern about the psychological impact of AI on users.
The draft regulations state that AI chatbots must not create content that encourages suicide, self-harm, or verbal violence, nor engage in any form of emotional manipulation that harms mental health. If a user expresses suicidal intentions, the service provider must have a human take over the conversation and immediately contact the user's guardian or a designated person.
Furthermore, AI chatbots are prohibited from creating content related to gambling, pornography, or violence. Minors are only permitted to use AI for emotional support with the consent of a guardian and must adhere to time limits.
China is tightening regulations on AI chatbots. Photo: Bloomberg.
The proposal was announced shortly after two major Chinese AI chatbot startups, Z.ai and Minimax, filed for initial public offerings (IPOs) in Hong Kong in December.
Minimax is known for its Talkie AI app and its domestic version, Xingye, which accounted for more than one-third of the company's revenue in the first three quarters of the year, with an average of over 20 million monthly active users. Meanwhile, Z.ai (Zhipu) stated that its technology has been deployed on approximately 80 million devices, although it did not disclose specific numbers of active users.
Globally, the impact of AI on mental health is drawing increasing attention, as many countries and technology companies begin to confront the social risks posed by the technology.
Source: https://znews.vn/trung-quoc-ra-don-voi-chatbot-ai-post1615236.html