March 2026
Cyberspace Administration of China Issued the Interim Measures for the Administration of Anthropomorphic Interactive Services of Artificial Intelligence (Draft for Comment)
On December 27, 2025, the Cyberspace Administration of China (CAC) issued the Interim Measures for the Administration of Anthropomorphic Interactive Services of Artificial Intelligence (Draft for Comment) (hereinafter referred to as the "Draft"). The Draft aims to regulate AI interactive services that simulate human personality traits, thinking patterns and communication styles, and to clarify the primary security responsibilities of service providers.
It covers core requirements, including service standards, data security, user protection (including minors and the elderly), security assessment, supervision and inspection. The Draft prohibits eight categories of acts that endanger national security, induce addiction, or infringe upon legitimate rights and interests. It also advocates a combination of categorized and graded regulation and industry self-regulation to ensure the healthy and compliant development of such services.
For relevant practitioners, the following contents of the Draft are noteworthy:
I. Scope of Application
Pursuant to Article 2 of the Draft, the Measures shall apply to products or services that use artificial intelligence technologies to provide the public within the territory of the People's Republic of China with emotional interaction with humans through texts, images, audios, videos or other means by simulating human personality traits, thinking patterns and communication styles (hereinafter referred to as "anthropomorphic interactive services"). Where laws or administrative regulations provide otherwise, such provisions shall prevail.
That is, if an AI is designed for emotional interaction with humans, both product and service providers—whether domestic or overseas—shall be subject to regulation as long as they target users within the territory of China.
II. Possible Compliance Obligations
According to the service standards stipulated in Chapter II of the Draft, if the Draft is finally implemented, product or service providers may assume the following compliance obligations:
A. Generated content shall comply with laws, regulations, social morality and ethics requirements;
B. Fulfill primary security responsibilities;
C. Protect data security and the security of users' personal information;
D. Provide anti-addiction functions and measures;
E. Protect special groups such as minors and the elderly;
F. Provide functions for service termination and complaint filing;
G. Conduct security assessments;
H. Label and mark artificial intelligence services;
I. Complete filing procedures.
III. Contents of Primary Security Responsibilities
According to Articles 8 and 9 of the Draft, the primary security responsibilities of providers mainly include:
A. Establish and improve the review of algorithm mechanisms and principles;
B. Conduct ethical review of science and technology;
C. Implement information release review;
D. Ensure cybersecurity, data security and personal information protection;
E. Prevent telecom and online fraud;
F. Formulate management systems for major risk preparedness and emergency response;
G. Adopt secure and controllable technical support measures;
H. Deploy content management technologies and personnel commensurate with product scale, business orientation and user base.
In addition, providers shall perform security responsibilities throughout the entire life cycle of anthropomorphic interactive services, complying with security requirements at all stages, including design, operation, upgrade and service termination. Security measures shall be designed and implemented synchronously with service functions to enhance endogenous security. Providers shall strengthen security monitoring and risk assessment during operation, promptly detect and correct system deviations, address security issues, and retain network logs in accordance with the law.
Furthermore, the Draft requires providers to possess security capabilities such as mental health protection, emotional boundary guidance and early warning of dependency risks. Providers shall not take replacing social interaction, controlling users' psychology, or inducing addiction and dependency as design goals.
IV. Circumstances Requiring Security Assessment
According to Article 21 of the Draft, providers shall conduct a security assessment and submit a security assessment report to the provincial-level cyberspace administration authority in their jurisdiction under the following circumstances:
A. Launching an anthropomorphic interactive service function or adding relevant functions;
B. Major changes to anthropomorphic interactive services caused by the adoption of new technologies or applications;
C. Having 1 million or more registered users or 100,000 or more monthly active users;
D. Potential risks to national security, public interests, legitimate rights and interests of individuals and organizations, or inadequate security measures during the provision of anthropomorphic interactive services;
E. Other circumstances specified by the CAC.
In other words, security assessments are required upon function launch, major changes, reaching certain user scales, or when security risks arise. A one-time security assessment is not sufficient for long-term compliance.
Conclusion
Overall, the Draft reflects China's focus on the ethical governance of artificial intelligence, consistent with China's basic ethical principle of "enhancing human well-being and respecting the right to life". Meanwhile, it imposes relatively high compliance requirements on providers of anthropomorphic AI interactive services or products.
Under the current framework, regulation focuses on security. Practitioners may conduct self-inspections based on the Draft, or establish a higher-standard self-security protection system to mitigate the impact of legal uncertainties on business operations.
The contents of all newsletters of Shanghai Lee, Tsai & Partners (Content) available on the webpage belong to and remain with Shanghai Lee, Tsai & Partners. All rights are reserved by Shanghai Lee, Tsai & Partners, and the Content may not be reproduced, downloaded, disseminated, published, or transferred in any form or by any means, except with the prior permission of Shanghai Lee, Tsai & Partners.
The Content is for informational purposes only and is not offered as legal or professional advice on any particular issue or case. The Content may not reflect the most current legal and regulatory developments. Shanghai Lee, Tsai & Partners and the editors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The contributing authors' opinions do not represent the position of Shanghai Lee, Tsai & Partners. If the reader has any suggestions or questions, please do not hesitate to contact Shanghai Lee, Tsai & Partners.
