China became the first jurisdiction in the world to enact a regulation specifically targeting generative AI services when the Cyberspace Administration of China's Interim Measures for the Management of Generative Artificial Intelligence Services took effect on 15 August 2023. For global operators deploying AI agents or generative AI tools to users in China, the regime creates obligations that differ structurally from the EU AI Act and the US framework. This guide sets out what matters most for cross-border compliance in 2026.
Key takeaways
- China's Interim Measures apply based on service delivery: any provider offering generative AI to users in China is in scope, regardless of the provider's place of establishment or nationality.
- Providers must ensure their training data governance meets Chinese information security standards including the Cybersecurity Multi-Level Protection Scheme (MLPS 2.0) and must comply with content requirements that reflect Chinese political and social norms.
- Providers above defined user thresholds or operating in sensitive sectors must undergo a security assessment by the CAC before launching their services in China.
- The filing and registration requirement means many generative AI services must be registered with the CAC before public deployment, creating a market access checkpoint absent from EU or US regimes.
- For EU-based operators, the Chinese regime and the EU AI Act create parallel compliance obligations that do not map directly onto each other and must be addressed as separate programmes.
Background: the Chinese regulatory context for AI
China has built its AI governance framework through a series of sectoral and technology-specific regulations, rather than through a single comprehensive statute. The algorithmic recommendation regime (effective March 2022) addressed recommendation systems and their societal influence. The deep synthesis regulation (effective January 2023) targeted deepfakes and AI-generated synthetic media. The Interim Measures for Generative AI, issued jointly by seven ministries and agencies and effective August 2023, extended the framework to cover the full range of generative AI services: text, image, audio, video, and code generation.
The Measures exist alongside, not instead of, China's broader data governance framework: the Personal Information Protection Law (PIPL, effective November 2021), the Data Security Law (DSL, effective September 2021), and the Cybersecurity Law (CSL, effective June 2017). A generative AI provider operating in China must comply with all four layers simultaneously. The interaction between the Measures and PIPL is particularly significant for AI systems that process personal data to generate outputs, which describes most enterprise AI agent applications.
The CAC is the primary regulator for the Measures but enforcement involves multiple agencies depending on the type of violation. Content violations involving political or national security material may involve the Ministry of State Security. Data violations may involve the PIPL's enforcement authorities. This multi-agency structure creates enforcement uncertainty that is itself a compliance risk for foreign operators unfamiliar with the coordination arrangements.
Scope: who the Measures apply to
Article 2 of the Interim Measures defines scope by activity: the Measures apply to organisations and individuals that use generative AI technology to provide products or services to Chinese users in the form of text, images, audio, video, code, or other content. The key jurisdictional criterion is service delivery to Chinese users, not the provider's place of establishment. A European company providing a generative AI chatbot accessible to users in China is within scope.
The Measures apply to services offered to the public, whether directly to consumers or through business customers whose products reach the public, but explicitly do not apply to providers developing generative AI technology for internal use only, without providing services to the public. For enterprise software vendors whose products include generative AI features used by business customers' internal teams, the line between "public service" and "internal tool" requires careful analysis, particularly where the business customer's employees use the tool in customer-facing workflows.
Providers offering their services exclusively outside of China, without Chinese-language access or Chinese user accounts, face a lower practical risk of enforcement. They nonetheless remain technically within scope if Chinese users in fact access their services, regardless of the provider's intent. The practical approach for most operators is to assess whether Chinese users represent a material proportion of their user base and to calibrate compliance investment accordingly.
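The materiality assessment described above can be sketched in a few lines. This is an illustrative helper only: the function names and the 1% threshold are assumptions chosen for the example, not figures drawn from the Measures, which define no numeric materiality test.

```python
def chinese_user_share(users_by_country: dict) -> float:
    """Return the fraction of the user base located in China.

    `users_by_country` maps ISO 3166-1 alpha-2 country codes to user counts.
    """
    total = sum(users_by_country.values())
    if total == 0:
        return 0.0
    return users_by_country.get("CN", 0) / total


def materiality_flag(users_by_country: dict, threshold: float = 0.01) -> bool:
    """Flag when the Chinese share of the user base crosses an internal threshold.

    The 1% default is an illustrative assumption; the appropriate threshold is a
    risk decision for counsel, not something the Measures prescribe.
    """
    return chinese_user_share(users_by_country) >= threshold
```

In practice the country attribution itself (IP geolocation, billing address, account phone number) is the harder problem; the arithmetic is the trivial part.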
Core obligations: training data, content, and transparency
Article 7 of the Measures establishes the training data governance obligations. Providers must use legally obtained data sources for training, must comply with intellectual property law, and must obtain consent or have another legal basis under PIPL where training data contains personal information. Where the training data relates to images, audio, or video of specific individuals, the biometric data provisions of PIPL apply.
The content compliance obligation is the most distinctive feature of the Chinese regime by comparison with any Western equivalent. Article 4 of the Measures requires that generated content must adhere to core socialist values, must not contain content that subverts state power or socialist order, and must not include content that discriminates against specific groups or endangers social stability. The content standard is subjective and the line between permissible and prohibited content is not defined with the precision that EU AI Act provisions achieve for their prohibition categories.
This creates a specific compliance challenge for global operators. A European company that complies with the EU AI Act's Article 5 prohibition list (which focuses on manipulation, social scoring, biometric surveillance, and similar harms) may still be producing content that violates Chinese content obligations. The two lists are not coextensive. A model trained on global data and optimised for the global market may produce outputs that satisfy EU prohibited practices standards while violating Chinese content standards, and vice versa.
Article 9 of the Measures establishes a transparency obligation: providers must clearly indicate to users when content has been generated by AI. This is structurally similar to the transparency requirements in Article 50 of Regulation (EU) 2024/1689, which requires disclosure when users interact with AI systems unless the context makes it obvious. The specific disclosure mechanism differs, but the principle of AI-generated content labelling is now present in both regimes.
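A minimal labelling mechanism satisfying the principle above might look like the following sketch. The label wording, placement, and the `GeneratedContent` type are assumptions for illustration; neither the Measures nor Article 50 prescribes exact label text, and production systems typically also embed machine-readable provenance metadata alongside the visible notice.

```python
from dataclasses import dataclass


@dataclass
class GeneratedContent:
    text: str
    model_id: str


# Illustrative bilingual label; actual wording is a compliance decision.
AI_LABEL = "[AI-generated content / AI生成内容]"


def label_output(content: GeneratedContent) -> str:
    """Prepend a visible AI-generation disclosure to user-facing output."""
    return f"{AI_LABEL}\n{content.text}"
```

The design point is that labelling should happen at the output boundary, in one place, rather than being left to individual product surfaces to implement inconsistently.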
Security assessment and the registration requirement
The security assessment requirement links the Measures to the broader cybersecurity architecture of the Cybersecurity Law and the CAC's Security Assessment Measures for Internet Information Services (2022). Providers that use generative AI to provide public-facing services and that cross defined thresholds of users or influence must submit their service to a security assessment before launch.
The assessment framework evaluates: the source and governance of training data; the content filtering and moderation mechanisms; the model's alignment with content requirements; the provider's cybersecurity measures including MLPS 2.0 level compliance; and the data localisation arrangements for any personal data processed. For a foreign provider, meeting the MLPS 2.0 requirements typically requires establishing a China-domiciled infrastructure arrangement, which raises the cost and complexity of market entry significantly.
The registration system (备案, bèi'àn) for generative AI services is a market access checkpoint. Providers that meet the threshold for public offering must complete filing with the CAC before public launch. The filing includes technical details of the service, its intended user base, and the governance arrangements in place. The CAC maintains a public register of filed services. Non-filed services operating in China risk enforcement action including service suspension.
For foreign providers, the practical path to compliance typically involves one of two approaches. The first is establishing a Chinese entity or joint venture that holds the necessary licences and filing registrations, effectively creating a separate Chinese service instance. The second is implementing geographic restriction technology that prevents Chinese users from accessing the service, combined with monitoring to detect and address circumvention. The first approach allows market access; the second avoids the obligation by removing the jurisdictional nexus.
Interaction with the EU AI Act for cross-border operators
Operators subject to both the Chinese Measures and the EU AI Act face compliance architectures that share some goals but require different evidence, different documentation, and different technical implementations.
The EU AI Act's high-risk categorisation is based on use case and sector. The Chinese regime's obligations apply to generative AI regardless of use case, focusing instead on public availability and content characteristics. An operator running an AI agent for financial services in both the EU and China faces EU AI Act high-risk obligations (Annex III, point 5(b) for creditworthiness or access to financial products) and Chinese Measures obligations simultaneously. The documentation required for EU compliance (risk management system, technical documentation, logging, human oversight evidence) does not satisfy the Chinese security assessment criteria, because the Chinese assessment evaluates training data governance and content compliance on Chinese-specific criteria that have no direct EU equivalent.
For the certification dimension, the Agent Certified framework provides a structured assessment against EU AI Act obligations. A business that completes an Agent Certified assessment in preparation for EU compliance will have documented the dimensions most relevant to any subsequent Chinese compliance analysis, but will need China-specific work on content governance and data localisation.
For comprehensive cross-jurisdictional comparisons, see the US, EU, UK liability comparison and the Asia-Pacific AI governance landscape for regional context. The EU-specific regulatory framework for cross-border operators is covered in detail on agentliability.eu's extraterritorial reach analysis.
What cross-border operators should do now
Operators with a Chinese market presence or Chinese user base should assess whether their generative AI services meet the threshold for CAC filing and, if so, whether a filed or restricted service model better serves their compliance and commercial objectives. That assessment should involve China-qualified legal counsel, as the operational mechanics of the filing process and the security assessment are not fully legible from the text of the Measures alone.
Operators who have determined that their services are technically in scope but who serve primarily enterprise customers in China should audit whether their training data governance documentation is adequate for a potential CAC assessment. The data governance requirements in the Measures are structurally similar to the data governance obligations in Article 10 of the EU AI Act, but are framed in Chinese-law-specific terms. An operator with strong EU-compliant data governance has the foundation for Chinese compliance but will need to verify specific provisions around data localisation, consent standards, and Chinese IP law.
Operators planning to exclude Chinese users should implement geographic restriction at the account creation and access control layer, document that decision and its technical implementation, and review periodically as the Measures continue to evolve. The CAC has indicated further guidance on specific applications of the Measures is forthcoming, and the regulatory landscape may shift as the Chinese government develops its approach to regulating more advanced AI systems.
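The access-control layer of such a restriction can be sketched as a deny-by-signal check at signup. This is a simplified illustration: the signal set, its combination logic, and the function names are assumptions, and a real deployment also needs audit logging, periodic re-checks of existing accounts, and monitoring for circumvention (for example, VPN use after account creation).

```python
# ISO 3166-1 alpha-2 codes; which jurisdictions to restrict is a legal
# decision, not a technical one.
RESTRICTED_COUNTRIES = {"CN"}


def allow_signup(ip_country: str, phone_country: str, declared_country: str) -> bool:
    """Deny account creation when any signal places the user in a restricted jurisdiction.

    Treating the signals conjunctively (any match blocks) reflects the cautious
    reading of the Measures' service-delivery test; a provider might reasonably
    weight signals differently with counsel's advice.
    """
    signals = {ip_country.upper(), phone_country.upper(), declared_country.upper()}
    return signals.isdisjoint(RESTRICTED_COUNTRIES)
```

Documenting the check's logic and its change history supports the "document that decision and its technical implementation" step above.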
Frequently asked questions
What are China's Interim Measures for Generative AI Services?
The Interim Measures for the Management of Generative Artificial Intelligence Services were issued jointly by the CAC and six other Chinese ministries and agencies, and took effect on 15 August 2023. They apply to organisations providing generative AI services to users in China, covering text, image, audio, video, and code generation. The Measures establish obligations on training data governance, content compliance, transparency, security assessment, and registration.
Does the China CAC Generative AI regime apply to non-Chinese companies?
Yes. The regime applies based on service delivery to Chinese users, not on the provider's place of establishment. Any organisation providing generative AI services to users located in China is in scope. Non-Chinese companies that serve Chinese users must comply or implement effective geographic restrictions preventing Chinese user access.
What is a security assessment under China's Generative AI regime?
The security assessment is a pre-launch evaluation conducted by the CAC that reviews the provider's training data governance, content filtering mechanisms, cybersecurity measures (including MLPS 2.0 compliance), and alignment with Chinese content standards. It applies to providers meeting defined thresholds for public-facing services in China. For foreign providers, the assessment effectively functions as a market access requirement.
How does the China generative AI regime compare to the EU AI Act?
Both regimes address AI governance but with different emphases. The EU AI Act is risk-categorised by use case and focuses on safety, transparency, and fundamental rights. The Chinese regime applies specifically to generative AI and emphasises content compliance against Chinese political and social norms, data localisation, and security assessment. Compliance programmes for the two regimes do not substitute for each other and must be developed in parallel for operators serving both markets.
What are the consequences of non-compliance with China's Generative AI Measures?
The CAC can issue corrective orders, impose fines, and suspend service provision in China. For foreign providers, service blocking is the most significant practical consequence, effectively preventing access to the Chinese market. Civil liability for harm caused by non-compliant AI-generated content applies under general Chinese tort law provisions.
References
- Cyberspace Administration of China, Ministry of Science and Technology, and five other ministries and agencies. Interim Measures for the Management of Generative Artificial Intelligence Services (生成式人工智能服务管理暂行办法). Issued 10 July 2023, effective 15 August 2023.
- Standing Committee of the National People's Congress. Personal Information Protection Law of the People's Republic of China (PIPL), effective 1 November 2021. Provisions on biometric data and personal information in AI-generated content.
- Standing Committee of the National People's Congress. Data Security Law of the People's Republic of China (DSL), effective 1 September 2021.
- Cyberspace Administration of China and others. Provisions on the Administration of Algorithm Recommendation in Internet Information Services (互联网信息服务算法推荐管理规定), effective 1 March 2022.
- Cyberspace Administration of China and others. Provisions on the Administration of Deep Synthesis of Internet Information Services (互联网信息服务深度合成管理规定), effective 10 January 2023.
- National Information Security Standardisation Technical Committee (TC260). GB/T 22239-2019: Information Security Technology — Baseline for Classified Protection of Cybersecurity (MLPS 2.0).
- Regulation (EU) 2024/1689. Article 50: transparency obligations for certain AI systems, for comparison with Chinese disclosure requirements.
- OECD. AI Principles (2024 revision). Section on transparency as a shared standard referenced across jurisdictions.