December 25, 2025 – China has removed a comprehensive AI law from its 2025 agenda but is not retreating from AI regulation. Instead, it is prioritising pilots, standards and targeted rules to manage AI-related risks while keeping compliance costs low. This phased approach offers flexibility but leaves firms navigating fragmented frameworks and overlapping obligations. Coordinating safety testing, transparency requirements and data governance remains the core challenge. A unified statute may still emerge once real-world risks and pilot outcomes are clearer, but for now incremental steps will shape China’s path to AI governance.
Sometimes, the fastest way to govern a moving target is to stop aiming for a bullseye. China has applied this wisdom to artificial intelligence (AI), quietly dropping plans for a single, high-level comprehensive legal framework from its 2025 legislative schedule, published in May. Beijing is instead prioritising pilots, standards and targeted measures, seizing the opportunity to learn from international experience before codifying an overarching statute.
The removal of the comprehensive AI legal framework proposal surprised many outside observers. The delay preserves regulatory flexibility for a technology still in its early stages, but forces stakeholders to navigate China's existing patchwork of fragmented AI rules.
Despite the removal, Chinese officials and state media continue to signal that high-level AI legislation is on its way. In May 2025, the Chinese state-owned newspaper Legal Daily argued that such legislation remains a critical part of ‘pushing for the healthy development of AI’. The National People’s Congress republished another Legal Daily commentary from June 2025, which argued that existing technology and privacy laws do not cover AI-specific risks such as algorithmic bias and discrimination.
Observers have debated the motivation behind the quiet removal of the high-level AI legislation. While some critics see it as a needless delay, others view it as a deliberate pause giving the technology space to mature. Shanghai Jiao Tong University Professor Bu Shou argues that updating existing statutes and issuing targeted rules is all that is needed to mitigate the risks arising from AI development in China.
China currently relies on existing statutes, industry standards and sector-specific measures to govern AI use. But as Professor Florence G’sell notes, government regulation tends to outperform self-regulation because industry-led standards and internal governance programs commonly prioritise performance over risk mitigation and accountability.
Companies using AI face higher compliance costs when fragmented frameworks clash and there is no high-level statute to guide them. Problems also arise from inconsistencies between emerging AI regulations and existing statutes. Shanghai’s Regulation on Promoting the Development of the Artificial Intelligence Industry expands access to public data for AI development, but it is unclear whether that access rests on consent, a public-interest rationale or another legal basis recognised under the Personal Information Protection Law (PIPL). Nor does the regulation address foreseeable challenges, such as training data drawn from multiple sources where some of the data was collected without informed consent or outside the other lawful bases set out in the PIPL.
Algorithmic transparency is another area of incongruity affecting AI development in China. Some rules require firms to explain how their systems work, even as trade-secret and security rules limit what they can disclose. Contradictory regulations raise the cost of doing business, especially for small- and medium-sized enterprises without large compliance teams.
Without comprehensive AI legislation, these tensions will only grow. The official state newspaper People’s Daily has stressed the need to coordinate development and security in AI legislation.
Coordination is the key lever for lowering compliance costs for companies deploying AI. A high-level coordinating statute could provide a forum to resolve conflicts and set uniform baselines for safety testing, bias evaluation and incident reporting.
Other countries and regions offer different solutions. The European Union’s tiered AI Act offers strong safeguards and legal certainty but demands heavy compliance, which large firms can absorb far more easily than small- and medium-sized enterprises. Japan’s lighter principles-first approach sits closer to China’s pilot-and-standards path but with weaker leverage. South Korea’s law pairs promotion with regulation, providing an example of how to balance innovation with safeguards.
Despite hopes that China would introduce a comprehensive legal framework on AI to resolve the problems with the existing regulatory regime, the government seems set, for now, on an incremental path. Regulators will keep issuing targeted measures, refining security assessments and expanding pilots in areas such as healthcare and smart cities. Standard-setting bodies will shape technical requirements for model evaluation, watermarking, data governance and cybersecurity testing. Major tech hubs such as Shanghai, Beijing and Shenzhen will serve as testbeds for data access, AI product procurement and regulatory supervision mechanisms.
China may still enact a comprehensive law in the coming years. Just as a 2016 fraud case accelerated passage of the PIPL, AI incidents arising from unmitigated risks, such as model collapse, systemic vulnerabilities in widely deployed models and AI-enabled fraud, could expose the limitations of existing regulations and swing public opinion towards a comprehensive AI law. The world will watch closely as new Chinese regulations are released and as testing, alignment and deployment get underway.