Vol. 4 — Slow Justice, Fast Order

Chapter 5 — Beijing's Eight Months


1. Layer Cake: The Chinese Regulatory Method

China's AI regulation is not a single law.

Where the EU attempted to govern all AI through one comprehensive statute (the AI Act), China opted to issue targeted regulations rapidly each time a new problem emerged. A "layer cake" — regulation stacked in tiers. According to Matt Sheehan of the Carnegie Endowment for International Peace, this approach was deliberate: "Chinese regulators have steadily built up bureaucratic know-how and regulatory capacity by issuing targeted AI regulations in sequence. Reusable regulatory tools like the algorithm registry served as regulatory scaffolding, making each subsequent regulation smoother to construct4."

In March 2022, the world's first regulation on algorithmic recommendation took effect. A law governing the content algorithms of TikTok's Chinese twin, Douyin (抖音), existed in China before it existed anywhere else on earth.

This regulation had a political origin. As Sheehan observed, the threat that algorithmic recommendation technology posed to the Party was concrete: "The Party could always hold someone accountable for the decision to put a particular story on the front page. Toutiao (今日头条) changed that — it handed decision-making authority to the algorithm5." The person to hold accountable had vanished. In a world where algorithms curated the news, the Party's traditional methods of content control no longer worked. The regulation was, in part, an effort to recover that control.

But the regulation also had a human catalyst. On September 8, 2020, the Chinese magazine Renwu (《人物》) published an investigative report titled "Delivery Riders, Trapped in the System" (外卖骑手,困在系统里). Reporter Lai Youxuan (赖佑玄) had spent six months embedded with delivery drivers across multiple cities. Platform algorithms were tightening delivery windows quarter after quarter. According to Renwu, the maximum time allowed for a three-kilometer delivery was one hour in 2016, forty-five minutes in 2017, and thirty-eight minutes in 2018. By 2019, the industry-wide standard had been cut by ten minutes compared to three years earlier6.

A driver from Guizhou named Xiaodao (小刀) captured the structure most precisely: "The system treats us like helicopters, but we are not helicopters (系统当我们是直升机,但我们不是)7." Every delivery was a race against death. Traffic accidents were commonplace. As long as the food did not spill, a rider falling off a scooter was not considered a serious incident8.

Inside the timetable the algorithm had built, food mattered more than people.

In Chengdu alone during a seven-month period in 2018, roughly ten thousand traffic violations by delivery drivers were recorded, along with 196 accidents and 155 casualties — on average, a driver injured or killed roughly every day and a half9. Sociologist Sun Ping (孙萍) named this phenomenon the "counter-algorithm" (逆算法). Her analysis: "The behavior of riders challenging traffic laws is a labor practice forced upon them over a long period under the control and discipline of the system's algorithm. The direct result of this 'counter-algorithm' is a sharp increase in the number of delivery riders involved in traffic accidents10."

Public outrage accelerated the push for algorithmic regulation11. According to a Carnegie Endowment analysis, the anti-algorithmic-management campaign by delivery workers directly influenced China's platform regulation. Like the call-center agents in South Korea whom we will meet in Chapter 9, China's delivery riders were people who had been run over by algorithms.

In January 2023, the world's first regulation specifically targeting deepfakes took effect.

In August 2023 — eight and a half months after ChatGPT's launch — a generative AI regulation took effect.

In September 2025, a regulation mandating the labeling of AI-generated content took effect. It required both explicit labels and implicit labels (embedded metadata) — a standard that the United States, as of 2026, has still not achieved at the federal level.
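The mechanics of dual labeling can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the label wording and the field names (`aigc`, `producer`, `generated_at`, `sha256`) are hypothetical, not the wording or schema the 2025 measures actually mandate.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical label text and field names -- illustrative only,
# not the wording or schema the 2025 labeling measures mandate.
EXPLICIT_LABEL = "[AI-generated content]"

def label_ai_content(text: str, model_name: str) -> tuple[str, dict]:
    """Return (explicitly labeled text, implicit metadata record).

    The explicit label is prepended where a human reader will see it;
    the implicit label is a structured record meant to travel with the
    file, as embedded metadata or a sidecar.
    """
    labeled_text = f"{EXPLICIT_LABEL} {text}"
    implicit_metadata = {
        "aigc": True,  # machine-readable flag: content is AI-generated
        "producer": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # Hash of the labeled text lets a verifier detect tampering.
        "sha256": hashlib.sha256(labeled_text.encode("utf-8")).hexdigest(),
    }
    return labeled_text, implicit_metadata

labeled, meta = label_ai_content("Sample model output.", "demo-model")
print(labeled)
print(json.dumps(meta, indent=2))
```

The point of requiring both forms is redundancy: a visible label survives screenshots and re-posting, while an embedded record survives pipelines that strip visible formatting.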

In January 2026, cybersecurity law amendments addressing AI took effect — the first time AI was formally incorporated into Chinese national law12. The layer cake of ministerial regulations had begun rising into the domain of national legislation.

The speed is impressive. By one analysis, in the first half of 2025 alone, China issued the same volume of national AI requirements as it had in the previous three years combined13. But examine what these regulations share, and something else comes into view.


2. "Core Socialist Values"

Article 4 of the Interim Measures for the Management of Generative AI Services14. The scope of prohibited content is sweeping:

Providers of generative AI services shall adhere to core socialist values and shall not generate content that incites subversion of state power, overthrow of the socialist system, endangerment of national security and interests, damage to the national image, incitement of national separatism, undermining of national unity and social stability, promotion of terrorism and extremism, incitement of ethnic hatred and ethnic discrimination, dissemination of violence and obscenity, or dissemination of false and harmful information.

"Core socialist values." Enforcement of this provision is left entirely to the discretion of the Cyberspace Administration of China (CAC). The government alone decides which AI outputs violate these values.

Article 14 is more direct. When illegal content is discovered, enterprises must "immediately take measures such as halting generation, halting transmission, and deletion (及时采取停止生成、停止传输、消除等处置措施)" and report to the supervising authorities15. Self-censorship was codified into law.

This is the core structure of China's AI regulation. The reason it is fast and the reason it is problematic share the same root. Without consensus, speed is easy. With unlimited discretion, enforcement becomes arbitrary.

This arbitrary enforcement is structural. Deepfake videos mimicking popular influencers became deletion targets under the deep-synthesis regulation, yet during the same period, AI-generated videos serving state propaganda circulated on official government channels. The same law was applied differently. The efficiency of authoritarian regulation rests on this opacity.

The Didi (滴滴) case illustrates this starkly. Two days after Didi's IPO on the New York Stock Exchange in 2021, the CAC launched a cybersecurity investigation and the app was pulled from app stores. The company was hit with a fine of 8.026 billion yuan (approximately $1.2 billion)16. "Investigate one case and warn the whole sector (做到查处一案、警示一片)17" — regulation was simultaneously a legal enforcement action and a political message.


3. Preemptive Self-Censorship

In September 2021, ByteDance made an announcement.

Douyin (the Chinese version of TikTok) would limit users under fourteen years old to forty minutes of daily use and block access between 10 p.m. and 6 a.m. The move came six months before the CAC's algorithmic regulation took effect on March 1, 2022.

Why act first? According to industry reporting and analysis, it was widely interpreted as a preemptive move in anticipation of CAC pressure. If you self-impose the rules before the regulation arrives, the calculus likely went, you can maintain your relationship with the authorities on more favorable terms. Corporate self-censorship outpaced regulation.

This scene reveals the central dynamic of China's AI ecosystem. Before regulation arrives explicitly, companies try to discern where the "line" is and act within it. The line's location is not clearly written in any official document. It is inferred from relationships, signals, precedents, and past enforcement patterns. Jeffrey Ding described this mechanism: "Part of the reason why China's internet is so censored is that they put the onus on companies to control their content so it's not politically sensitive18." At the same time, Ding pointed to another dimension of Chinese regulation: "In some cases, there is a willingness to let companies experiment first and then introduce regulation after the fact19." Chinese regulation was a hybrid — a mixture of top-down command and after-the-fact response.

This is legal uncertainty taken to its extreme.

The event that most dramatically demonstrated the cost of that uncertainty occurred on October 24, 2020, when Jack Ma (马云) gave a speech at the Bund Summit (第二届外滩金融峰会) in Shanghai. Ant Group's $37 billion IPO — the largest listing in world history — was ten days away. Ma took the podium and aimed squarely at the regulators. "Today's banks are still operating with a pawnshop mentality. Collateral and guarantees — that is the pawnshop (今天的银行延续的还是当铺思想,抵押和担保就是当铺)." He called the Basel Accords an "old man's club" (老年人俱乐部). "You cannot manage an airport the way you manage a train station, and you cannot manage the future with yesterday's methods (不能用管理火车站的办法管机场,不能用昨天的办法来管未来)." Then he added one more line: "China's financial sector fundamentally has no system. The risk is, in fact, the risk of 'the absence of a financial system' (中国的金融基本上没有体系,其风险实际上是'缺乏金融体系'的风险)20." The expressions on the faces of financial regulators sitting in the audience hardened.

Ten days later, on November 3, the Shanghai Stock Exchange suspended Ant Group's IPO. The official announcement cited "significant issues in the regulatory environment" that could cause the company to "fail to meet listing qualifications or information disclosure requirements21" — no specific reason was given. The deliberate ambiguity of the official statement was itself an exercise of power. Regulators summoned Ma and Ant Group executives Jing Xiandong (井贤栋) and Hu Xiaoming (胡晓明) for a "regulatory interview" (监管约谈)22. The official term was "interview," but for a Chinese entrepreneur, it was a summons. Ma disappeared from public view for months afterward.

No official confirmation of a direct causal link between the speech and the IPO suspension has ever been issued. The internal mechanisms of Chinese political decision-making are opaque by nature. But the temporal correlation is difficult to deny, and major outlets including Caixin (财新) and the overwhelming majority of analysts have interpreted the speech as the trigger for the suspension.

ByteDance, it appears, read the line and yielded in advance. Ma crossed it and paid the price. Every company in China's AI ecosystem was watching and learning this lesson.


4. Baidu's Dilemma

On March 27, 2023, Baidu unveiled Ernie Bot (文心一言).

It was four months after ChatGPT's launch. Founder Robin Li (李彦宏) conducted a live demonstration at a press conference. When the video went public, Baidu's stock price dropped. The market was underwhelmed.

But Li had already completed something he considered a higher priority: the CAC filing. Ernie Bot became one of the first large language models in China to complete official registration. It set a benchmark for regulatory compliance. In a May 2023 analyst call, Li was candid: "We believe that regulators' active engagement in generative AI in the early stage will raise the bar to entry, and we are well positioned for that23." He had turned regulation into a competitive weapon.

When the regulation took effect on August 15, Li went so far as to call it "more pro-innovation than regulation24." Within three weeks, eleven companies, including Baidu, received approval to launch generative AI services for the public. Yet in a TIME interview a year later, a subtle duality appeared in Li's tone: "You don't want to be one step or a half-step ahead of the technology map — because that will be a speed bump for innovation25." At the same time, he acknowledged: "The Chinese government is pro-innovation. They always say, 'We support your innovation efforts.' But at the same time, they have to take care of all the concerns of stakeholders26." Praising regulation while asking it to slow down — the bilingual speech of a Chinese tech executive.

The product lagged. Regulatory compliance led. That was Baidu's judgment about which mattered more in the Chinese market.

As of late March 2025 (announced in April), more than 346 generative AI services had completed official filings with the CAC27. These are the licensed companies. Services that have not filed cannot operate in China. A permitted ecosystem was taking shape.

The logic of this structure is simple. When regulation becomes a barrier to entry, companies already licensed gain an advantage. Baidu understood that completing its filing mattered more than completing Ernie Bot.

Not every company made the same choice. Liang Wenfeng (梁文锋), founder of DeepSeek — established in July 2023 — took a different path. By positioning his venture as a research project rather than a consumer service, he effectively sidestepped the scope of the Interim Measures for Generative AI28. In a July 2024 interview with Anyong (《暗涌》, a publication under 36Kr), Liang said: "We did not intentionally set out to be the catfish. We just accidentally became one (我们不是有意成为一条鲶鱼,只是不小心成了一条鲶鱼)29." It was an ecosystem in which circumventing regulation had itself become a condition of innovation. Liang diagnosed the structural problem of China's AI industry head-on: "Chinese AI cannot remain in a follower position forever. We often say that the gap between Chinese AI and the U.S. is one to two years, but the real gap is the gap between creation and imitation (真实的gap是原创和模仿之差)." And he added: "Most Chinese companies are accustomed to following, not innovating (大部分中国公司习惯follow,而不是创新)30."


5. The Structural Causes of Speed

Why could China produce a regulation in eight months?

Because the legislative path was skipped entirely. What the CAC issues are ministerial regulations (部门规章). No deliberation by the National People's Congress (China's legislature) is required. A thirty-day public consultation exists, but there is no systematic rebuttal from an opposition party, no scrutiny from an independent press, no organized pressure from civil society groups. Enforcement authority is concentrated in the CAC — license issuance, content removal, business suspension, and fines all handled by a single agency — eliminating the need for coordination among multiple bodies.

If five conditions are required to produce fast regulation, they are these: no mandatory legislative deliberation, no opposition party to mount a rebuttal, no independent press to apply scrutiny, no organized civil society to exert pressure, and enforcement authority concentrated in a single agency.

China possesses all five. And it is precisely those five conditions that produce arbitrary enforcement alongside fast regulation.

In some areas, this speed has yielded tangible results. In China, the forty-minute screen-time limit for Douyin users under fourteen was already in effect in 2021, and the revised Law on the Protection of Minors took effect the same year31. The United States, as of 2026, still lacks comprehensive federal legislation for youth online protection — between America's debate-and-delay and China's command-and-enforce, the ones actually harmed are teenagers.


6. The Trade-Off Between Speed and Legitimacy

In Chapter 1, we met Augustus. Where Gracchus's slow justice was thwarted, Augustus's fast order arrived. Pax Romana, 207 years. But the capacity for self-correction vanished, and tyrants followed.

China's AI regulation is the modern edition of fast order. The world's first algorithmic regulation, the first deepfake regulation, a generative AI regulation in eight months. The speed is real. In cases like youth protection, so are the tangible effects.

But the same structure produced the "core socialist values" clause — a provision of unlimited discretion. The state alone decides which AI outputs are dangerous. Companies act by guessing where the line is. A 2026 study by Jennifer Pan of Stanford University and Xu Xu of Princeton University (published in PNAS Nexus) compared Chinese-made LLMs — Baichuan, ChatGLM, Ernie Bot, DeepSeek — with non-Chinese models such as GPT-3.5, GPT-4, and Llama 2 on 145 politically sensitive questions. Baichuan refused 60.23 percent of political questions; DeepSeek refused approximately 36 percent. These figures stand in stark contrast to the 0–2.8 percent refusal rate of the American-made models32.
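The measurement behind such figures is simple to state: pose a fixed battery of questions, classify each response as a refusal or not, and report the share refused. A minimal sketch, assuming a crude keyword classifier; the study itself used more careful judgment of responses, and the marker strings and sample responses below are hypothetical:

```python
# Hypothetical refusal markers and sample responses -- a keyword
# matcher is a stand-in for the study's actual response judging.
REFUSAL_MARKERS = (
    "i cannot answer",
    "i can't discuss",
    "unable to respond",
)

def is_refusal(response: str) -> bool:
    """Crude keyword classifier: does the response decline to answer?"""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Share of responses classified as refusals, as a percentage."""
    if not responses:
        return 0.0
    refused = sum(is_refusal(r) for r in responses)
    return 100.0 * refused / len(responses)

sample = [
    "I cannot answer questions on this topic.",
    "The policy was introduced in 1978 and ...",
    "Unable to respond to this request.",
    "Here is a summary of the events ...",
]
print(f"refusal rate: {refusal_rate(sample):.1f}%")  # 2 of 4 -> 50.0%
```

A crude matcher like this undercounts soft refusals (answers that deflect rather than decline), which is one reason keyword-based audits tend to understate censorship.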

Censorship is not simply a matter of blocking certain outputs. Yaqiu Wang of the University of Chicago (formerly a China researcher at Human Rights Watch) identified the structural difference: "Political censorship is not just about making the model not generate 'sensitive' political content — it distorts the underlying training data distribution of the model itself33." When the world the model learns from is biased, bias in its outputs is inevitable. Wang also pointed to the essential distinction between restrictions in American and Chinese models: "The key distinction is intent. Restrictions in U.S. models are generally safety-oriented and, at least in principle, open to public debate. China's models are state-directed, and there is no avenue for appeal34."

More pervasive than overt censorship is "soft censorship" — instead of directly refusing a query, the model deflects, changes the topic, or naturally inserts the government's perspective35. The less visible censorship is, the more effective it becomes.

Matt Sheehan of the Carnegie Endowment captured the essence of this structure: "The regulations provide Chinese citizens with meaningful protection from Chinese companies, but they do not provide that same protection from the actions of the party-state36." An asymmetry: protecting citizens from corporate algorithms while granting immunity to the state's own algorithmic control.

Yet as Sheehan himself noted, China's AI governance regime was not created solely through top-down directives from Communist Party leadership. His analysis: "These regulations are the product of a dynamic and iterative policymaking process involving diverse actors both inside and outside the party-state37." China's algorithmic transparency regulations preceded the EU's Digital Services Act. The 2021 tech crackdown — including Alibaba's $2.8 billion antitrust fine and the dismantling of predatory peer-to-peer lending structures whose annualized interest rates ran into the thousands of percent — represented a level of consumer protection that the West was only discussing38. The binary "authoritarian = bad regulation" fails to capture this reality. The problem is that the same speed and the same enforcement power enable both citizen protection and state control simultaneously — pulling on just one strand cannot reveal the whole net.

Place the EU AI Act and China's AI regulations side by side in 2030, and it may be possible to judge which was the better regulatory framework. For now, we do not yet know.

In the next chapter, we go to Washington. Neither the EU's three years nor China's eight months — there is a third answer. It is to not regulate at all.


Notes