1. The Confidence of April 2021
It was a spring day in Brussels.
On April 21, 2021, the European Commission officially submitted the world's first comprehensive AI regulation bill. At the press conference, Executive Vice-President Margrethe Vestager laid out its vision. "Europe will set the global standard for AI regulation." Her guiding principle was straightforward: "The only meaningful digital transformation is one that puts people at the center." Vestager illustrated why this principle was necessary with a single scenario. "Imagine never being called in for a job interview. And then finding out that it's because an AI system has never been trained on someone like you — how would you even know?"
The bill's architecture was precise. It sorted AI systems into four tiers by risk level. Unacceptable risk: real-time mass biometric surveillance, social credit scoring, and similar practices. High risk: medical diagnostics, judicial decision support, hiring systems, and the like. Limited risk: chatbots — which must disclose to users that they are AI. Minimal risk: spam filters, game AI, and comparable applications. High-risk systems faced transparency obligations and mandatory pre-deployment conformity assessments.
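The four-tier architecture described above is, at bottom, a lookup table mapping risk categories to obligations. A minimal sketch, using only the categories and examples named in the text (the obligation strings are paraphrases, not the bill's legal wording):

```python
# Sketch of the 2021 bill's four-tier risk taxonomy as a lookup table.
# Categories and examples come from the text above; the "obligation"
# strings are informal paraphrases, not statutory language.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["real-time mass biometric surveillance", "social scoring"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["medical diagnostics", "judicial decision support", "hiring"],
        "obligation": "pre-deployment conformity assessment + transparency",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "disclose to users that they are interacting with AI",
    },
    "minimal": {
        "examples": ["spam filters", "game AI"],
        "obligation": "none",
    },
}

def obligation_for(tier: str) -> str:
    """Look up the regulatory obligation attached to a risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("limited"))  # a disclosure duty, nothing more
```

The design choice worth noticing is that the key is the *application domain*, not the underlying model — the assumption that would collapse seven months later.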
The preparation had taken more than two years. An internal group of AI experts within the Commission had refined the bill through dozens of stakeholder consultations. The expectation was clear: just as GDPR had become a global standard for data protection in 2018, this regulation would spread worldwide through the "Brussels Effect."1
Seven months later, ChatGPT was released.
2. What ChatGPT Changed
The history of AI regulation divides into before and after ChatGPT.
The world envisioned by the 2021 bill was one of "purpose-specific AI." A medical AI that reads X-ray images. A hiring AI that screens résumés. A financial AI that evaluates creditworthiness. Each system was confined to a particular domain, and its risk could be assessed based on that domain.
ChatGPT was a different species entirely. It could answer medical questions, analyze job postings, draft credit assessments. It provided legal advice, wrote code, composed poetry. It belonged to no domain while penetrating all of them.
This was the problem of "General Purpose AI" (GPAI) — a concept that appeared nowhere in the bill.
The Parliament's two co-rapporteurs for the bill, Brando Benifei and Dragoş Tudorache, essentially started over from scratch. They needed to design a new regulatory framework for foundation models — large language models like GPT-4, Claude, and Gemini. These models could not be slotted into any single risk category; regulation had to vary with the risk level of the applications built on top of them.
For Benifei, it was also a personal blow. Nearly two years spent refining the logical architecture of eighty-five articles. Dozens of stakeholder consultations, all-nighters with the Commission's technical experts, the bill brought to completion. And then punctured overnight. As he later reflected, "We were not fitting the technology to the legislation — we had to fit the legislation to the technology." That was democratic institutions exercising their capacity for self-correction. Painful, but necessary.
The redesign began in earnest in early 2023. Benifei and Tudorache held dozens of hearings with AI company engineers, academics, and civil society organizations. They had to start from the most basic question: what is a foundation model? Some experts viewed GPT-4 as a "product"; others called it "infrastructure." If it was a product, it could be regulated individually. If it was infrastructure, regulatory responsibility would cascade across every application built on top of it. That distinction reshaped the entire structure of the bill.
A new concept was inserted: "General Purpose AI" (GPAI). In particular, large models whose training compute exceeded a certain threshold would be subject to a new category called "systemic risk." Transparency obligations, copyright compliance, safety evaluations. On June 14, 2023, the European Parliament adopted the amended bill — now including the GPAI provisions — by a vote of 499 to 28, with 93 abstentions, securing a mandate for trilogue negotiations.2
This became the most fiercely contested issue in the negotiations later that year.
3. The Weight of Lobbying
While the bill was being redesigned, lobbyists descended on Brussels.
OpenAI, Google, Meta, and Microsoft expanded their European lobbying operations. The argument that "regulation kills innovation" followed the same logic as the financial lobby in Chapter 3 and the factory owners' lobby in Chapter 2. European companies were no different. France's Mistral AI and Germany's Aleph Alpha demanded softer GPAI regulations under the logic that Europe needed to cultivate its own "champions."
The governments of France, Germany, and Italy joined in. The three countries' industry ministers sent a joint letter: "Excessive regulation will hand Europe's AI competitiveness to the United States and China." Behind that letter lay the urgency of their domestic companies. Mistral AI had risen to become Europe's largest AI company within a year of its founding. Aleph Alpha positioned itself as Europe's "sovereign AI." The three governments argued that these companies could not be shackled by regulations written before they even existed.
On November 9, 2023, that logic exploded. At a Telecommunications Working Group meeting, France, Germany, and Italy abruptly demanded the withdrawal of the foundation model regulation provisions. The parliamentary delegation walked out of the meeting room two hours ahead of schedule. The EU's three largest economies had tried, at the last moment, to kill the bill's core. More than 600 hours of negotiation over three years had nearly been undone.3
Benifei refused to yield. "We cannot accept moving too far in the direction of limiting the protection of citizens' fundamental rights. Nor can we accept an approach that gives governments too much discretion on very, very sensitive matters."
It was regulatory capture, AI-era edition. Unlike the senators of Chapter 1 who charged in wielding chairs, this resistance arrived in tailored suits bearing official letterhead. But the structure was identical: the regulated shaping the rules meant to regulate them.
Meanwhile, on the other side of the globe, the opposite scene had already played out. On March 1, 2022, China's Cyberspace Administration (CAC, 国家互联网信息办公室) had implemented the world's first algorithmic recommendation regulation — the "Provisions on the Management of Algorithmic Recommendations in Internet Information Services" (互联网信息服务算法推荐管理规定). From public release to enforcement: just three months. And this was not a single agency's decision. The CAC, the Ministry of Industry and Information Technology (MIIT, 工业和信息化部), the Ministry of Public Security (公安部), and the State Administration for Market Regulation (国家市场监督管理总局) — four ministries had jointly issued the regulation. There were no stakeholder consultations, no lobbying, no opposition amendments. By the time three governments' lobbyists were trying to overturn the foundation model provisions in Brussels, Beijing had long since enacted its algorithmic regulation with stamps from four ministries. The difference in speed was a difference in systems.8
Benifei dismissed the industry argument outright. "I cannot understand the claim that this regulation stifles European companies' innovation. This law doesn't apply only to European companies. It applies to every company that launches an AI system in the European internal market." The logic that regulation becomes not a disadvantage but a condition for market access — GDPR had already proved it.
Yet lobbying's reach ran deep. Benifei later recalled: "One morning I came to the office and found an amendment on my desk that lobbyists had left the night before. It matched, word for word, an unreleased draft version we had been discussing internally."
While he held the lobbyist's document in his hand, thousands of kilometers from Brussels, algorithms were already reshaping people's lives. In a call center in Seoul, 240 agents had been mobilized to correct errors in an AI system — unaware that the very AI they were training would replace them. While Benifei was negotiating the definition of "unacceptable risk," that risk was already someone's daily reality.
4. Thirty-Six Hours
On the evening of December 8, 2023, three delegations gathered in a meeting room of the Justus Lipsius building in Brussels.
The European Commission delegation. The European Parliament delegation. The Council of the European Union delegation, representing ministers from all twenty-seven member states. This was the trilogue — the final negotiation to determine the bill's ultimate form.
Twenty-one critical issues lay on the table. Two were the most contentious.
The first: real-time biometric surveillance. The technology to identify and track specific individuals by recognizing faces in crowds. Benifei called it an "unacceptable risk." "This is the domain where the use itself is so dangerous that we simply do not permit it. Real-time biometric identification in public spaces, predictive policing, emotion recognition in workplaces and schools..." Civil society demanded a total ban. National governments wanted exceptions for security and counterterrorism.
The second: the stringency of GPAI regulation. What level of transparency and safety requirements should be imposed on foundation models like GPT-4?
Night bled into dawn. After midnight, delegates crouched in the corridor outside the meeting room, rereading instructions from their capitals. Negotiators said things in the hallway they could never say at the official table. "Our minister said absolutely not on this one." "Parliament's position is that without that provision, they'll kill the entire bill." Fifteen of the twenty-one issues were resolved before midnight. The remaining six were the crux.
At 3 a.m., informal negotiations broke out in the corridor. Under fluorescent lights, the French delegation and the parliamentary delegation faced off. The issue was the scope of exceptions for real-time biometric surveillance. France, citing the terrorist threat, demanded broad exceptions for law enforcement's use of facial recognition. "The Paris Olympics are eight months away. If we cannot identify terror suspects in crowds, who guarantees public safety?" The parliamentary delegation pushed back. "Once surveillance infrastructure is built, it is never dismantled. After the Olympics end, the cameras remain." Civil society organizations issued statements outside the meeting room calling for a total ban. AlgorithmWatch and EPIC warned that any exception would open the door to abuse.
In the adjacent room, technical numbers were being traded over the threshold for GPAI "systemic risk." Where to set the baseline for training compute. A single number would determine whether models like GPT-4 fell under additional regulatory requirements or not.
By 5 a.m., empty coffee cups littered the table and delegates' eyes were bloodshot. Calls to capitals were constant — confirming the limits of permissible concessions. One delegate ate a slice of pizza with one hand while on the phone with his country's minister.
At 6 a.m., the negotiations came to the brink of collapse. The Council and Parliament clashed over the training-compute threshold for GPAI's "systemic risk" designation. The Council wanted to raise the threshold to reduce the number of regulated companies; Parliament wanted to prevent existing models like GPT-4 from slipping outside the standard. Tudorache pulled the parliamentary delegation into the hallway. "If we give up here, there's nothing until we write a new law five years from now." Benifei echoed him: "An imperfect agreement is better than a perfect deadlock." At 7:30 a.m., the number was set: 10²⁵ floating-point operations (FLOPs) of training compute. That single number broke the final deadlock.
In the early morning of December 9, 2023, the provisional agreement was announced. It had been a marathon of more than thirty-six hours.4 Benifei said: "It was long and intense, but the effort was worth it."4 Tudorache followed: "The EU has established the world's first robust regulation on AI. This law sets rules for large-scale AI models to ensure that systemic risks do not affect the EU, and provides strong safeguards to protect citizens and democracy from the misuse of technology by public authorities."
Biometric surveillance: banned, with exceptions permitted under strict conditions such as terrorism investigations. GPAI: foundation model providers subject to transparency obligations and copyright compliance requirements. Large models posing systemic risk — defined by training-compute thresholds — would face additional safety requirements.
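The number that broke the deadlock turns the systemic-risk designation into a back-of-the-envelope check. A minimal sketch, assuming the widely used 6·N·D rule of thumb for training compute (six floating-point operations per parameter per training token — a scaling heuristic, not part of the Act) and purely hypothetical model sizes:

```python
# Sketch: checking a model against the AI Act's 10^25 FLOP
# "systemic risk" threshold for GPAI. The 6 * params * tokens
# approximation of training compute is a common rule of thumb,
# not a formula defined in the Act itself.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the 10^25 FLOP line."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical models (parameter and token counts are illustrative):
print(presumed_systemic_risk(7e9, 2e12))     # 7B params, 2T tokens -> False
print(presumed_systemic_risk(4e11, 1.5e13))  # 400B params, 15T tokens -> True
```

A single inequality, in other words, decides whether a provider faces the additional safety regime — which is exactly why where to place the exponent consumed the final hours of the negotiation.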
This was what democratic consensus looked like. It was imperfect. No side got everything it wanted. But it was a process in which twenty-seven countries, 705 parliamentarians, and dozens of civil society organizations had participated.
5. 523 to 46
March 13, 2024. The European Parliament plenary session in Strasbourg.
Benifei took the podium. Three years of work would be completed or destroyed by this vote.
The result: 523 in favor, 46 against, 49 abstentions.5
Applause swept through the chamber. A camera caught Benifei wiping away tears. Three years. The night ChatGPT forced him to start over from a blank page. The morning he found the lobbyist's amendment on his desk. The dawn of the thirty-six-hour marathon. All of it compressed into these numbers.
After the vote, reactions from AI companies were mixed. OpenAI issued a statement: "We will cooperate with the implementation of the EU AI Act and agree on the importance of responsible AI development." Yet the same company had reportedly spent millions of euros on lobbying for regulatory relief in 2023 alone. The sound of applause and lobby receipts coexisted on the same day. Benifei was aware of the irony. The day the law passed was not the day enforcement began. Enforcement would begin only when national supervisory authorities, following AI Office guidelines, identified their first actual violations.
One commentator summarized the bill's significance this way: "A complete 180-degree turn from the 'move fast and break things' mentality." Previously, regulation had been fragmented and reactive — it followed only after problems erupted. Now a harmonized, predictable, proactive framework had been established.
Commissioner Thierry Breton declared: "We withstood the pressure of lobbyists and special interest groups. The result is a regulation that is balanced, risk-based, and future-proof. It regulates as little as possible, and as much as necessary."
523 to 46. The numbers matter.
The 2010 Dodd-Frank Act passed the U.S. Senate by a vote of 60 to 39 — barely enough to survive a filibuster. The EU AI Act drew more than 90 percent of the votes cast for or against. That difference affects enforcement power as well. When someone challenges the law, the democratic legitimacy of 523 to 46 is a formidable shield.
6. The Cost of Slowness, the Value of Slowness
Three years. How should we evaluate them?
Critics say: when the bill was submitted in 2021, AI was already being used in healthcare, hiring, and finance. How many decisions were made by unregulated AI systems during those three years? Mr. Park, whose loan was denied by an algorithm; Driver Lee, whose delivery-app account was restricted by a platform algorithm — they waited three years for the EU AI Act to take effect.
Advocates say: look at GDPR. The data-protection regulation that Europe proposed in 2012 and enforced in 2018 has since become a global standard, with more than 120 countries adopting similar legislation.6 Slow democratic consensus spread worldwide through the "Brussels Effect." The AI Act could follow the same trajectory.
In Chapter 2, we evaluated the sixty-four years of Britain's Factory Acts. That slowness had two sides. It was the time during which tens of thousands of children suffered, and it was also the time during which social learning and evidence-based legislation took shape.
The EU's three years carry the same duality. Redesigning the bill from the ground up after ChatGPT's arrival was not a failure. It was a democratic institution exercising its capacity for self-correction in response to technological change. The thirty-six-hour marathon negotiation was not inefficiency. It was the process of converging twenty-seven countries' divergent interests into a single agreement.
Yet the question remains: where will AI technology be by the time of full enforcement in 2027? Even now, new models appear every six months. By the time a regulation written over three years takes effect, the technology it was designed to regulate will have turned over three generations.
That question was equally valid in Seoul. During the years 2022–2024, while CEO Choi was preparing to launch her startup, nineteen AI-related bills coexisted in the National Assembly (국회). Some fell under the Ministry of Science and ICT (과학기술정보통신부), others under the Korea Communications Commission (방송통신위원회), still others under the Financial Services Commission (금융위원회) — different jurisdictions, different definitions of "AI system" in each bill. Some mandated algorithmic transparency; others adopted self-regulation as their guiding principle. Had all nineteen passed, the result would have been nineteen mutually conflicting regulatory regimes. South Korea experienced a different form of chaos than the EU's three-year process: a competition among nineteen bills. Ultimately, they converged into a single law in December 2024 — the "Framework Act on the Development of Artificial Intelligence and Establishment of Trust" (인공지능 발전과 신뢰 기반 조성 등에 관한 기본법). The vote was 260 in favor, 0 against — a number reached not through three years of EU-style negotiation, but through a different mechanism: bipartisan consensus. Yet what that consensus included and what it left out began to surface in January 2026, when the law took effect.9
7. The Conditions for a Brussels Effect
Whether the EU AI Act will become a global standard like GDPR remains an open question.
GDPR succeeded under specific conditions. A single, clearly defined issue: data protection. A powerful enforcement tool: fines of up to 4 percent of global annual revenue for violations. An economic logic that created compliance incentives for any company needing access to the European market.
The AI Act's conditions are more complex. AI is far broader and faster-moving than data. The definition of high-risk AI may be continually reinterpreted as technology evolves. Whether the AI Office will have sufficient capacity for enforcement, and whether each member state will enforce the law consistently, remain open questions.
As of 2026, the AI Act is in force. It is being applied in stages.7
On February 2, 2025, the prohibited AI practices provisions took effect (Art. 5). Social scoring, real-time remote biometric identification in public spaces, emotion recognition in workplaces and educational institutions, and profiling-based predictive policing became illegal across the EU. Starting August 2, 2025, GPAI model obligations took effect and the AI Office's enforcement powers were fully activated (the AI Office was established by a European Commission decision on January 24, 2024). On August 2, 2026, comprehensive regulation of high-risk AI systems will begin.
Fines are differentiated by violation type (Art. 99). For prohibited practices: up to 35 million euros or 7 percent of global annual revenue, whichever is higher. For other obligation violations: up to 15 million euros or 3 percent. For submitting false information to regulatory authorities: up to 7.5 million euros or 1 percent.
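The fine structure pairs each violation type with a fixed cap and a turnover percentage. A minimal sketch of that ceiling calculation, assuming the higher-of-the-two rule that Art. 99 applies to large companies (turnover figures are illustrative):

```python
# Sketch of the Art. 99 fine ceilings described above. For large
# companies each ceiling is the higher of a fixed amount and a share
# of global annual turnover. The firm below is hypothetical.

FINE_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),  # Art. 5 violations
    "other_obligation":    (15_000_000, 0.03),
    "false_information":   (7_500_000,  0.01),
}

def max_fine_eur(violation: str, global_turnover_eur: float) -> float:
    """Ceiling on the fine: the higher of the fixed cap and the turnover share."""
    fixed_cap, pct = FINE_CAPS[violation]
    return max(fixed_cap, pct * global_turnover_eur)

# A hypothetical firm with 2 billion euros in global annual turnover:
print(max_fine_eur("prohibited_practice", 2e9))  # 7% of 2e9 -> 140 million
print(max_fine_eur("false_information", 2e9))    # 1% of 2e9 -> 20 million
```

For any firm above roughly half a billion euros in turnover, the percentage dominates the fixed cap — the same design that gave GDPR's 4-percent rule its bite.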
Whether the AI Act can follow the same trajectory as GDPR, however, remains uncertain. GDPR became a global standard because a single, clearly defined issue (data protection) combined with the economic incentive of access to a 500-million-person EU market. The AI Act's scope is far broader and technically more complex. Brazil initially designed its digital markets bill on the EU model, then redesigned it based on the German, British, and Japanese models. The United States and China have explicitly refused wholesale adoption of EU rules. The "Brussels Effect" is not automatic. It is produced by enforcement power and the conditionalization of market access.
The law has been written. Enforcement remains. The British Factory Act of 1833, too, was less important for the law itself than for its enforcer — Leonard Horner.
Brussels's three years did not end with the vote in Strasbourg. They truly begin the day the first enforcement decision is handed down in a member state supervisory authority's office. When that day comes, and which AI system that decision targets — that is the moment the numbers 523 to 46 acquire real meaning.
On the night those numbers lit up the scoreboard, on the other side of the globe, in a co-working space in Seongsu-dong (성수동), Seoul, CEO Choi was staring at her laptop screen. A thirty-two-year-old startup founder building an AI-powered medical imaging diagnostic solution. She read the news of the EU AI Act's passage and felt two things simultaneously: opportunity and dread.
Opportunity: entering the European market would require compliance with the EU AI Act. Companies that could comply would earn global trust. Being the first in Korea to meet EU standards would become a competitive advantage. Dread: annual compliance costs for high-risk AI ran approximately 29,000 euros per unit, with certification costs adding another 17,000 to 23,000 euros. For a startup surviving on 500 million won in seed funding, those figures were a matter of life and death.
Staring at the screen, she recalled the night before she founded her company. After finishing medical school and her internship, she had personally read thousands of chest CT scans until one certainty crystallized: AI can do this better. That conviction now sat frozen in a co-working-space chair, confronting the arithmetic of regulatory compliance. The law that Brussels's negotiators had spent three years fighting to create arrived like this for a thirty-two-year-old in Seongsu-dong, Seoul.
She sent a Slack message to her co-founder: "The EU AI Act passed. Could be our opportunity, could be our death sentence." What she did not yet know: South Korea's AI Basic Act would pass nine months later, and its maximum fine would be set at 30 million won — an amount incomparable to the EU's 35 million euros.
In the next chapter, we go to Beijing.
Where Brussels reached its conclusion through a thirty-six-hour marathon negotiation and a vote of 523 to 46, Beijing reached the same conclusion in eight and a half months. How was China's AI regulation made? And what did that speed cost?
A comparison of the two systems offers the clearest illustration of the trade-off between speed and legitimacy.