Vol. 4 — Slow Justice, Fast Order

Chapter 6 — Washington's Vacuum


1. Three Hours of Comedy and Tragedy

On May 16, 2023, OpenAI CEO Sam Altman testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. The hearing lasted roughly three hours. Senator Richard Blumenthal, chairing, asked Altman:

"What risks could the technology you have created pose?"

Altman replied: "My worst fears are that we — we the field, the technology, the industry — cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong." He was specific: "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models"3.

At the same hearing, New York University professor Gary Marcus was more direct: "We have built machines that are like bulls in a china shop — powerful, reckless, and difficult to control"4.

The exchange made headlines. But other senators' questions from the same hearing were also on the record. "What is ChatGPT?" "What is the difference between AI and the internet?" "If the program you built does something bad, can you just turn it off?"

It was comedy. It was also tragedy. At a hearing convened to discuss risks comparable to nuclear weapons, the legislators did not grasp the basic concepts of the technology they intended to regulate. Blumenthal had opened the hearing by playing an AI-generated clone of his own voice, precisely to show where the technology had already arrived. Yet even after that demonstration, a colleague asked, "Can't you just turn AI off?"

This was the American edition of information asymmetry. In Chapter 1, the Roman Senate's ignorance of the economic mechanics of the latifundia was not genuine — they chose not to see. The U.S. senators genuinely did not know.

But the more important scene came after the hearing. Look at what Altman actually requested.

He called for the creation of a new regulatory agency, licensing for AI models, and independent testing before deployment. "The U.S. government might consider a combination of licensing and testing requirements for development and release of AI models above a certain threshold of capabilities," he testified5. It sounded reasonable. But ask who benefits under this structure. The companies that already possess large models. Licensing and pre-deployment testing become steeper barriers to entry for smaller competitors and open-source communities.

The regulated designing the regulation. Unlike the Roman senators of Chapter 1, who charged with chairs raised overhead, this capture of incumbent advantage took place politely, in a congressional hearing room.


2. The Most Expensive Show of Hands in the World

Four months later, on September 13, 2023, Senate Majority Leader Chuck Schumer attempted something more ambitious. He convened a closed-door AI forum in the Kennedy Caucus Room. U-shaped seating. Elon Musk at one end, Mark Zuckerberg at the other. Bill Gates, Sundar Pichai, and Satya Nadella between them. Twenty-two panelists and host senators gathered in a single room.

Schumer opened: "This is truly unique, and it needs to be unique, because tackling AI is a unique, one-of-a-kind undertaking." After the forum, he told reporters: "I asked everyone in the room: does government need to play a role in regulating AI? And every single person raised their hand"6.

The world's wealthiest tech entrepreneurs voted unanimously that government regulation was necessary. Musk told reporters on his way out: "The question is really one of civilizational risk. It's not like one group versus another group of humans. This is something that's potentially risky for all humans everywhere"7.

After the press conference, Zuckerberg slipped down a back staircase to avoid reporters, moments after a meeting convened to discuss a civilizational crisis.

Schumer continued the forum as a series, the AI Insight Forum: closed-door briefings from Big Tech CEOs, held again and again through the fall. Not a single bill emerged.

It was the most expensive show of hands in the world. Everyone agreed on a civilizational crisis, then everyone did nothing.


3. Lobbying's 310 Percent

In the meantime, a different kind of action was thriving.

In 2022, 158 organizations were engaged in AI-related lobbying. By 2023, the number had risen to roughly 450 — an increase of approximately 185 percent. By 2024, it reached 648 — a further rise of more than 40 percent, and a cumulative increase of roughly 310 percent over two years8. CNBC reporter Megan Cassella reported: "More than 450 organizations registered just in 2023 to lobby the government specifically on AI." She added: "A lot of the biggest names in this space just launched their AI lobbying efforts for the first time in 2023. That's AMD, NVIDIA, OpenAI, Qualcomm, Cisco, just getting in the game"9.
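The cumulative figure is a direct computation on the organization counts above, with no additional data assumed:

\[
\frac{648 - 158}{158} \approx 3.10,
\]

which rounds to the 310 percent increase cited.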

Big Tech's total federal lobbying expenditures in 2024 tell the same story of scale, with one caveat: the disclosed figures represent each company's total federal lobbying spending, not AI-specific expenditures, since breakdowns for AI-dedicated lobbying are not made public. But for a company like OpenAI, whose entire business is AI, a sevenfold increase in that total signals the direction clearly enough.

Place these numbers alongside the data from Chapters 1 and 2. In 133 BC, the latifundia owners in the Roman Senate resisted agrarian reform. In Chapter 2, factory owners lobbied to block the Factory Acts for sixty-four years. The numbers have changed. The structure has not.

The regulated are invited as experts in the regulation debate. Legislators who do not understand the technology summon CEOs. The form of regulation CEOs prefer becomes the starting point of discussion. Incumbent capture operates politely and within the law.


4. A Castle on Sand

On October 30, 2023, President Biden signed an executive order on AI (Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence")12. As its full title suggested, safety and trust were the core concerns. It required disclosure of safety test results for frontier AI models, equity reviews, and agency-level AI governance frameworks. It was the most the executive branch could do without congressional legislation.

That was also its fatal limitation. A new administration could erase it in a day.

On January 20, 2025, Trump took office. A separate executive order signed the same day ("Initial Rescissions of Harmful Executive Orders and Actions") immediately revoked Biden's AI executive order (EO 14110). Three days later, on January 23, Trump signed a follow-up order (EO 14179, "Removing Barriers to American Leadership in Artificial Intelligence"), directing the modification or revocation of downstream actions tied to EO 14110 and ordering the development of an action plan to maintain U.S. AI supremacy within 180 days. It explicitly rejected "ideological bias or social agendas"13.

Two years of AI safety guidance and transparency frameworks prepared across federal agencies vanished in a day. An executive order is not legislation. It is a castle on sand, with no institutional permanence.

On December 11 of that same year, Trump signed yet another AI executive order, this one aimed at neutralizing state-level AI regulation. It directed the Commerce Department to compile a list of "overly burdensome" state AI laws. It instructed the Justice Department to establish an "AI Litigation Task Force" to challenge in court any state laws inconsistent with federal policy. And it turned the $42 billion in federal broadband infrastructure funds under the existing BEAD program into leverage, conditioning eligibility on "the removal of state AI regulations"14.

Legal scholars, however, have noted the order's constitutional limits. An executive order alone cannot invalidate existing state laws — only Congress or the courts can do that. State laws remain enforceable until legally challenged15.

In the absence of federal regulation, the administration sought to block state-level alternatives as well — an attempt to protect the regulatory vacuum by force of law.


5. The Day Silicon Valley Beat Silicon Valley

In 2024, a bill made its way through the California state legislature: SB-1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act." By late August it had passed both chambers.

It required safety testing for large AI models and mandated a mechanism to shut down systems capable of causing severe harm — a "kill switch." Geoffrey Hinton and Yoshua Bengio — between them holders of the Nobel Prize and the Turing Award, the field's highest honors — signed letters of support, arguing the bill was necessary.

On the other side stood Google, Meta, OpenAI, and Andreessen Horowitz. Their argument: the bill would stifle innovation. Regulating only large models while leaving smaller ones unregulated was illogical. California-based companies would be disadvantaged relative to firms in other states.

September 29, 2024. Sacramento. On Governor Gavin Newsom's desk, the Nobel and Turing laureates' letters of support and Big Tech's opposition briefs would have sat side by side. Newsom vetoed the bill. In his veto statement, he argued that "smaller, specialized models may emerge as equally or even more dangerous" than the large models targeted by SB-104716.

Critics read it differently. Look at the share of Big Tech contributions on Newsom's campaign finance list. The logical structure was the same as when the factory owners of Chapter 2 invoked laissez-faire to block the Factory Acts. When ideology and self-interest point in the same direction, distinguishing which is the true motive becomes impossible.


6. Self-Regulation Unmasked

Big Tech advanced a familiar argument: "You don't need to regulate us — we'll regulate ourselves."

In July 2023, major AI companies gathered at the White House to announce "voluntary commitments." AI safety testing, transparency reports, watermarking. Commitments with no enforcement mechanism.

The actual results of self-regulation tell a different story.

One day in December 2020, Dr. Timnit Gebru, co-lead of Google's AI ethics team, found her internal email account locked. She had attempted to publish a paper — "On the Dangers of Stochastic Parrots" — analyzing how large language models amplify biases embedded in their training data and how the environmental costs of training were being ignored. Management demanded revisions or retraction. On the day Gebru sent a protest email, Google notified her of termination.

What had Gebru been researching? The risks of the very technology Google was building. Her colleague Margaret Mitchell was fired from the same team months later. The company that had been among the first to publish "AI Principles" systematically removed the researchers who studied those principles. That was the reality of self-regulation.

In 2023, Microsoft significantly downsized its AI ethics team — reducing oversight capacity at the very moment it was investing most aggressively in AI.

In November 2023, OpenAI's board crisis. A safety-minded board fired CEO Altman. Five days later, the decision was reversed. Pressure from Microsoft and investors, combined with a collective letter from seven hundred employees, forced the board to capitulate. A case study in how governance actually works. Business beat safety.


7. Vacuum or Asymmetry?

A fair question must be asked here. Is America's absence of regulation truly a "vacuum" in the pure sense?

There is a counterargument. The very regulatory flexibility that leaves AI ungoverned may be the reason the United States leads the world in the field. Of the Boston Consulting Group's (BCG) twenty-five most innovative companies of 2023, sixteen were American. In the first half of 2024, U.S. AI investment accounted for approximately 42 percent of global venture capital, while the EU accounted for roughly 6 percent17. The EU produces more AI researchers per capita than the United States, but those researchers migrate to America, where regulation is lighter and capital more abundant. Europe cultivates the talent; America harvests it.

The correlation deserves acknowledgment. But correlation is not causation. U.S. dominance in AI reflects a web of factors beyond regulatory flexibility: military R&D investment, immigration policy, English as the default language of research, and deep capital markets.

A more precise diagnosis exists. According to an analysis in Science, actual U.S. AI policy is not an absence of regulation but a "regulatory asymmetry." Export controls on AI chips, industrial subsidies under the CHIPS Act, immigration controls, and federal procurement preferences for AI infrastructure companies — these represent substantial state intervention. The intervention is simply oriented toward industrial protection rather than consumer protection18.

Regulation that shields American AI giants from foreign competition is robust. Regulation that governs the impact those same giants have on domestic users is minimal. "Washington's Vacuum" is accurate not because the vacuum is total, but because it exists on only one side. A shield for industry, but none for citizens.


8. On CEO Choi's Desk

Seongsu-dong, Seoul. July 2023.

Three bundles of documents sat on CEO Choi's desk. Each bore the letterhead of a different law firm. Total legal fees: 12 million won (roughly $9,000). And the three bundles said three different things.

Choi had co-founded an AI-based medical imaging diagnostic startup in 2021. It received seed funding in 2022. By 2023, a single question had come to determine the venture's survival: was the company's AI solution classified as a "medical device" or as "software"? Law firm A: medical device approval required. Law firm B: it could be interpreted as an assistive tool, no approval necessary. Law firm C: current law could not determine the answer; they recommended applying for a regulatory sandbox.

Three lawyers, three answers. One said the business could proceed. Another said proceeding would be illegal. The third said no law existed to decide either way.

CEO Choi put the documents back in their folders. Through the window, the red-brick buildings of Seongsu-dong were visible. Investors had already started asking: "Is the legal risk resolved?" Partner companies hesitated: "What if regulations come later and we end up on the wrong side of the law?"

She later put it this way: "Some people say that when there's no law, anything goes. That's wrong. When there's no law, nothing is certain. The absence of law isn't freedom. It's lawlessness."

Her situation mirrored the structural reality of the U.S. AI startup ecosystem exactly — only the scale differed. An AI company operating nationally in the United States must simultaneously comply with the laws of fifty different states. Colorado's impact assessment mandate, Illinois's prohibition on AI-driven employment discrimination, California's automated decision-making system regulations. Just as CEO Choi received three conflicting opinions from three law firms, an American startup needs fifty different legal answers.

The absence of federal regulation is not freedom. It is fifty varieties of uncertainty.


9. The States' Patchwork

When the federal government does not act, the states do.

On May 17, 2024, the governor of Colorado signed SB 24-205 into law — the first comprehensive state-level AI law in the United States. It imposes impact assessments, risk management programs, and consumer notification requirements on AI systems making "consequential decisions" in employment, education, housing, health care, and financial services. Maximum fine: $20,000 per violation. Originally set to take effect on February 1, 2026, it was postponed to June 30, 2026, following a special legislative session in August 202519. Illinois's HB 3773 (prohibiting AI-driven employment discrimination, effective January 2026) and California's FEHA amendment (regulating automated decision-making systems, effective October 2025) followed in quick succession.

Fifty states had begun writing fifty different AI laws. Trump's December 2025 executive order explicitly targeted Colorado's law.

States trying to fill the regulatory vacuum, and a federal administration trying to preserve the void. Until this tension is resolved in the courts, companies cannot know which laws are in force.


10. The Mirror of Chapter 2

During the sixty-four years Britain spent without a Factory Act, laissez-faire was the reigning ideology. "The market regulates itself." "Freedom of contract." "Regulation kills innovation."

At the time of this writing, the same sentences repeat in the United States in different words. "Don't impede innovation." "Self-regulation is sufficient." "Regulation kills competitiveness."

In Chapter 2 we saw how that ideology ended. The Sadler Committee's 682-page book of testimony. The official record that six-year-old children worked sixteen-hour days. That is what shattered the ideology.

What will be America's Sadler Committee for AI regulation? When the harms produced by AI are recorded dramatically enough, publicly enough, and officially enough — that is when Congress will move.

One thread is already visible. On May 16, 2025, a federal court in the Northern District of California (Judge Rita Lin presiding) granted preliminary certification of a nationwide collective action in Mobley v. Workday — the first such ruling in an AI hiring discrimination case. The claim was that Workday's AI hiring algorithm systematically screened out applicants over the age of forty. The court recognized the AI algorithm itself as a "uniform policy applied to all class members" and ruled that Workday was not merely a software vendor but an "active participant" in the hiring process20. It marked a turning point in shifting AI vendor liability from deployer to developer.

If Sadler's 682 pages in 1833 were evidence inscribed on children's bodies, the 682 pages of the AI era are written in denied loans, rejected résumés, restricted accounts, and miscalculated bail scores. The difference is that no one has yet bound them into a single volume. Mobley v. Workday may become its first chapter.

When that moment will come is unknown. And until it does, the harm accumulates.


11. The Paradox of the Regulatory Vacuum

What happens when the United States does not regulate?

One consequence is counterintuitive. Without federal regulation, fifty states move independently. California, Colorado, New York, and Illinois draft different AI rules. A company operating nationally must comply with all of them. The result of not regulating becomes a greater regulatory burden than regulation itself.

Then there is the EU's "Brussels Effect." Just as the GDPR influenced more than 150 countries worldwide, the EU AI Act becomes a de facto obligation for American companies seeking access to the European market. Even if the United States does not regulate, companies that must comply with European regulation apply those standards to their American operations as well. A country that does not regulate ends up importing foreign regulation. A paradox.

Yet the Brussels Effect itself is under pressure. Brazil's AI regulation bill (PL 2338/2023) initially followed the EU model but was ultimately redesigned to blend elements from the German, British, and Japanese approaches. The United States and China have rejected wholesale adoption of EU regulation. Critics argue that the "Brussels Effect" is producing regulatory fragmentation rather than global convergence21.

As of 2026, no comprehensive AI bill has passed the U.S. Congress. The EU AI Act is in force. China has registered 346 AI services. South Korea's AI Basic Act (AI 기본법) has taken effect. While the world builds regulatory frameworks, the world's foremost AI superpower has no comprehensive federal AI law.

This is the laissez-faire of the twenty-first century.

In the first three chapters of this book, we observed a pattern. Rome waited 130 years, and the Republic collapsed. Britain waited sixty-four years before it acted. Finance waited thirty-seven years and suffered a crisis. American AI is still waiting.

In Chapter 12, we will return to this question — when slow justice is defeated, fast order takes its place. Just as Augustus came after the Gracchi. What the United States is waiting for will become clear over the next six chapters.


Notes