Vol. 2 — The Algorithm of Two Empires

Chapter 7: Institutional Rigidity — Why America Is Slow


Opening: Same Day, Different Rooms

September 2025, Washington, D.C.

Ten o'clock in the morning, Room 226 of the Dirksen Senate Office Building. The hearing room of the Senate Judiciary Committee's AI Subcommittee. The ceiling is high, the walls are marble, and the gallery is about half full. Nameplates sit before each senator, staffers standing behind them. Cameras are trained on the witness table.

Mustafa Suleyman takes his seat at the witness table. He is Microsoft's head of AI. British-born, a co-founder of DeepMind, he is known in the AI world less as an engineer than as a visionary. The senators' questions are polished, pointed enough to compete for the cameras but never sharp. How does he assess the risks of AI? What regulations does he consider necessary? Does Big Tech have the will to self-regulate?

Suleyman answers fluently and with confidence, then delivers the line that will be the most quoted from the day's hearing.

"Within eighteen months, every white-collar job will be automated."

A brief silence falls over the hearing room. Dozens of camera shutters fire in rapid succession. Some senators scribble notes. Others turn to glance at their staffers. Then the next question follows. "In that case, what do you believe is the corporate responsibility for protecting workers during that transition?" Suleyman answers. The need for retraining investment, technology transition support, and stronger social safety nets. A clear and perfectly generic response.

By that moment, Suleyman's remarks are already spreading across social media. "Eighteen months." Those two words are extracted and posted on X (formerly Twitter), LinkedIn, and YouTube Shorts. Paralegals, accountants, content marketers, and junior software developers watch the clip on their respective screens. The comments fall into two categories. Fear and anger.


One o'clock in the afternoon, K Street, Washington, D.C. At one of the downtown restaurants, two Microsoft lobbyists were having lunch with three Senate staffers. This happens every day in Washington, at dozens of tables. K Street is the symbolic address of America's lobbying industry. Law firms, lobbying agencies, and policy consulting shops cluster along it. The words exchanged on this street change the content of congressional bills.

The agenda for that lunch appeared unrelated to the morning hearing. The topic was "the possibility of federal preemption of state-level AI regulations." The substance was this: states like California, Texas, and Illinois had begun crafting their own AI regulations. From a corporate perspective, having to comply with fifty different sets of rules across fifty states would be an enormous burden. A single set of federal rules would be far more efficient. Those federal rules, of course, did not yet exist. And so, the argument went, would it not be advisable to pause state regulations until they did?

The staffers asked questions. The lobbyists answered. Memos changed hands. Business cards were exchanged. The meal ended.


Six o'clock in the evening. Suleyman's remarks had gone viral on X. Within two hours, the clip had surpassed three million views. "Every white-collar job automated within eighteen months." Hundreds of thousands of middle managers, analysts, legal assistants, and copywriters across the country read that number. LinkedIn filled with posts asking, "Will I be okay?" Forums erupted in debate. Which jobs would survive? Which skills should people learn? Was Suleyman exaggerating, or was this already happening?

That same evening, in Big Tech's lobbying offices across Washington, D.C., nothing happened. That was the point.


All three scenes took place on the same day.

They compress the structural contradiction of American AI governance. In the morning, the risks of AI are testified to before Congress. In the afternoon, lobbying to block AI regulation proceeds. In the evening, hundreds of thousands of workers read clips of that testimony and worry about their futures. Between these three scenes, there is no institutional connective tissue. Testimony carries no binding force. Lobbying is legal. Fear does not easily translate into votes.

As of early 2026, the United States, the world's largest AI power, has not passed a single comprehensive federal AI bill.

79% of the public wants AI regulation. 84% of Republican voters and 81% of Democratic voters agree. These figures, from a survey conducted by the Center for International and Security Studies at Maryland (CISSM), University of Maryland, are remarkably high by the standards of American politics. Abortion rights, gun control, immigration policy: no issue that cleaves American politics has ever achieved this level of bipartisan support. Yet Congress does not act.

This chapter dissects why.

How does Big Tech's lobbying machine work? In what ways do Big Tech CEOs operate as twenty-first-century versions of Crassus, the figure we met in Volume 1, transacting with power? Why does "79% in favor" translate into "zero bills passed"? And how does the historical band of institutional adaptation that we identified in Volume 1 (fourteen to sixty-four years) apply to the age of AI?

America is slow not for lack of technology; its technology is world-class. America is slow because its institutions are rigid. And that rigidity is not an inherent failure of democracy, but the result of the specific way money operates within it.


Section A. Lobbying as a Shield — How Big Tech Blocks Regulation

The Tobacco Industry Reborn

American history is rich with cases in which lobbying has served as a survival mechanism for specific industries.

The tobacco industry spent decades from the 1950s through the 1990s funding research that cast doubt on the causal link between smoking and cancer, delaying regulatory legislation. The oil and gas industry spent billions of dollars lobbying for withdrawal from the Paris Agreement and for weaker environmental regulations even after the science of climate change was firmly established. The National Rifle Association has blocked major federal gun control for decades, including in the years after the 2012 Sandy Hook massacre. The common structure in these cases is simple. A small group with enormous economic interests concentrates resources; the diffuse public interest fails to organize.

In 2025, a new name joined this list. Big Tech.

This time, however, the scale is different. Just as the Industrial Revolution reshaped physical production, AI is reshaping the productivity of cognitive labor. The economic value of that transformation dwarfs that of the tobacco and oil and gas industries. The combined AI capital expenditure of the Big Four tech companies in 2026 is projected at $635 billion to $665 billion. The question of what regulatory environment will determine the returns on these investments is a multi-trillion-dollar matter.

And so the scale of lobbying is different, too.

Anatomy of the Numbers

Over the nine months from January through September 2025, the combined lobbying expenditure of the seven major tech companies totaled $50 million. Spread across congressional session days, that comes to roughly $400,000 per day. With Ohio's median household income at approximately $55,000 a year, Big Tech spent the equivalent of seven Ohio households' annual income on lobbying every single day.

Over the same period, total lobbying revenue on AI-related issues reached $92 million. The seven major tech companies' spending accounts for a portion of that. The rest comes from AI startups, semiconductor firms, cloud companies, and lobbying funds from the finance, healthcare, and legal industries — sectors for which AI has become a condition of survival.

Between 2022 and 2025, the number of lobbyists working on AI issues grew 168%. As of 2025, 3,570 lobbyists (26% of all registered lobbyists in Washington, D.C.) handle AI-related issues. One in four. This figure shows that AI is no longer merely a tech-sector concern; it has become the gravitational center of Washington's entire policy landscape.

The picture sharpens further at the company level.

Meta's lobbying expenditure for the first three quarters of 2025 was $19.7 million. An all-time high. Meta fielded eighty-seven lobbyists during this period. The U.S. House of Representatives has 435 members, which means Meta deployed one lobbyist for every five House members. This density matches levels seen during the tobacco industry's peak in the 1970s and '80s and the oil and gas industry's in the 1990s. Historians who documented those eras described that density as a threat to American democracy. The same ratio is now being replicated in the AI sector.

Alphabet (Google) spent $12.2 million over the same period — an 11% year-over-year increase. OpenAI spent $2.1 million — a 68% year-over-year increase. OpenAI's growth rate stands out. OpenAI began as a nonprofit. Its founding charter contained the phrase "AI for the benefit of all of humanity." That company's lobbying spending rose 68% in a single year. The moment the regulatory environment becomes directly tied to their business models, these companies invariably head to K Street. That is the logic of incentives.

Super PACs also entered the picture. In the third quarter of 2025 alone, three new Super PACs funded by Big Tech were established. Since the Supreme Court's 2010 Citizens United ruling lifted limits on independent political spending, Super PACs have become the most effective channel for circumventing direct contribution limits. Lobbying influences sitting members of Congress. Super PACs influence who becomes a member of Congress in the first place. Operating both channels simultaneously makes it possible to engineer the entire political ecosystem that delays regulation.

The Four-Pronged Lobbying Strategy

Big Tech's AI lobbying is not random. It is organized as a precise four-pronged strategy.

First, push for federal preemption. This is the core strategy. States like California, Texas, Illinois, and Colorado have begun drafting their own AI regulations. Rather than blocking these state regulations directly — too brazen — Big Tech calls for "unified regulation" at the federal level. The argument is persuasive. Complying with different regulations across fifty states imposes enormous costs on companies. Consistent federal regulation is better for both businesses and consumers. The federal regulation, of course, does not yet exist. Neutralizing state regulations until a federal rule is created is the practical effect of this strategy. "We want unified regulation" is functionally identical to "we want to maintain the current absence of regulation."

Second, build a self-regulation frame. AI safety pledges, transparency reports, internal AI ethics committees. Big Tech companies announce these aggressively. None are binding. With no legal obligation, there is no penalty for violations. Their actual function lies elsewhere: they construct the narrative that "we are already regulating ourselves." The purpose is to cast doubt on the need for external regulation.

Third, exploit technical complexity. The average member of Congress has a low understanding of AI. The oft-retold hearing anecdote in which a member of Congress asked Google's CEO why his granddaughter's phone showed her certain ads is emblematic of this gap. AI lobbyists exploit this gap strategically. "AI is too complex a technology — we must fully understand it before writing regulations." Regulation is deferred until understanding is complete. But technology always changes faster than understanding. This logic is, by its nature, a logic of permanent deferral.

Fourth, shape elections through Super PACs. As already described, the combination of lobbying and Super PACs becomes a tool for engineering the entire legislative environment. It goes one step beyond blocking bills. It is about electing the members of Congress who will block them.

The Fate of SB-1047

A single case illustrates clearly how these strategies operate in practice.

In August 2024, the California state legislature passed an AI safety bill called SB-1047. The bill would have required companies developing AI models above a certain scale to conduct safety testing, build emergency shutdown mechanisms, and submit to third-party audits. It was the most ambitious state-level AI regulatory effort in the United States. Having passed both chambers of the state legislature, the bill awaited only Governor Gavin Newsom's signature.

Newsom vetoed it.

In the weeks before the veto, lobbying teams from Meta and Google shuttled repeatedly between Sacramento and San Francisco. The governor's office received arguments that the bill would stifle AI innovation and drive California's startup ecosystem to other states. Sam Altman of OpenAI publicly declared his opposition. Anthropic quietly submitted its objections. The academic community was split.

Newsom's veto statement ended with this: "We must not limit AI innovation but rather create an environment for the safe and responsible advancement of AI." Translation: not now.

In the state considered the most progressive in the country, home to the most powerful AI companies on Earth, AI regulation was blocked in this way. What this means at the federal level requires no explanation.

During the tobacco industry's heyday, cigarette companies lobbied most intensively against the state and federal agencies that were producing reports on the dangers of smoking. What they ultimately sought to prevent was the moment those reports became the basis for law. AI lobbying follows the same logic. What must be blocked is not regulation itself but the soil in which regulation can grow. SB-1047 was that soil.


Section B. Crassus in the Twenty-First Century — Big Tech CEOs and the Reconfiguration of Power

The Archetype: The Crassus Mechanism

In Volume 1, we met Marcus Licinius Crassus.

The richest man in Rome in the first century BC. His wealth came from fire. When flames broke out in Rome's wooden insulae, his private fire brigade arrived first. But the brigade did not douse the flames before negotiating a price. It waited until the property owner agreed to sell the building at a fraction of its value. Only then did it put out the fire. Crassus rebuilt the purchased buildings and rented them out. Rome's demand for housing was inexhaustible.

But Crassus's true innovation was not real estate speculation. Plutarch wrote: "No one in Rome was wealthier than Crassus." And that wealth was not merely used to acquire more property. It was used to build a mechanism for purchasing political power.

Crassus bought up the debts of Roman senators. A senator in debt to Crassus could not vote against his interests. When Caesar needed funding for his Gallic campaigns, Crassus provided it; when Pompey needed political support for his Eastern expeditions, Crassus supplied it. The First Triumvirate was born this way. Rome was ruled, formally, by the Senate and the popular assemblies; in practice, by an informal alliance of three men, whose financial foundation was Crassus.

Two thousand years later, the form differs, but the mechanism is the same. Wealth is invested in power, power designs institutions, institutions protect wealth, and within that protection, wealth grows larger. This is what Volume 1 named "the Crassus formula."

Musk: Crassus Inside the Government

Among the twenty-first-century versions of Crassus, the most extreme case in 2025 is Elon Musk.

Crassus never entered the Senate directly. He merely bought the debts of senators. Musk chose a different path. He walked directly into the government.

In the 2024 presidential election, Musk reportedly contributed more than $250 million to the Trump campaign (based on FEC filings and reporting by the NYT and WaPo). The result was the post of head of the Department of Government Efficiency (DOGE). Officially described as an "advisory committee," it exercised real authority over federal workforce reductions and budget cuts.

Trace the structure and the path becomes clear.

DOGE cuts the federal workforce. The eliminated functions are replaced by AI. As AI adoption accelerates, Tesla's autonomous driving software and xAI's Grok model directly benefit. Tesla gains the more its self-driving system operates free of federal regulation. xAI benefits the more its competitors (Anthropic, OpenAI) are subjected to stringent regulation.

On January 20, 2025, Trump revoked Biden's AI safety executive order (EO 14110). That same month, a new AI executive order (EO 14179) was signed, stripping safety testing requirements and reporting obligations. Whom these orders benefit is not a matter of opinion but of arithmetic. Without regulation, the side that moves fastest wins. The side that moves fastest is the one where resources are already concentrated: Big Tech.

The Anthropic-Pentagon dispute illustrates this structure from another angle. In February 2026, Anthropic announced it would phase out its contractual relationship with the Department of Defense over six months. The reason was clear. Anthropic's self-imposed red lines — a prohibition on autonomous weapons and on mass surveillance of American citizens — were, in its judgment, threatened by military contracts. The Pentagon responded by designating Anthropic a "supply chain risk." The order came from the Secretary of Defense. The very next day, OpenAI announced it had signed a contract with the Pentagon that included the same red lines. OpenAI stepped into the seat from which Anthropic had been removed.

Examine this sequence closely and a pattern emerges: how state power reacts when a company sets ethical boundaries. And within that pattern, which AI companies end up in the more favorable position relative to the state.

Just as Crassus privatized Rome's public infrastructure (fire services), DOGE is pushing the privatization and "efficiency" of federal infrastructure. The difference: Crassus used wealth to gain access to power; Musk is using power to design the ecosystem of wealth. The directions appear to be reversed, but the outcome is the same. Institutions are reconfigured to serve specific economic interests.

Zuckerberg: A Textbook Case of Tactical Pivoting

Mark Zuckerberg's moves in 2025 follow the Crassus formula in a more brazen fashion.

Through 2024, Meta's official stance was "responsible AI." The company announced investments in AI safety research and maintained the platform's content fact-checking system. In public interviews, Zuckerberg issued warnings about the risks of AI.

Starting in 2025, Meta changed course.

The fact-checking system was abolished. DEI (diversity, equity, and inclusion) programs were drastically scaled back. Zuckerberg was one of the few Silicon Valley CEOs to attend Trump's inauguration. And Meta's lobbying expenditure hit an all-time high of $19.7 million.

Zuckerberg's calculation was straightforward: improve relations with the Trump administration to reduce regulatory pressure on Meta's AI business. Two fronts mattered most. First, block federal regulation of Meta AI. Second, use federal preemption to neutralize the state-level AI regulations whose strongest provisions would hurt Meta.

When Crassus funded Caesar's campaigns, it was not pure support for Caesar. It was a calculation that Caesar's success would create a political environment favorable to Crassus. Zuckerberg's approach to Trump is no different. This is not an ideological conversion. It is a survival strategy.

Reducing this to a question of individual morality is an analytical trap. Condemning Musk and Zuckerberg personally is easy, but that narrative obscures the structure.

These men are actors optimized for the incentive structure that the American political system has created. In a country where Super PACs are legal, lobbying is legal, and granting executive branch positions to private citizens is legal, it would be irrational not to exploit these channels. As CEOs of publicly traded companies obligated to maximize returns for shareholders, using lobbying to block regulation is behavior that meets the expectations of their boards.

Crassus was not a bad person who imperiled the Roman Republic. The institutional vulnerabilities that made Crassus possible imperiled it. That was the lesson of Volume 1. And that lesson remains valid in 2026. The problem is not the individuals. It is the system.

The Asymmetric Table of Power

In this context, comparing the tools available to each actor that influences AI policy makes it clear why 79% public support yields zero bills.

Big Tech CEOs have lobbying funds, Super PACs, direct participation in the executive branch, media access, and specialized legal teams. These tools operate in real time, with concentrated force, and on repeat. They never pause from the moment an AI regulatory bill is introduced to the moment it is killed.

Middle-class white-collar workers have the vote. This tool operates once every two years, indirectly, and only over the long term. More critically, unless AI regulation becomes a deciding issue in voting decisions, this tool carries no force at all. The economy, inflation, immigration, abortion: these are the issues that decide elections. Virtually no voter switches their ballot to reward a candidate for passing an AI law.

Labor unions have collective bargaining rights. But unions are virtually nonexistent in the AI industry. Union membership in Silicon Valley tech companies is vanishingly low compared to traditional manufacturing. Traditional manufacturing unions are trying to win a seat at the AI negotiation table, but labor organization within the AI sector itself remains in its infancy.

Civil society organizations have the ability to shape public opinion. But their funding is not remotely comparable to Big Tech's. AI ethics researchers testify before Congress. They write reports. But none of this is binding. Lobbyists have direct access to lawmakers; the congressional access of civil society researchers is comparatively limited.

Academics have research and testimony. Equally nonbinding.

This asymmetry is the mechanism by which "79% of the public wants AI regulation and zero laws exist." Public opinion is real. But unless that opinion translates into votes at the ballot box, it carries no weight in the legislative process. And AI regulation is not yet an issue that decides elections.


Section C. 79% in Favor, Zero Bills — The Paradox of Asymmetric Attention

The Paradox of the Numbers

Return to the CISSM survey at the University of Maryland.

Americans who support AI regulation: 79%. Among Republican voters: 84%. Among Democratic voters: 81%.

To grasp the significance of these numbers, comparison is necessary.

What is the level of bipartisan support for gun control? According to Gallup's long-term tracking surveys, "stricter gun laws are needed" typically polls at 57% to 60%. Bipartisan consensus on abortion? It does not exist. Bipartisan agreement on immigration reform? It has been elusive for decades. Healthcare reform? The battle over Obamacare demonstrated the limits.

AI regulation alone achieved 84% among Republicans and 81% among Democrats simultaneously.

And the number of comprehensive federal AI bills passed: zero.

The only AI-related federal law Congress has enacted is the TAKE IT DOWN Act, signed in May 2025. This law makes the nonconsensual distribution of deepfake sexual exploitation material a federal crime. An important law. But it stands far removed from the core issues of AI governance — AI safety, AI transparency, AI and employment, AI and bias, AI and data privacy, and the accountability of AI developers. Citing the TAKE IT DOWN Act as evidence that "America has an AI law" is misleading. This is not a regulation of AI. It is a regulation of a crime committed using AI as a tool.

Asymmetry of Attention

This paradox is explained by what might be called the "asymmetry of attention." The concept is simple. Not every group affected by regulation pays attention with the same intensity.

79% say they support AI regulation in a survey. But how many would change their vote based on a candidate's AI regulation stance? Election analysts give a pessimistic answer: vanishingly few. Grocery prices, mortgage rates, healthcare costs, the border: these prove decisive in the voting booth. Between a candidate who promises "I'll create AI regulation" and one who says "jobs come before AI regulation," the winner is determined not by AI positions but by economic messaging.

For Big Tech, by contrast, AI regulation is a multi-trillion-dollar question. Alphabet's projected AI capital expenditure for 2026 is $175 billion to $185 billion. Meta's is $115 billion to $135 billion. Microsoft's is $145 billion per year. Run the numbers on what regulatory environment determines the returns on these investments, and the economics of lobbying become self-evident. If $19.7 million in lobbying expenditure reduces regulatory pressure on hundreds of billions of dollars' worth of AI operations, the return on that investment is unmatched by any business.

Call it the asymmetry of attention. The attention of the many is dispersed and low in intensity. The attention of the few is concentrated and extreme in intensity. In democratic politics, this asymmetry repeatedly resolves in favor of the few. Tobacco did it. Oil and gas did it. Finance did it. Now AI is taking their place.

Is this an inherent flaw of democracy? The evidence says no — but the correction takes time. The asymmetry can be corrected. When AI harms become visible, when AI begins to deliver palpable shocks to employment, when the media covers victims' stories repeatedly, and when AI regulation becomes an issue that changes votes — the political calculus shifts. The Factory Acts went through this process. But that process took time. Sixty-four years.

State-Level Experiments: Bypassing the Gridlock

While the federal government remains gridlocked, a different picture is unfolding at the state level. The other face of America's federal system is showing.

As of 2025, AI-related bills have been introduced in all fifty states. Of those, thirty-eight states have adopted approximately one hundred AI-related measures. On January 1, 2026, state AI laws in California, Texas, Illinois, and elsewhere went into effect. Where the federal Congress is gridlocked, state legislatures are moving first.

Each state's approach varies. Colorado introduced impact assessments and notification requirements for high-risk AI systems. Texas imposed bias audit requirements on companies deploying AI systems. Illinois tightened regulations on AI use in hiring processes. California mandated labeling of AI-generated content and raised AI safety standards for autonomous vehicles.

These measures are imperfect. The criticism that varying standards across states genuinely increase the compliance burden on businesses has merit. But these state-level experiments prove two things. First, democratic institutions have the will and the capacity to regulate AI. It is the federal level that is blocked, not democracy itself. Second, these experiments are producing data on what works and what is unrealistic — data that will inform federal regulation when it arrives.

At the same time, the threat to these state-level experiments is growing.

On December 11, 2025, Trump signed the "National AI Policy Framework" executive order. The order established an AI litigation task force and directed the Department of Commerce to assess whether state AI regulations conflict with federal AI policy. It also included a provision requiring states to repeal AI regulations as a condition for receiving $42 billion in federal broadband subsidies from the BEAD program.

This was the moment Big Tech lobbying's core demand, neutralizing state regulation through federal preemption, was realized through executive order. Not by banning regulation outright, but by imposing economic penalties on the states that enact it.

The one hundred measures across thirty-eight states remain in effect. But the fight for their survival has begun.


Section D. Fourteen to Sixty-Four Years — Institutions Are Slower Than Technology

When Volume 1's Numbers Meet the Present

There is one figure extracted from Volume 1.

In 1769, Richard Arkwright patented the water frame; within two years, his water-powered spinning mill was running on the banks of the River Derwent. Cotton began to be spun into thread by machine. The formal starting point of the Industrial Revolution. Sixty-four years later, in 1833, Britain enacted the Factory Act. It prohibited factory labor by children under nine, capped the working day of children aged nine to thirteen at eight hours, and introduced the factory inspectorate.

Sixty-four years. The figure presented in Volume 1 as a case study in institutional adaptation during the Industrial Revolution. The time elapsed from the moment a new technology begins to transform society to the moment institutions formally begin to address the damage of that transformation.

A single case does not make a "base rate." But examining other general-purpose technologies reveals a pattern.

The steam railway began commercial service on the Liverpool-Manchester line in 1830. Britain's Railway Regulation Act passed in 1844. Fourteen years. The automobile appeared with Karl Benz's patent in 1886. Licensing systems and speed limits became law in the 1900s and 1910s. Twenty to thirty years. Radio proliferated in the 1920s. The Federal Communications Commission (FCC) was established in the United States in 1934. Fourteen years. The internet was commercialized in 1991. The EU's GDPR took effect in 2018. Twenty-seven years. The United States still has no federal internet privacy law.

Social media took off in 2004 with Facebook. As of 2026, the United States has no comprehensive federal social media regulation. Twenty-two years and counting.

Generative AI went mainstream in November 2022 with ChatGPT. As of 2026, the United States has zero comprehensive federal AI bills. Three years have passed.

What these cases reveal is not a single number but a band. Institutional adaptation to civilian technology has moved within a range of fourteen to sixty-four years. When military urgency existed — nuclear energy (1942 to the Atomic Energy Act of 1946, four years) — the band compressed dramatically, but no such compression has been observed in civilian domains. Sixty-four years is not the "base rate" but the upper bound of this band. Where within this band AI's institutional adaptation will fall is the central question of this section.

The Speed Debate: Will This Time Be Different?

Applying the upper bound of sixty-four years naively would place AI regulation in 2086. This is plainly unreasonable. The speed of technological change has increased, the speed at which harms become visible has increased, and global competition is altering the political economy of regulation. The band will compress. The question is by how much.

There are reasons to expect it will be faster.

First, harms are becoming visible far more quickly than before. Deepfake-related harm, AI-powered scam calls, discrimination by hiring AI algorithms: these harms are reported in the media the moment they occur. When the first factories opened in 1769, the suffering of child laborers in Manchester took decades to enter public discourse. Today it takes days.

Second, global pressure exists. The EU AI Act took effect in 2024. China implemented generative AI regulations in 2023. The argument that America's regulatory vacuum gives foreign companies a relative advantage has begun to gain traction. The slogan "innovate without regulation" has begun to collide with the question "are we the only ones competing without regulation, and is that making us weaker?"

Third, the likelihood that AI becomes an election issue is rising. In 2025, direct AI-related layoffs totaled 55,000; in just January and February of 2026, 32,000 jobs were lost in the tech sector alone. Once AI begins to deliver visible shocks to employment, "AI regulation" becomes an "economic issue." The moment that happens, the political calculus changes.

But there are also reasons to expect it will be slower.

First, Big Tech's lobbying expenditure is at an all-time high. The institutional capacity to delay regulation is also at an all-time high. No previous technological revolution has been led by an industry with this level of financial resources and political access.

Second, the pace of AI development far outstrips the pace of regulatory design. Even when regulations are written, the technology may have advanced several generations by the time the law takes effect. This is misused as an argument for no regulation at all, but the more appropriate response is a change in regulatory design: regulating by risk category rather than by specific technology, the approach the EU has adopted.

Third, the Trump administration's deregulatory stance is unlikely to change before 2028. The federal preemption strategy continues.

Fourth, the geopolitical argument that "AI regulation benefits China" has become a new rationale for opposing regulation. The logic runs as follows: if the United States regulates AI, it slows American AI companies' pace of innovation. China regulates less, and so advances faster. Therefore AI regulation threatens U.S. national security. This logic is only half right. It ignores the fact that China's AI regulation is not simply "less" — it serves a different purpose.

China's Nine Months: The Other Side of Speed

November 2022: ChatGPT launches. August 15, 2023: the Cyberspace Administration of China (CAC) enacts the Interim Measures for the Management of Generative AI Services (生成式人工智能服务管理暂行办法). The world's first binding regulation of generative AI. Nine months from the appearance of ChatGPT.

The figure is impressive. And the contrast ("China: nine months; the United States: three-plus years and zero comprehensive laws") is dramatic. But reading this contrast as implying "China's AI regulation is superior" is to see only half the picture.

Read the text of China's interim measures on generative AI and two categories of provisions emerge.

The first contains the kind of content one would expect in a democratic country: mandatory clear labeling of AI-generated content, user privacy protections, transparency requirements on algorithmic use of data, and protections for minors. These are genuine elements of serious AI governance.

The second category is different. "Only content that adheres to socialist core values shall be generated." "Content that incites subversion of state power or the socialist system is prohibited." "Content that promotes ethnic or religious discrimination is prohibited" — a provision that, depending on context, could target AI-generated content about the situation of Uyghurs in Xinjiang. "AI models that generate content challenging the authority of the Communist Party are prohibited."

China's generative AI regulation is not simply regulating AI. It regulates AI as a means of controlling the political expression that AI makes possible. The speed of regulation is fast. But the direction of regulation diverges from what democracy envisions.

China's post-2025 AI regulatory measures share the same structure. The November 2024 "Qinglang" (清朗, "Clear and Bright") algorithm governance campaign requires platform algorithms to "propagate correct values." The AI-generated content labeling rules that took effect in September 2025 mandate both explicit and implicit labels, and the scope of "implicit labels" is a critical question.

Stated symmetrically, the picture looks like this.

The absence of U.S. AI regulation: Big Tech comes first. While regulation is absent, Big Tech is free to dominate markets and reshape society. Citizens come second.

The speed of Chinese AI regulation: the Party comes first. Regulation arrived quickly, but it operates in the direction of reinforcing Party authority. Citizens come second.

The two countries arrive at the same destination for different reasons. Neither is building AI governance for its citizens. In China, the Party is the benchmark; in the United States, capital is. This symmetrical failure is the central problem of global AI governance today. Between these two enormous failures, the EU is attempting, imperfectly, a third path.

The EU AI Act: Evidence That Democracy Can Regulate

The EU AI Act took effect in 2024, approximately twenty-one months after the emergence of ChatGPT. Slower than China's nine months but faster than America's three-plus years. What matters more is the substance.

The EU AI Act adopted a risk-based regulatory framework. It classifies AI systems into four tiers by risk level: unacceptable (prohibited), high risk, limited risk, and minimal risk. Biometric remote surveillance, social credit systems, and indiscriminate deepfake generation are, in principle, prohibited. AI used in hiring decisions, educational assessments, access to essential services, and law enforcement is classified as high risk and subject to stringent standards. Provisions governing high-risk AI systems take effect on August 2, 2026. Penalties for violations reach up to 35 million euros or 7% of global annual revenue, whichever is higher. American AI companies operating in the EU market must comply.

Criticism that the EU's approach is imperfect is fair. The definition of "high risk" is too narrow, leaving many important AI applications outside the regulatory scope. Compliance costs burden startups disproportionately compared to large companies. Concerns that it may hinder innovation are legitimate.

But the EU AI Act proves one decisive fact: that it is not structurally impossible for democratic institutions to regulate AI. The problem lies not in democracy but in the specific political structure of the United States: the lobbying system, federal-state conflict, and a polarized Congress. "Democracy is slow" is an accurate description of America's current condition, but an inaccurate description of democracy's nature.

If Sixty-Four Years Repeats

Volume 1 traced in detail what happened during the sixty-four years it took to enact the Factory Acts.

The average age at death for a Manchester textile worker was seventeen, twenty-one years short of the thirty-eight recorded for rural laborers. The weekly wage of a Lancashire handloom weaver collapsed from 25 shillings in 1805 to 4.5 shillings in 1835, an 82% decline in thirty years. Children worked in factories from five in the morning until nine at night, and no law prevented it. Technology raised productivity, and the fruits of that productivity went to factory owners. What was left for workers was longer hours, lower wages, and shorter lives.

The Factory Acts were not passed when the technology matured. They were passed when suffering reached a level that could no longer be ignored, when the condition of urban workers came to the attention of middle-class reformers, when the Chartist movement began to generate political pressure. When suffering became politics, institutions moved.

Will the sixty-four years of the AI era be different?

In 2025, direct AI-related layoffs in the United States alone totaled 55,000. In just January and February of 2026, 32,000 workers lost their jobs in the tech sector. 69% of paralegals' tasks are already classified as automatable by AI. COBRA insurance premiums run $584 a month, and student loan payments do not pause. The World Economic Forum projects that 92 million jobs will be displaced by AI and robotics by 2030. Simultaneously, projections call for 170 million new jobs to be created. But the fact that the transition between these two numbers will not be smooth is the essence of the problem.

A laid-off paralegal who wants to become an AI trainer or prompt engineer needs retraining. Retraining takes time, and during that time, living expenses continue. COBRA premiums, student loans, the mortgage: these debts do not wait for retraining to finish. If institutions do not support this transition, the displaced will either migrate to lower-wage service jobs or be pushed out of the labor market entirely.

Place the Lancashire handloom weaver of 1805 alongside the paralegal of 2025 and the surface conditions differ. The weaver was poor; the paralegal is middle class. But the structure of suffering is the same. Technological change commoditizes their skills. Productivity rises while their wages stagnate or vanish. Institutions do not support the transition. They lack the political power to change those institutions.

If sixty-four years repeats, what will happen to the displaced during that time is something we have already seen.


Connection to Volume 1: Institutions Are Slower Than Technology

The thesis of this chapter was already established historically in Volume 1.

Institutions are slower than technology.

Volume 1 narrated this fact through the Roman smallholders (second to first century BC), the Lancashire handloom weavers (1805–1835), and the "discerning" figures who appeared in both cases, Crassus and Arkwright. Between the speed at which technology transforms society and the speed at which institutions respond to that transformation, there has always been a gap. Some profit from that gap. Others suffer within it.

What Volume 1 captured was not merely the existence of this gap. It also captured the structure that maintains it. Crassus bought the debts of senators. Arkwright monopolized the factory system through patents and capital. Both had the incentive and the means to resist the institutional changes that would close the gap. And while they resisted, the suffering of the displaced continued.

Big Tech in 2026 is the twenty-first-century version of this structure. Lobbying, Super PACs, direct participation in the executive branch: these are the modern equivalents of Crassus's purchase of senatorial debts and Arkwright's patent monopoly. The forms have changed, and the scale has grown enormously. But the structure (incentives and means for maintaining the gap concentrated in the hands of the discerning) remains identical.

Volume 2, Chapter 7 applies this thesis in the present tense.

Where Volume 1 proved "the existence of institutional adaptation" historically, Chapter 7 of Volume 2 analyzes "the obstacles to institutional adaptation" in real time. Lobbying, polarization, federal-state conflict, the framing of U.S.-China competition. These obstacles are pushing the period of institutional adaptation toward the upper bound of the band.

A bridge for the reader: In Volume 1, we saw that the Factory Acts took sixty-four years. During those sixty-four years, the average age at death for a Manchester worker was seventeen. If "sixty-four years" repeats in the United States today, what will happen to the displaced during that time is something we already examined in Chapter 5. COBRA premiums, student loans, the holes in the safety net. The suffering of the eighteenth century and the suffering of the twenty-first take different forms. But the people who endure that suffering while institutions catch up to technology are always the same.

This pattern, however, is not permanent. The Factory Acts were eventually passed. It took sixty-four years, but they passed. The GDPR was eventually enacted. It took twenty-seven years, but it was enacted. When suffering becomes visible, when the political calculus shifts, institutions move.

When the tipping point arrives cannot be known today. But what 55,000 direct AI-related layoffs tell us is this: that tipping point is drawing closer.


Transition: Closing Part 2, Toward Part 3

Part 2 ends here.

America's strengths are real. 60% of the world's AI venture capital is concentrated in the Bay Area. NVIDIA controls 92% of the AI chip market. The Stargate Project is a plan to pour $500 billion into AI infrastructure over four years. Its weaknesses are equally real. 55,000 workers have been directly laid off due to AI, yet the federal retraining system does not function. Three years in, the number of comprehensive federal AI laws stands at zero. Big Tech lobbying converts 79% public support into zero bills.

Enormous strengths and enormous self-contradictions coexisting in the same country. This is America's self-portrait as of early 2026.

Now turn the mirror around.

In Part 3, we examine China through the same frame. The statement "China possesses strengths and weaknesses that are the exact inverse of America's" is only half true. Regulation with speed, but its purpose serves the Party, not citizens. Vast data, but behind it, information control imposes an invisible ceiling on innovation. The clarity of a state-led strategy, but a real estate crisis and a demographic cliff are shaking its fiscal foundations.

And there are China's displaced. The age-35 crisis. Graduate students reduced to delivery riders. Young people choosing tang ping (躺平, "lying flat"). They stand before the same AI revolution as America's displaced. The difference lies in the structure of the safety nets they face, the shape of the institutions, and the distance from power.

America does not yet have an answer. China has one, but it is an answer to a different question.

To see this symmetry and asymmetry at the same time. That is what we will do in Part 3.


America's AI hegemony rests on the dollar, GPUs, and immigrants. But the dollar's supremacy is cracking. GPU exports are being voluntarily restricted. The doors of immigration are narrowing. Is this self-cannibalization or strategic adjustment? It is time to hold up the mirror called China.


Appendix: Key Figures at a Glance

The principal figures cited in this chapter are compiled here.

Big Tech Lobbying Expenditure (January–September 2025)

Company / Item | Figure | Source
Seven major tech companies, combined | $50 million | Issue One
Total AI-related lobbying revenue | $92 million | BGOV
Growth in AI lobbyists (2022–2025) | +168% | Sludge
Share of all lobbyists handling AI | 26% (3,570) | Sludge
Meta lobbying expenditure | $19.7 million (all-time high) | Issue One
Meta lobbyists | 87 | Issue One
Alphabet lobbying expenditure | $12.2 million (+11% YoY) | Issue One
OpenAI lobbying expenditure | $2.1 million (+68% YoY) | Issue One
New Big Tech Super PACs | 3 (2025 Q3) | Issue One
Daily lobbying (by congressional session day) | ~$400,000 | Issue One

AI Regulation: Public Opinion vs. Legislative Status

Item | Figure | Source
Public support for AI regulation (all adults) | 79% | CISSM / Univ. of Maryland
Republican voter support | 84% | CISSM
Democratic voter support | 81% | CISSM
Comprehensive federal AI bills passed | 0 | Baker Botts
AI-related federal laws enacted | TAKE IT DOWN Act (1) | Baker Botts
State-level AI measures adopted | 38 states, ~100 measures | Drata

U.S. AI Regulation: Key Timeline

Date | Event
2025.01.20 | Biden AI safety EO 14110 revoked
2025.01 | EO 14179: promotes AI innovation, removes safety testing requirements
2025.05 | TAKE IT DOWN Act signed
2025.12.11 | National AI Policy Framework EO: checks state regulation
2026.01.01 | State AI laws take effect in CA, TX, IL, and others

Historical Band of Institutional Adaptation

Technology (Emergence) | Institutional Response | Years Elapsed
First spinning mill (1769) | Factory Act (1833, UK) | 64 years (upper bound)
Automobile (1886) | Licensing and speed limit laws (1903–1910s) | 20–30 years
Steam railway (1830) | Railway Regulation Act (1844, UK) | 14 years
Radio (1920s) | FCC established (1934, U.S.) | ~14 years
Internet (1991) | GDPR (2018, EU) | 27 years
Generative AI (2022) | Interim Measures (2023.08, China) | 9 months
Generative AI (2022) | AI Act (2024, EU) | 21 months
Generative AI (2022) | U.S. federal AI law | 3+ years, ongoing
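The elapsed-time column above reduces to simple date arithmetic. As a quick check, here is a minimal Python sketch; the dates are taken from the table (approximated to month precision where the table gives only a year), and the `lag_months` helper is ours, not from the text.

```python
from datetime import date

# (pairing, technology emergence, institutional response) rows from the table above.
# Year-only entries are approximated to January 1 of that year.
BAND = [
    ("Spinning mill -> Factory Act (UK)",        date(1769, 1, 1),   date(1833, 1, 1)),
    ("Steam railway -> Regulation Act (UK)",     date(1830, 1, 1),   date(1844, 1, 1)),
    ("Internet -> GDPR (EU)",                    date(1991, 1, 1),   date(2018, 1, 1)),
    ("Generative AI -> Interim Measures (China)", date(2022, 11, 30), date(2023, 8, 15)),
    ("Generative AI -> AI Act (EU)",             date(2022, 11, 30), date(2024, 8, 1)),
]

def lag_months(start: date, end: date) -> int:
    """Whole calendar months between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

for label, start, end in BAND:
    months = lag_months(start, end)
    print(f"{label}: {months // 12}y {months % 12}m")
```

Running this reproduces the band's endpoints: 64 years for the Factory Act at the top, 9 months for China's Interim Measures at the bottom.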

Anthropic-Pentagon Dispute (February 2026)

Item | Detail | Source
Anthropic federal contract phase-out | 6 months (announced 2026.02.27) | CNN / CNBC
Pentagon designation of Anthropic | "Supply chain risk" | Washington Post
Anthropic red lines | Ban on autonomous weapons and mass surveillance of U.S. citizens | CNN
OpenAI Pentagon contract | Signed immediately after Anthropic's removal (includes same red lines) | Fortune

Investor Lens: For the Investor Who Has Read Part 2

The investment implications running through the American section converge on a single point: the engine of innovation is intact, but the institutional transmission is broken.

The Silicon Valley ecosystem (Ch. 3) remains the world's strongest innovation infrastructure. The four-way cycle of VC capital, university research, immigrant talent, and capital markets cannot be replicated in the near term. This is the structural basis for the American tech premium.

Yet the dual structure of the dollar and GPUs (Ch. 4) creates concentration risk. The combined CapEx of the Big Five tech companies (including Oracle) rivals the total revenue of the semiconductor industry, meaning exposure to this group is effectively a bet on the entire AI cycle. This is where diversification is needed.

The spread of the displaced (Ch. 5) and institutional gridlock (Ch. 7) translate into political risk. As white-collar unemployment becomes visible, pressure for AI regulation will intensify. But because the lobbying structure blocks comprehensive legislation, regulation is likely to arrive "suddenly and all at once." The EU's AI Act is the precedent. The practical lesson of Part 2 is to distinguish in advance between sectors vulnerable to a regulatory shock (AI hiring platforms, surveillance technology, autonomous driving) and sectors that would benefit (compliance, AI auditing).

The discerning profile (Ch. 6) is a map to the next generation of growth stocks. The structure in which a one-person enterprise produces the output of a hundred creates investment opportunities in SaaS platforms, AI agents, and creator infrastructure. The common feature of these sectors: low fixed costs and high margins.

In Part 3, we apply the same questions to China. America's strengths are not automatically China's weaknesses, and the reverse is equally true.


Ch. 7 first draft completed: 2026-03-03 Next: Part 3 — China's Algorithm (Ch. 8: When the State Becomes the Discerning)