Vol. 2 — The Algorithm of Two Empires

Chapter 3: The Ecosystem Called Silicon Valley — Why AI Exploded Here


Opening: The Boy from Tainan

Spring 1973. Bangkok, Thailand. The dust of the latest coup had not yet settled. In this city, a family made a decision. The father was a refinery engineer; the mother, a schoolteacher. Originally from Tainan in southern Taiwan, they had moved to Bangkok five years earlier for the father's work. But they could no longer keep their children in a country where tanks appeared on the streets. They bought two plane tickets. The tickets were for the children, not the parents. The two brothers, ten and nine years old, would go first. Their destination: Tacoma, Washington, where an uncle they had never met lived.

Nine-year-old Jensen Huang did not speak a word of English. The uncle believed he had found a good boarding school. What the brothers were actually sent to was the Oneida Baptist Institute in rural Kentucky — a reform school that took in students expelled from county schools. On the first night, the boy's roommate was a seventeen-year-old with stab wounds. The boy's daily chore was scrubbing bathrooms for a hundred teenage boys; his brother worked the school's tobacco fields. The only contact with their parents was cassette tapes recorded and mailed back and forth. Two years later, their parents arrived in Beaverton, Oregon, and the brothers finally reunited with their family. The boy transferred to a public school and skipped two grades.

Nineteen years later. In 1992, Jensen Huang completed his master's in electrical engineering at Stanford. The following year, he and two colleagues rented a small office in Santa Clara, California. Their entire capital was $2 million of venture funding. The company's name was NVIDIA. What exactly they would build was not yet clear. They had only a direction: graphics chips.

November 2025. NVIDIA's quarterly earnings call. Jensen Huang stepped up to the microphone. In his right hand was a slip of paper. On it were numbers. Third-quarter total revenue: $57 billion. Data center revenue alone: $51.2 billion. Trailing twelve-month total revenue: $187.1 billion. That single company had generated 78% of Samsung Electronics' annual revenue on its own. Since its founding, NVIDIA's share price had risen tens of thousands of percent, and it had briefly held the title of the world's most valuable company by market capitalization.

That same day, in a bureaucrat's office in Washington, D.C., the Trump administration was finalizing a proposal to raise H-1B visa fees to as much as $100,000. This was the visa that immigrants like Jensen Huang had used to work in the United States.

The distance between these two scenes (the quarterly results of the world's most valuable technology company and the policy aimed at blocking the immigrants who built it) is the subject of this chapter.


Jensen Huang's story is not a tale of individual success. Read it that way and you miss the point.

The real question: Why was an immigrant boy from Tainan able to build the world's most valuable technology company? Had he stayed in Taiwan, would NVIDIA have been born? Had he gone to Beijing, would today's NVIDIA have been possible? Had he chosen London, would a company generating $187.1 billion in annual revenue in 2025 have emerged from the United Kingdom?

The answer lies not in the individual but in the ecosystem. Jensen Huang's genius matters less than the fact that his genius landed in Silicon Valley. Even if he had boarded the same plane, a different destination would have made him a different person today.

This chapter disassembles the structural conditions of the ecosystem called Silicon Valley. It examines how four pillars — VC capital, the civilian transfer of military technology, immigrant talent, and the university research pipeline — interlocked to create the epicenter of the AI revolution. It traces the pressures now bearing down on each pillar. And it asks where the structure of this ecosystem resembles, and where it diverges from, the innovation clusters of Industrial Revolution-era Britain 250 years ago.


Section A: The Capital Accelerator — The VC Ecosystem and the Age of Mega-Rounds

In 2025, total venture capital investment in AI startups worldwide reached $211 billion, a 75% increase over the previous year. Sixty percent of that money poured into the Bay Area, the roughly 50-kilometer corridor stretching from San Francisco to Palo Alto. $127 billion concentrated in a single metropolitan corridor.

The number needs a benchmark to register. South Korea's total national defense budget in 2025 was $59 billion. The AI investment funneled into this one corridor exceeded twice the Korean defense budget. And this was not government spending. It was private capital.

Break it down at the San Francisco local level and the concentration sharpens further. Of $126 billion in local AI investment, $113 billion flowed into just 92 companies, all of which had raised mega-rounds of $100 million or more. The remaining hundreds of AI startups split $13 billion among themselves. Capital concentration was narrowing not merely to the United States, not merely to the Bay Area, but to a handful of firms within the Bay Area itself.

A traditional VC round at the Series A stage runs $5 million to $50 million. By 2025, the very label "venture capital" had become inadequate. The concept itself had collapsed.


In February 2026, Anthropic announced its Series G round. $30 billion. A single fundraise exceeding half of South Korea's annual defense budget. Amazon committed over $40 billion as a strategic investor; Google added $20 billion more. The company's valuation reached $380 billion.

OpenAI's valuation was heading toward $500 billion. This organization, founded as a nonprofit in 2015, had reached that scale in ten years. Elon Musk's xAI was trading at roughly $60 billion. The data-labeling startup Scale AI carried a valuation of $135 billion. Anysphere, maker of the coding assistant Cursor, had crossed $1 billion in annual recurring revenue (ARR).

These numbers all point in one direction. AI investment has already left the domain of venture capital. Single rounds the size of national budgets, Big Tech's "equity wars" to lock in cloud supply — this is investment on a quasi-sovereign scale. Investors naturally expect returns, but bets of this magnitude are closer to strategic positioning than revenue calculation. Amazon invested in Anthropic not for a share of Claude AI's revenue, but because the cloud platform Anthropic uses for training is AWS.


Why does this money collect here and not elsewhere? Look at structure rather than geography, and the answer appears.

Silicon Valley's VC ecosystem differs fundamentally from every other place on earth in three structural respects.

The first is a culture of tolerated failure. Chapter 11 of the U.S. Bankruptcy Code is designed around rehabilitation. A founder who runs a company into the ground has a legal basis for starting over. In Silicon Valley, the "serial entrepreneur" is not an object of contempt but a credential of trust. Investors are paradoxically more willing to back a founder who has failed once before — failure becomes an information asset. This is not some cultural quirk of American optimism. It is an institutional structure forged over decades by bankruptcy law, the tax code, and investment norms.

The second is the power-law investment model. Major VCs like Sequoia Capital and Andreessen Horowitz (a16z) operate on a structure where one or two portfolio companies returning 100x can offset every other loss. This is not "tolerating" failure; it is a revenue model that incorporates failure. These firms will therefore bet into uncertainty whenever they see even a sliver of potential upside. A single failure weighs little against the total portfolio loss, while a single success can flip the entire fund's returns. For startups, this model means that Silicon Valley is the only ecosystem on earth where capital permits the uncertainty inherent in ideas.
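The arithmetic behind that model is easy to make concrete. The sketch below uses purely hypothetical check sizes and return multiples (a twenty-check fund with one 100x outlier), not any actual Sequoia or a16z portfolio:

```python
# Power-law fund math: hypothetical numbers, not any real firm's returns.
# A $200M fund writes twenty $10M checks; eighteen fail outright,
# one returns 3x, and one returns 100x.
investments = [10.0] * 20               # $M deployed per check
multiples = [0.0] * 18 + [3.0, 100.0]   # exit multiple per check

proceeds = sum(chk * m for chk, m in zip(investments, multiples))
fund_size = sum(investments)

# The single 100x outcome more than covers all eighteen zeros.
assert proceeds > 5 * fund_size
print(f"deployed ${fund_size:.0f}M, returned ${proceeds:.0f}M")
# → deployed $200M, returned $1030M
```

An 18-out-of-20 failure rate still yields a better-than-5x fund, which is why the marginal failure costs these firms so little.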

The third is network density. Stanford University, UC Berkeley, Sequoia Capital, Andreessen Horowitz, Google, Meta, OpenAI, and Anthropic all sit within a 50-kilometer radius. A founder with an idea can drive twenty minutes to meet a VC. That VC partner ran into a Stanford professor at a party the night before. That professor's doctoral student is interviewing at the VC's office tomorrow morning. The speed of this cycle — from idea to capital, from research to startup — is unlike that of any other city in the world.

This density is no accident. It began when Stanford professor Frederick Terman mentored the founders of Hewlett-Packard in the late 1930s and provided them with university facilities and office space. The deliberate choice to physically close the distance between industry and the university built, over decades, an entire ecosystem.


But a pause is necessary here, because describing the VC ecosystem as an intrinsic American advantage reveals only half the truth.

In 2025, nine of the top ten global open-weight AI models were Chinese-made (ChinaTalk, based on select benchmarks). DeepSeek; Alibaba's Qwen (Tongyi Qianwen, 通义千问); ByteDance's (字节跳动) Doubao (豆包); Baidu's (百度) ERNIE (Wenxin Yiyan, 文心一言); and Pangu (盘古)-series models backed by Huawei (华为). All were built without VC mega-rounds. They are counter-evidence that capital is not a sufficient condition for innovation.

In the United States, $211 billion concentrated in a few players: OpenAI, Anthropic, xAI. In China, government subsidies, the internal funds of major internet conglomerates, and privately backed researcher groups like DeepSeek (funded by a hedge fund) each pushed AI in their own way. The result: the United States dominates in capital scale; China leads in the diversity and deployment speed of open models. Which advantage proves more durable in the long run remains an open question.

Capital does not create everything. But without capital, certain kinds of innovation become impossible. Training a single GPT-4-class model costs roughly $100 million. Gemini Ultra costs more. Only two ecosystems on earth can absorb investment at that scale: Silicon Valley and Chinese state-backed funds. The frontier AI race demands capital in some segments and talent in others.


One more fact stands out in the 2025 AI investment landscape. Of $560 billion in total AI investment, one-third concentrated in just five companies. This is not simple capital concentration. It is a structural outcome.

The fundamental nature of AI demands capital concentration. Under compute-scaling laws, model performance improves in proportion to the amount of compute used for training. Better models therefore require more GPUs, more data, more power. All three cost money. In a game of scale, the side with more capital wins.
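Those scaling laws can be written down. A minimal sketch, using the parametric loss form and fitted constants published in the "Chinchilla" paper (Hoffmann et al., 2022); the constants are that paper's estimates, not figures from this chapter:

```python
# Parametric compute-scaling law: L(N, D) = E + A/N^alpha + B/D^beta.
# Constants below are the fits reported by Hoffmann et al. (2022,
# "Chinchilla"); they are illustrative here, not data from this chapter.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for n_params parameters trained on
    n_tokens tokens of data."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# More parameters and more data strictly lower predicted loss --
# the "more GPUs, more data, more power" logic in the text.
loss_7b = predicted_loss(7e9, 1.4e12)     # ~7B params, 1.4T tokens
loss_70b = predicted_loss(70e9, 1.4e13)   # ~70B params, 14T tokens
assert loss_70b < loss_7b
```

Because the exponents are well below 1, each further reduction in loss demands multiplicatively more parameters and tokens, which is why every frontier generation costs more than the last.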

The result is what might be called the "oligarchization of AI." When five companies control the AI infrastructure supply chain, every other startup can build only a thin application layer on top of their APIs. The structure mirrors the railroad monopolies of the 1800s, which constrained the freedom of "businesses on the rails." When a railroad company raised freight rates, the farmers who depended on it suffered. When OpenAI raises its API prices, the startups building services on top of it suffer.

This structure is also the paradox of the innovation ecosystem that Silicon Valley created. The capital that accelerates innovation becomes, once sufficiently accumulated, the barrier that constrains it.


Section B: From DARPA to ChatGPT — The Civilian Transfer of Military Technology

Dawn, March 13, 2004. The Mojave Desert, near the California-Nevada border. Temperatures hovered near freezing. Fifteen vehicles lined up at the starting line. They looked nothing alike: modified Humvees, off-road trucks, battered pickups. But they shared one feature. No one sat in the driver's seat. Behind the wheel, behind the windshield, in the passenger seat, no human being.

This was the first Grand Challenge, hosted by the Defense Advanced Research Projects Agency (DARPA), an arm of the U.S. Department of Defense. The rules were simple. Drive 240 kilometers autonomously and win $1 million. "Autonomously" meant exactly what it said: no human inside the vehicle. Full self-driving.

A starting gun fired. The vehicles crossed the line.

The one that traveled farthest stalled at 11.9 kilometers, its wheels caught on a fence. The rest covered less. One flipped immediately after launch. Military brass made no effort to hide their disappointment. The press mocked the event as "the greatest robot failure show in history." The $1 million prize went unclaimed.

Yet that "failed competition" planted the seed of the global autonomous-driving industry.

The following year, in the 2005 edition, Stanford Racing's vehicle "Stanley" became the first to complete the full course. The team's central figure, Sebastian Thrun, later joined Google and founded its self-driving division, the unit now known as Waymo. Other participants from the 2005 race became the core engineers of Uber's autonomous-driving team (Uber ATG). DARPA had funded failure, and that failure built an industry.


A single pattern recurs throughout American technological innovation: military crisis, DARPA investment, foundational technology accumulation, civilian transfer, industrial revolution.

The internet originated in 1969 as ARPANET, a Defense Department project to build a decentralized, resilient network (in the popular telling, one that could survive a Soviet nuclear strike). It became the World Wide Web, the dot-com boom, and today's internet economy.

GPS began in 1973 as NAVSTAR, a military positioning system. Its original purpose was to determine precise coordinates anywhere on earth using 24 satellites. After the full constellation came online in 1995 and the deliberate degradation of civilian signals ended in 2000, it became smartphone navigation. Uber and Airbnb were built on top of it. Uber's entire business model depends on GPS accuracy.

Voice recognition started with DARPA's soldier voice-command program in 1971 and became Siri and Alexa. Electronic Design Automation (EDA) tools likewise descended from defense research.

Deep learning and AI followed the same path. DARPA's Strategic Computing Initiative (SCI), launched in 1983, invested $850 million. Its targets were autonomous land vehicles, pilot-assist systems, and warship command AI. None of those direct objectives were achieved. But the initiative accumulated foundational research in parallel computing and neural networks. In 2025, the U.S. Department of Defense's official AI budget stood at $1.7 billion — and the classified budget not captured in that figure was far larger.


The DARPA model rests on three pillars.

First, it has no permanent research staff. Every program manager is an outside recruit, a university professor, private-lab researcher, or corporate engineer, serving a fixed term of roughly four years. When the term ends, they leave. This structure prevents politicization and bureaucratization. A person who will be gone in four years has no interest in climbing the internal ladder. They focus solely on the outcomes of their assigned program.

Second, DARPA runs a failure-portfolio strategy. It operates on a budget structure in which only one out of ten programs needs to succeed. Under its principle of "complete technology transfer," technologies developed by DARPA are commercialized by the private sector. The agency does not hoard its results. That principle is the reason the internet, GPS, and voice recognition all migrated to civilian use.

Third, DARPA practices openness of objectives. The Grand Challenge set a concrete goal ("complete 240 kilometers autonomously") but specified nothing about how to achieve it. Whether to use cameras, LiDAR, or any particular software was left entirely to the teams. In the process, the full array of technologies needed for autonomous driving advanced simultaneously.

This is precisely the point of contrast with China's national AI programs. China's New Generation Artificial Intelligence Development Plan (新一代人工智能发展规划) states its goals explicitly: global leadership in AI by 2030, with budgets allocated and targets assigned to each subfield. This top-down approach excels when objectives are clear. When the goal is well-defined — as in DeepSeek's mandate to train the most efficient model possible with constrained resources — it produces remarkable results.

But for discovering "things no one knew were needed" — the kind of discovery the Grand Challenge catalyzed — the approach has structural limits. When the first race ended in total failure, a system that evaluated programs by their immediate output metrics would not have funded a second race.

Both paths work. They simply excel at different things. The American path is discovery through deliberate failure. The Chinese path is goal-directed resource allocation. In the AI era, the two paths are generating fundamentally different kinds of innovation.


The most decisive moment in the lineage from DARPA's legacy to AI came in 2012, at the ImageNet competition.

At this annual image-recognition contest, a team led by University of Toronto professor Geoffrey Hinton submitted AlexNet, which slashed the image-recognition error rate from 26% to 15%. The 11-percentage-point gap over the previous best was a margin no researcher had anticipated. It was not an improvement. It was a rupture.

Google moved immediately. It acquired Hinton's startup, DNNresearch, for $44 million, bringing Hinton into Google Brain, the deep-learning unit Google had formed in 2011. From Google Brain, in 2017, came the paper "Attention Is All You Need" — the Transformer architecture. GPT, Claude, and Gemini are all direct descendants of that paper. The roots of virtually every large language model (LLM) in existence trace back to this single publication.

Lay out the lineage: DARPA's Strategic Computing Initiative (1983) led to foundational neural-network research, which led to the ImageNet competition (2012, AlexNet), which led to Google Brain, which led to the Transformer paper (2017), which led to OpenAI's GPT series, which led to ChatGPT (2022), which led to the global AI boom. This sequence was no accident. It is the product of a structure the American technology ecosystem refined over seventy years: military investment, foundational research, civilian commercialization.


Section C: Immigration as Oxygen — The H-1B Crisis and the Talent War

Consider a roster of AI's most pivotal figures.

Jensen Huang — CEO of NVIDIA. Born in Taiwan. Ilya Sutskever — co-founder of OpenAI. Born in Russia, raised in Israel. Andrew Ng — co-founder of Google Brain. Born in the United Kingdom, raised in Hong Kong and Singapore. Yann LeCun — Chief AI Scientist at Meta. Born in France. Demis Hassabis — CEO of Google DeepMind. Born in the United Kingdom to a Greek Cypriot father and a Singaporean mother. Geoffrey Hinton — the father of deep learning, 2024 Nobel Prize in Physics. Born in the United Kingdom, based in Canada.

Not one of the six was born in the United States.

The pattern is not anecdotal cherry-picking, nor a claim that immigrant individuals are inherently more talented. It is a structural fact. Approximately 70% of students enrolled in U.S. doctoral programs in AI and computer science are foreign-born (National Science Foundation data). Immigrants account for 55% of Silicon Valley startup founders (Kauffman Foundation). Between 2000 and 2010, immigrants constituted 25.6% of co-inventors on U.S. patents.

America's AI capability = domestic talent + talent drawn from the rest of the world. Remove the second term and Silicon Valley would not be Silicon Valley.


In September 2025, the Trump administration announced plans to raise H-1B visa fees to as much as $100,000. The existing fee structure ranged from $460 to $6,460. Against the minimum of $460, that represented an increase of more than 200-fold.

The stated rationale was "protecting American jobs." The actual mechanics work differently.

The H-1B visa operates under an annual quota: 65,000 general slots and an additional 20,000 for holders of advanced degrees. Demand exceeds the quota by orders of magnitude every year. Selection is by lottery. Nearly half of these visas concentrate in professional, scientific, and technical services: AI researchers, software engineers, data scientists.

At $100,000 per visa, the structure shifts. Google, Amazon, and Microsoft can absorb an additional $100,000 when hiring engineers earning $300,000 to $500,000. But a seed-stage AI startup cannot. When the visa fee equals nearly half of a single employee's salary, small-scale innovators are structurally disadvantaged in the talent market. The base of the Silicon Valley ecosystem narrows.


The paradox was visible even inside the Trump administration.

Elon Musk, then leading the Department of Government Efficiency (DOGE), publicly defended the H-1B program. Without companies like NVIDIA, he argued, America's standing in the AI era would not exist. His co-head Vivek Ramaswamy took the same position. The MAGA coalition, which campaigned on "America First," fractured internally over immigration policy. Silicon Valley's logic cut across political-party lines.

Companies reacted immediately, incorporating in Canada and the United Kingdom and shifting to remote-work arrangements. Governments abroad moved just as fast. Three fronts opened.


The first front was Canada.

In November 2025, the Canadian federal government approved a fast-track permanent-residency program for H-1B visa holders. Vancouver and Toronto had long earned the nickname "Silicon Valley North," and Google and Amazon both operate major AI research labs in the two cities. Trump's door-closing became Canada's windfall.

This signaled that the international talent competition had entered a new phase. In the past, the contest was about which country had the better universities. Now it had shifted to which country issues visas faster and cheaper.

The second front was India.

Google, Amazon, and Microsoft hired 33,000 people in India in 2025 alone, an 18% increase year over year. This was not simple labor-cost arbitrage. As hiring in the United States grew more difficult, companies moved to where the talent was.

Simultaneously, surveys showed that 30% to 40% of Indian students who studied in the United States now preferred to return home. Three factors drove this shift. First, the startup ecosystems of Bengaluru and Hyderabad had matured. Second, immigration uncertainty in the United States had grown. Third, India's rapid economic growth had expanded post-return opportunities. The question "Why endure precarious visa status in America?" was getting a different answer than before.

The third front was China.

On October 1, 2025, the Chinese government launched a dedicated visa for STEM talent: the K-visa. It offered five-year multiple-entry privileges, tax incentives, and housing-support packages. The target audience was singular: second- and third-generation ethnic Chinese living abroad — specifically, Chinese-origin AI researchers working in Silicon Valley.

China's K-visa was not merely a visa policy. It was a "reverse diaspora" strategy. Just as emigrants from Taiwan and Hong Kong built Silicon Valley in the 1970s and 1980s, China now sought to reverse the flow — to bring talent that had left China for America back to China. The K-visa's tax incentives were the economic expression of that pull.


The more the United States closes its doors, the faster talent drains through these three fronts. Talent behaves like water: block it in one place and it flows to another.

Consider a counterfactual. What if Jensen Huang's parents had chosen the United Kingdom instead of the United States in 1973? If Ilya Sutskever had gone from Israel to Canada? If Yann LeCun had stayed at France's CNRS? If Geoffrey Hinton had never left Toronto?

These counterfactuals are not idle speculation. They are a thought experiment about the future that a $100,000 H-1B fee would create. Today's policy shapes the talent landscape twenty years from now. Had the central figures of AI history never come to the United States, today's AI landscape would look different. The magnitude of that difference is the true long-run cost of immigration policy.

The short-term costs are visible: higher hiring expenses for startups, corporate relocations to Canada and India. The long-term costs are invisible: the next generation's Jensen Huang may be sitting in a public school in Toronto, not Oregon. America's own immigration policy eroding its AI supremacy — that is the worst outcome this policy could produce.


Section D: The University as Power Plant — From Stanford to OpenAI

1998. The Stanford computer science department. Two doctoral students used university servers to build a search-engine prototype. Larry Page and Sergey Brin. Their advisor introduced them to a VC. A company launched from a garage in Menlo Park. The company was Google.

The essence of this story is not that two geniuses met. Their advisor knew a VC. The VC agreed to a meeting the next day. The physical distance from the university server room to that first meeting was a twenty-minute drive. That geographic density is what makes Stanford not merely a university but a company factory.


Trace Stanford's lineage and you trace the history of Silicon Valley. Google (Larry Page and Sergey Brin, Stanford doctoral program), Yahoo (Jerry Yang, Stanford), Hewlett-Packard (William Hewlett and David Packard, Stanford engineering), Sun Microsystems (Stanford), Cisco (Stanford), and NVIDIA (Jensen Huang, Stanford master's in electrical engineering).

This lineage is no collection of coincidences. It began with the report Vannevar Bush drafted immediately after World War II: Science, the Endless Frontier. The model was straightforward: the government funds universities, universities conduct basic research, and that research transfers to private industry. Stanford's Frederick Terman implemented this principle on campus. He built an on-campus industrial park and encouraged graduates to start companies nearby. The "university-corporation-VC" triangle, designed in the 1950s and operating for seventy years, produced what is now Silicon Valley.


Follow the concrete flow of the AI research pipeline and the mechanics of that triangle become visible.

In 2012, Geoffrey Hinton's AlexNet ignited the deep-learning revolution. Google bought the University of Toronto spin-off behind it for $44 million and folded the team into Google Brain. In 2017, eight Google Brain researchers published "Attention Is All You Need." The Transformer architecture became the standard of AI research.

In 2018, building on the Transformer, OpenAI released GPT-1. GPT-3 followed in 2020; ChatGPT in 2022. In 2021, key researchers left OpenAI to found Anthropic. In 2023, Anthropic launched Claude. By 2025, Anthropic's ARR stood at $14 billion, with a 2026 target of $26 billion.

Every branch connects to a single root. DARPA led to university research, which led to Google's acquisition, which led to OpenAI, which led to Anthropic, which led to AI services worldwide. This lineage flows through the power plant called Stanford.


Stanford's Human-Centered AI Institute (HAI) operates on an annual budget of roughly $100 million and a faculty of more than 200. Its AI Index Report has become the benchmark dataset for measuring the state of global AI research, tracking annual counts of papers, investment, and talent distribution.

MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) is a hub for deep reinforcement learning and robotics. iRobot and Dropbox emerged from it. UC Berkeley's BAIR (Berkeley AI Research Lab) anchors open-source LLM research; Databricks and Cohere were created by its researchers. Carnegie Mellon University's (CMU) School of Computer Science laid the foundations of natural language processing (NLP) and speech recognition. Scale AI and Duolingo belong to the CMU lineage.


China is attempting to replicate this model.

Tsinghua University (清华大学), China's premier engineering school, has been scaling its AI research rapidly. Together with Peking University (北京大学), it receives concentrated government investment. Through the "Double First-Class" initiative (一流大学, 一流学科), the Chinese government has sought to elevate over 100 universities and departments to world-class standing. A significant share of the researchers behind DeepSeek are graduates of China's top-tier universities.

Two differences, however, still sustain the gap.

One is the spin-off ecosystem. In Silicon Valley, the cycle in which university researchers launch startups and raise VC funding to scale is natural. China has this cycle too. But the dependence on government funding is higher, and the gravitational pull of Baidu, Alibaba, Tencent, and Huawei — which dominate the platform economy — means that joining a large corporation is often more attractive than independent entrepreneurship. Additionally, the Chinese government's regulatory crackdown on Big Tech from 2021 to 2023 introduced uncertainty into the private AI investment environment.

The other is academic openness. Chinese researchers are submitting to and being accepted at the de facto standard venues of global AI research (NeurIPS, ICML, ICLR) in growing numbers. But research produced inside China still reaches the outside world more slowly and less completely. The issue is not national capability but a structural difference in the information environment.

That the gap is narrowing is a fact. DeepSeek's emergence is the proof. But the direction and pace of that narrowing remain questions that demand continued observation.


Link to Volume 1: The Geography of Innovation — British Factory Clusters and Silicon Valley


What we saw in Volume 1: The innovation ecosystem of Industrial Revolution-era Britain formed around Manchester and Birmingham. Factories clustered along the River Derwent, harnessing its waterpower. The Royal Society and the Lunar Society of Birmingham served as knowledge networks. The Manchester Bank (1771) supplied industrial capital, and the alliance of factory owners and bankers accelerated innovation. Richard Arkwright was not the man who invented the spinning frame. He was the man who designed the system — the factory as a new institutional form — that connected spinning machinery, waterpower, labor discipline, and capital.


In eighteenth-century Britain, textile factories concentrated in Lancashire and Derbyshire for two reasons: the waterpower to run the mills was there, and so were the labor and capital to fill them. In the twenty-first century, AI companies concentrate in the 50-kilometer radius from San Francisco to Palo Alto for the same logic: the capital and talent to build AI are there.

The content of the resources has changed. VC capital and DARPA funding instead of waterpower and coal. GPUs instead of spinning frames. The annual proceedings of NeurIPS and ICML instead of the Lunar Society's monthly knowledge exchanges. But the law that innovation requires geographic density has not changed across centuries.

Map the correspondence more precisely and a structural isomorphism emerges.

1760-1830 Britain | 2010-2026 Silicon Valley
Manchester-Derbyshire factory cluster | San Francisco-Palo Alto 50 km tech cluster
Royal Society + Lunar Society | Stanford HAI + NeurIPS/ICML
Arkwright's spinning-frame patent monopoly | NVIDIA's CUDA software ecosystem lock-in
Factory owner-banker alliance (Manchester Bank) | VC-founder alliance (Sequoia-Google, a16z-Coinbase)
Huguenot weaver immigrants | Doctoral immigrants from China, India, Russia, France
Machine export ban (1774-1843) | Semiconductor export controls (EAR, Export Administration Regulations)
Post-patent-expiry diffusion of the spinning frame | Open-source diffusion of LLMs after the Transformer

This correspondence does not argue that history repeats. It argues that the conditions under which innovation ecosystems function are similar across eras. Capital, knowledge networks, technological monopoly, immigrant talent: when all four conditions are met simultaneously, an epicenter of innovation forms. And when any one of them falters, the epicenter shakes.


There is, however, one decisive difference.

The levers of the British Industrial Revolution were physical. Cotton had to be shipped from India by sea. Coal had to be mined from pits. Factories had to be built beside rivers with flowing water. Transferring a skilled weaver's craft took twenty years. Innovation faced physical limits on its speed of diffusion. Britain's statutory ban on machine exports from 1774 to 1843 extended its technological supremacy precisely because those limits held.

Silicon Valley's levers are intangible. The marginal cost of copying an algorithm is zero. DeepSeek proved it. When DeepSeek-R1 was released in January 2025, Silicon Valley paused. The model had been built with 2,048 H800 GPUs — less than one-tenth the compute used by American Big Tech AI labs — yet matched them on benchmarks. DeepSeek had absorbed American open-source research — the Transformer paper, the LLaMA models, various techniques — added its own training optimizations, and released the result. The code crossed borders over the internet.

This is why the AI revolution propagates far faster than the Industrial Revolution, and simultaneously why maintaining technological superiority is harder for the United States. Britain could block machines. The United States cannot easily block algorithms.

Set Arkwright and Jensen Huang side by side and a common thread appears. Neither built a machine. Both designed the ecosystem that ran on top of the machine. What Arkwright truly built was Cromford Mill — a factory system integrating waterpower, shift work, quality control, and apprentice training. What Jensen Huang truly built was not the GPU but CUDA (Compute Unified Device Architecture). Released in 2006, this software platform transformed the GPU into a general-purpose parallel-computing device. PyTorch and TensorFlow run on CUDA. Virtually every AI model in the world trains through CUDA. Switching to AMD GPUs would require rewriting tens of millions of lines of optimized code. That switching cost is the real moat behind NVIDIA's 92% share of the GPU market.

If Arkwright multiplied the speed of spinning yarn by orders of magnitude, Jensen Huang multiplied the speed of thinking by orders of magnitude. What they share is that neither built the machine itself — both designed the system surrounding the machine.

Volume 1's core formula (technological innovation, capital concentration, social instability, institutional redesign) is operating once more in the AI era. The Silicon Valley ecosystem generates AI technological innovation, and that technology concentrates capital. Anthropic at $380 billion, OpenAI heading toward $500 billion, NVIDIA's trailing twelve-month revenue at $187.1 billion. That capital concentration has begun to generate social instability, as the chapters ahead will show. And the pressure for institutional redesign is building: AI regulatory legislation in individual U.S. states, federal-level debates. In Britain, sixty-four years passed before the Factory Acts were enacted; the institutional redesign of the AI era may arrive faster, or it may arrive later.


Pressure Points of the Ecosystem: Where the Four Pillars Are Shaking

That an ecosystem is strong does not mean it is eternal. Each of Silicon Valley's four pillars is now under pressure.

The pressure on the VC capital pillar is overheating.

Anthropic's ARR has compounded at triple-digit rates for three consecutive years. $1 billion at the end of 2024, a $7 billion run rate by mid-2025, $14 billion by February 2026. The 2026 target is $26 billion. This growth rate is genuinely extraordinary. But measured against a $380 billion valuation, even hitting $26 billion ARR in 2026 yields a price-to-sales (P/S) ratio of 15x, more than double the historically sustainable multiple for a SaaS company.
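The multiple above can be verified with back-of-the-envelope arithmetic. A minimal sketch, using only the figures quoted in this section (illustrative, not a valuation model):

```python
# Check the price-to-sales multiple quoted in the text.
# All figures are the ones cited above, in $ billions.
valuation_bn = 380.0        # Anthropic valuation
arr_2026_target_bn = 26.0   # 2026 ARR target

ps_ratio = valuation_bn / arr_2026_target_bn
print(f"P/S even at the 2026 target: {ps_ratio:.1f}x")  # 380 / 26 ≈ 14.6x, roughly 15x
```

Even granting the most optimistic revenue scenario, the multiple stays near 15x, which is the basis for the "more than double a sustainable SaaS multiple" claim.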

The debate over whether AI is a bubble remains unresolved. For a bubble to burst, revenues must fail to justify the investment. The bet investors are placing now is that "AI is a larger transition than the internet, and the companies holding the core platforms of that transition will grow far beyond their current valuations." If that bet is right, this is not a bubble. If it is wrong, it is a replay of the 2000 dot-com crash. From where we stand today, neither outcome can be known.

The pressure on the DARPA legacy pillar is the pace of privatization.

DARPA's annual budget is approximately $4 billion. A single OpenAI funding round exceeds that. The Department of Defense's official AI budget of $1.7 billion amounts to 0.8% of private-sector AI investment of $211 billion. The relative weight of government-led foundational technology development is shrinking rapidly.

This matters because of incentive structures. Private capital favors research that can be monetized within eighteen months. The kind of basic research DARPA once funded, research whose payoff "might materialize in ten years, or might not," does not attract private capital. The foundations of the internet, GPS, and deep learning all emerged from government-funded research that was "impossible to monetize." That dynamic is shifting.

The pressure on the immigrant talent pillar has been examined in detail above: the paradox of the H-1B fee hike and the talent drain across three fronts.

The pressure on the university pipeline pillar is the inversion of the academia-industry relationship.

As AI companies recruit university researchers en masse, the pool of basic-research talent at universities is shrinking. As of 2023, over 70% of authors on top U.S. AI papers were affiliated with private companies. The traditional sequence — universities conduct basic research, companies apply it — has begun to reverse. OpenAI and Google DeepMind now conduct basic research directly. In the short term, the private sector's growing capacity for basic research is efficient. In the long term, it risks widening the gap between "profitable basic research" and "unprofitable but necessary basic research."


One more point, for Korean readers.

The Seoul metropolitan area has AI research talent, investment capital, and university labs, all in close proximity. The physical distances are short. But the gap with Silicon Valley is not geographic. The difference between a Stanford graduate walking into a VC meeting and a KAIST researcher meeting an investor is not a matter of distance. It is a matter of the cost of failure, the maturity of equity-based compensation structures, and the predictability of the regulatory environment.

Silicon Valley was built over seventy years. It cannot be cloned overnight. Nor does a perfect clone need to be the goal. Samsung Electronics supplies HBM (High Bandwidth Memory) to NVIDIA; SK hynix supplies memory for AI accelerators. These represent a strategy for capturing value in the AI era through a different path than Silicon Valley's design layer. The distinction between "design layer and execution layer" introduced in Volume 1 applies at the national level as well.


Transition: Where Capital Goes Next

Silicon Valley is the convergence point of four pillars: VC capital, the DARPA legacy, immigrant talent, and the university pipeline. These four pillars were laid down in layers over seventy years, and pressure on any one of them shakes the whole. The H-1B crisis presses the third pillar; the overheating of private capital has begun to rattle the first.

Yet at this moment, the ecosystem still functions. And the flow of capital it generates is moving toward its next destination.

Capital's address is converging on a single point: GPUs. And the company that makes GPUs is, for all practical purposes, one: NVIDIA. The company built by the boy who arrived in America at age nine, unable to speak a word of English.

Silicon Valley generates capital; that capital buys GPUs; GPUs build AI; that AI generates more capital. This cycle interlocks with the global circulatory system called the dollar. GPUs are transacted in dollars. The entire AI industry runs on a single blood type: the dollar.

The structure and vulnerabilities of that cycle are what the next chapter will dissect.


Principal sources: The Letter Two (2026.02.07), Venture Capital Journal (Crunchbase), Fortune (2025.09.22), Rest of World (2026), National Science Foundation, Kauffman Foundation, Anthropic official announcement (2026.02), CNBC (2025.10.31, 2026.02.06), East Bay Times (2025.11.23), Bulletin of Atomic Scientists (2025.10)