Vol. 2 — The Algorithm of Two Empires

Chapter 4: Dollars and GPUs — The Dual Structure of Financial and Technological Hegemony


Opening: The Blood Type of the AI Age

It was early morning on Thursday, November 20, 2025.

When NVIDIA's quarterly earnings conference call began, Wall Street analysts closed their spreadsheets. The numbers spoke for themselves. Data center revenue: $51.2 billion. A new quarterly record. Year-over-year growth of 66%. Jensen Huang appeared on screen in his trademark leather jacket and read the figures in a measured voice.

Some did the math one way. In 2020, NVIDIA's total annual revenue was $10.6 billion. Five years later, a single quarter's revenue had grown to five times that figure. The company had expanded twentyfold in half a decade.

Others did the math differently. At $51.2 billion per quarter, that works out to $560 million per day. Broken down by the hour: $23.3 million. Per minute: $380,000. Per second: $6,400. NVIDIA was earning $6,400 every second.
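For readers who want to verify the unit economics, the division is straightforward. A 92-day quarter is assumed below; the book's figures are rounded.

```python
# Break quarterly revenue down into smaller time units.
# Assumption: a 92-day quarter; the chapter's figures are rounded.
QUARTER_REVENUE = 51.2e9   # dollars, NVIDIA data center revenue, Q3 FY2026
DAYS_PER_QUARTER = 92

per_day = QUARTER_REVENUE / DAYS_PER_QUARTER
per_hour = per_day / 24
per_minute = per_hour / 60
per_second = per_minute / 60

print(f"per day:    ${per_day / 1e6:,.0f}M")     # ~ $557M
print(f"per hour:   ${per_hour / 1e6:,.1f}M")    # ~ $23.2M
print(f"per minute: ${per_minute / 1e3:,.0f}K")  # ~ $386K
print(f"per second: ${per_second:,.0f}")         # ~ $6,441
```

The chapter's rounded figures ($560 million per day, $6,400 per second) are consistent with this arithmetic.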

A few hours after the conference call ended, another contract was signed in Riyadh, Saudi Arabia. HUMAIN, a subsidiary of the Public Investment Fund (PIF), finalized an agreement with NVIDIA to build a 500-megawatt AI factory. The price was not disclosed, but reverse-engineering the unit cost of a GPU cluster at that scale puts the figure in the billions of dollars.
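The "reverse engineering" can be sketched in a few lines. Every parameter below is an illustrative assumption, not a disclosed figure; the point is only that plausible inputs land the total in the billions.

```python
# Rough sizing of a 500 MW AI factory. All parameters are illustrative
# assumptions (power per accelerator including server, networking, and
# cooling overhead; mid-range Blackwell-class unit price).
FACILITY_MW = 500
KW_PER_GPU_ALL_IN = 1.5    # assumed kW per GPU, all overhead included
GPU_UNIT_COST = 35_000     # assumed dollars per accelerator

gpus = FACILITY_MW * 1_000 / KW_PER_GPU_ALL_IN  # kW available / kW per GPU
gpu_capex = gpus * GPU_UNIT_COST                # accelerator hardware alone

print(f"{gpus:,.0f} GPUs, ~${gpu_capex / 1e9:.1f}B in accelerators alone")
```

Under these assumptions the accelerators alone exceed $10 billion, before buildings, power infrastructure, or networking; hence "in the billions of dollars."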

The contract was denominated in dollars.

HUMAIN is Saudi Arabia's dedicated AI agency, launched in May 2025. It operates and invests across the entire AI value chain under PIF. A 500-megawatt AI factory. Saudi Arabia's target is for AI to contribute 12% of GDP by 2030. A nation preparing for life after oil is buying GPUs. And it is paying for those GPUs in dollars. Saudi petrodollars are now cycling into GPU-dollars.

No coincidence. GPUs trade in dollars. A single Blackwell B200 GPU installed in an NVIDIA data center server costs $30,000 to $40,000; a full rack system runs into the millions. Rent cloud services and the invoice arrives in dollars. AWS, Azure, GCP: all billed in dollars. AI startup valuations are denominated in dollars. Anthropic at $380 billion. OpenAI at $730 billion. Investment rounds in dollars, exits in dollars, equity in dollars.

The blood type of the AI age is the dollar.

Within this simple fact lies the core of twenty-first-century hegemonic structure. Throughout history, true hegemonic powers possessed two things simultaneously: the money the world uses, and the machine the world wants. In the nineteenth century, Britain had the pound and the steam engine. In the twenty-first, America has the dollar and the GPU. The two reinforce each other: the more GPUs are sold, the greater the demand for dollars; the greater the demand for dollars, the more capital there is to buy GPUs.

That leaves a single question: How durable is this dual structure, and where are its fracture lines?

This chapter dissects NVIDIA's GPU monopoly, tracks the astronomical AI investments of the Big Four tech companies, and analyzes the new significance that reserve-currency status acquires in the AI age. The double helix in which financial hegemony and technological hegemony interlock and reinforce each other, and what happened when Britain lost that structure, is the subject of what follows.


Section A: NVIDIA — The Arkwright of the AI Age

It's Not the Hardware. It's the Ecosystem.

In June 2006, NVIDIA quietly released a piece of software. It was called CUDA — Compute Unified Device Architecture. Its purpose was to make GPUs usable not just for graphics rendering but for general-purpose parallel computation. Almost no one predicted that this would change the world two decades later.

At the time, NVIDIA's competitors focused on chip performance. NVIDIA made chips too. But NVIDIA also built an ecosystem. Researchers began writing code in CUDA. The AI frameworks PyTorch and TensorFlow were designed to run on top of CUDA. The deep learning library cuDNN, the linear algebra accelerator cuBLAS, the multi-GPU communication library NCCL: all of them sat on CUDA.

Millions of researchers and engineers wrote code in CUDA. Hundreds of thousands of research projects and thousands of commercial AI models were built with CUDA optimization as a given. The result is switching cost. To migrate to an AMD GPU, you would have to rewrite all of that code, tens of millions of lines of optimized software. In practical terms, it is close to impossible.

That is the real reason NVIDIA holds 92% of the GPU market.

As of the first half of 2025, NVIDIA's GPU market share stands at 92%. Whichever measure you choose (AI chips, AI data center revenue, or otherwise), the figure falls between 85% and 92%. It is a monopoly.

The full-year results for FY2026, reported in February 2026, reveal the scale of that monopoly. Annual revenue of $215.9 billion, up 65% year over year. Samsung Electronics posted total revenue of $240 billion in 2025. NVIDIA is approaching that figure on AI alone. And confirmed backlog already stands at $320 billion. Future revenue exceeds current revenue.

Data Note: NVIDIA FY2026 Key Metrics

Data center quarterly revenue: $51.2B (Q3, YoY +66%)
Trailing twelve-month revenue: $187.1B (YoY +65.2%)
Cumulative Blackwell revenue: approximately $180B
Q4 revenue: $68.1B (all-time record, YoY +73%)
Q1 FY2027 guidance: $78B (consensus: $72.6B)
Jensen Huang's outlook: visible Blackwell + Rubin revenue of $500B through end of 2026

The stock told a different story. In late February 2026, NVIDIA shares traded at $193, some 30% below the all-time high of $280. Revenue was at a record, yet the stock had pulled back from its peak. Markets price the future. The gap reflected two questions: Can the growth be sustained? And will competition erode the ecosystem?

CUDA Is the Windows of the AI Age

"CUDA is the Windows of the AI age."

The analogy holds because the structure is identical. In the 1990s, Microsoft Windows dominated the PC operating system market. Every company's software had to run on Windows. When Windows changed, the software had to change. That dependency was Microsoft's moat.

Today, AI models run on CUDA. The GPU cluster that trained OpenAI's GPT-4, the servers that trained Anthropic's Claude, Google DeepMind's Gemini: all of them use the NVIDIA GPU + CUDA stack. That dependency is NVIDIA's moat.

But the analogy doesn't stop there. Microsoft eventually faced challenges to its Windows monopoly. Linux displaced Windows in the server market, and Android became the new standard in mobile. NVIDIA confronts the same kind of challenge.

AMD's MI300X approaches the H100 in raw chip performance. Microsoft Azure uses AMD GPUs for certain workloads. Meta is testing AMD. Yet AMD's software ecosystem, ROCm, is markedly thinner than CUDA. "The chips are comparable, but the software falls short." That single sentence summarizes the current competitive landscape.

Google chose a different path. It designed its own AI chip, the TPU (Tensor Processing Unit), for internal workloads. The seventh-generation TPU, "Ironwood," is reducing the cost of training Gemini. But TPUs are used only inside Google. They are not sold externally. This is not a strategy to replace the CUDA ecosystem; it is a strategy to reduce internal NVIDIA dependence.

Amazon, too, builds its own AI chips — Trainium for training, Inferentia for inference. AWS is shifting some internal workloads onto these chips. Here again, the goal is cost reduction, not NVIDIA replacement.

The frontal assault on NVIDIA is coming from China.

Huawei Ascend — The Emergence of a Rival Ecosystem

Huawei's Ascend 910C was designed as a domestic substitute for the NVIDIA H100 in the Chinese market. It is manufactured on SMIC's 7-nanometer process. Yield rates hover around 40%. TSMC produces NVIDIA's 3- to 4-nanometer chips at yields exceeding 90%. That yield gap translates directly into production cost: you need more wafers to produce the same number of chips.
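The cost arithmetic of yield is simple: wafer starts scale with the inverse of the yield rate. The dies-per-wafer figure below is an illustrative assumption, not published data.

```python
import math

# Wafer starts required to hit a target count of good dies.
# Assumption: 60 candidate dies per wafer for a large AI accelerator.
def wafers_needed(target_dies: int, dies_per_wafer: int, yield_rate: float) -> int:
    return math.ceil(target_dies / (dies_per_wafer * yield_rate))

DIES_PER_WAFER = 60  # illustrative assumption

smic = wafers_needed(600_000, DIES_PER_WAFER, 0.40)  # Ascend 910C at ~40% yield
tsmc = wafers_needed(600_000, DIES_PER_WAFER, 0.90)  # NVIDIA-class at >90% yield

print(smic, tsmc, round(smic / tsmc, 2))  # 25000 11112 2.25
```

At these yields, the same 600,000-chip target costs SMIC roughly 2.25 times as many wafer starts as TSMC, which is what "the yield gap translates directly into production cost" means in practice.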

In 2025, Huawei set a target of producing 600,000 Ascend 910C units. For 2026, the plan calls for scaling up to 1.6 million dies. More than one million domestically produced AI accelerators, from Huawei and Cambricon (寒武纪) combined, are projected to enter the Chinese market in 2026.

The Chinese government is effectively mandating Ascend adoption through public procurement and state-owned enterprises. Baidu, Alibaba, and Tencent are transitioning from NVIDIA to Ascend under government pressure.

A core problem remains. Ascend has no CUDA ecosystem. When Chinese AI companies switch from NVIDIA to Ascend, they must rewrite their software from scratch. Porting a model that DeepSeek trained on 20,000 NVIDIA GPUs over to Ascend requires optimization work at massive scale. In practice, DeepSeek V4 is developing separate versions optimized for Huawei and Cambricon chips.

This is the point at which global AI infrastructure bifurcates. The NVIDIA/CUDA ecosystem on one side, the Ascend/domestic ecosystem on the other. The more aggressively the United States tightens export controls, the more intensively China refines its own ecosystem. The two ecosystems are incompatible. Moving a model developed on one to the other demands substantial additional work.

DeepSeek's case matters here.

In January 2025, DeepSeek released its V3 and R1 models. The GPU compute cost of training was approximately $6 million, less than one-tenth of what Meta spends on GPU hours to train models of comparable performance. On benchmarks, the models matched or closely approached frontier-level results.

The implications are twofold.

For NVIDIA, it is a threat — evidence that the number of GPUs required for AI training can be reduced. While America's Big Tech pours $635 billion to $665 billion into AI, a Chinese company builds a competitive model for $6 million. If this trajectory of efficiency continues, the marginal cost of AI training drops sharply, and questions arise about NVIDIA's future growth.

NVIDIA, however, makes a different argument. Jensen Huang invoked the Jevons Paradox. When efficiency improves, demand increases. Just as coal consumption rose rather than fell when the steam engine became more efficient, lower AI training costs will lead to more models being trained — and ultimately more GPUs being needed.
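The Jevons argument can be stated as a one-line demand model. The elasticity value below is purely illustrative; the point is only the direction of the effect when demand is elastic.

```python
# Constant-elasticity demand: training runs scale as (cost/base)^-elasticity.
# If elasticity > 1, a price drop raises total spend (the Jevons direction).
def total_spend(cost_per_run: float, base_cost: float, base_runs: float,
                elasticity: float) -> float:
    runs = base_runs * (cost_per_run / base_cost) ** (-elasticity)
    return cost_per_run * runs

# Illustrative numbers only: 1,000 runs at a base cost of 100 per run.
base = total_spend(100.0, 100.0, 1_000, elasticity=1.5)     # 100,000
cheaper = total_spend(10.0, 100.0, 1_000, elasticity=1.5)   # cost falls 10x

print(cheaper > base)  # True: elastic demand means more total GPU spend
```

Under these assumptions a tenfold cost reduction triples total spending; with inelastic demand (elasticity below 1), the same cost drop would shrink it. Which regime AI training occupies is exactly what remains to be seen.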

Which logic proves correct remains to be seen. Meanwhile, as of March 2026, NVIDIA's confirmed backlog stands at $320 billion. Those are orders already placed.


Section B: The $400 Billion Bet — Anatomy of Big Tech AI CapEx

Four Companies, $650 Billion in a Single Year

On February 6, 2026, CNBC published an article. The headline: "Google, Microsoft, Meta, Amazon — How They're Spending on AI."

The numbers were as follows.

Amazon: $200 billion. Alphabet (Google): $175 billion to $185 billion. Microsoft: $145 billion. Meta: $115 billion to $135 billion. Combined: $635 billion to $665 billion.

China's annual defense budget for 2026 is $225 billion. The AI spending of these four private companies nearly triples the defense budget of one of the world's largest military powers. South Korea's 2025 GDP was $1.78 trillion. A single year of this investment amounts to 37% of the Korean economy.

The combined total for 2025 was $400 billion. In one year, the figure grew by 60% to 66%. Is this the velocity of a bubble, or the velocity of a genuine transition?
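The comparative ratios above can be checked directly from the chapter's own figures.

```python
# The chapter's comparative ratios, recomputed. All inputs are the
# chapter's own figures, in dollars.
BIG4_CAPEX_2026 = 650e9     # midpoint of the $635B-$665B range
CHINA_DEFENSE_2026 = 225e9
KOREA_GDP_2025 = 1.78e12
BIG4_CAPEX_2025 = 400e9

print(round(BIG4_CAPEX_2026 / CHINA_DEFENSE_2026, 1))  # 2.9: "nearly triples"
print(round(BIG4_CAPEX_2026 / KOREA_GDP_2025 * 100))   # 37: percent of Korean GDP
print(round((635e9 - BIG4_CAPEX_2025) / BIG4_CAPEX_2025, 4),
      round((665e9 - BIG4_CAPEX_2025) / BIG4_CAPEX_2025, 4))  # 0.5875 0.6625
```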

Amazon: Even If Free Cash Flow Goes Negative

Amazon's 2026 CapEx of $200 billion is the largest of the four.

CEO Andy Jassy's logic is straightforward. "If we don't invest now, we fall behind." AWS is the leader in cloud infrastructure. AI demand is surging, and AWS revenue is surging with it. But meeting the demand for AI training requires building more data centers. Data centers take time to build. Fail to break ground now, and when demand materializes two years from now, there will be no supply.

Amazon is also developing its own AI chips (Trainium for training, Inferentia for inference) to reduce dependence on NVIDIA. But the core of the investment remains data centers to house NVIDIA GPUs.

The issue is financial. CNBC's analysis suggests Amazon's 2026 free cash flow (FCF) could turn negative, the company spending more than it earns. In the short term, this is sustainable; Amazon has borrowing capacity. But if this investment fails to convert into revenue, it becomes the AI age's South Sea Bubble.

Google: Rebuilding from the Foundation

Alphabet's 2026 CapEx runs between $175 billion and $185 billion.

Google occupies a peculiar position. It competes for the lead in AI models themselves, yet has been criticized for lagging in AI monetization. Fear spread that ChatGPT would threaten Google's search business. Gemini was the counterattack, but early errors and controversies piled up.

Google's strategy is to differentiate through infrastructure. It develops the seventh-generation TPU, Ironwood, in-house for internal workloads, reducing NVIDIA dependence while improving cost efficiency. Simultaneously, it aims to expand Google Cloud's market share. Google ranks third in cloud behind AWS and Azure, but the goal is to close the gap by differentiating on AI training services.

Alphabet's FCF remains positive. Google's advertising business still generates enormous cash. That cash funds the AI investment. Google is pouring ad revenue into AI.

Microsoft: Leverage Built on OpenAI

Microsoft's 2026 CapEx (annualized for FY2026) is $145 billion.

Microsoft has invested in OpenAI since 2019. Total investment exceeds $13 billion. In return, Azure became OpenAI's exclusive cloud infrastructure. The servers running ChatGPT sit in Microsoft data centers. The GPU cluster that trained GPT-4 was built by Microsoft.

The structure of the partnership is simple. The more OpenAI grows, the more Azure revenue increases. Microsoft profits not from the AI model itself but from the infrastructure the model runs on. Copilot, the AI assistant embedded across Windows, Office, and GitHub, runs on Azure. Microsoft's AI strategy, in a word, is "infrastructure leverage."

Meta: Using Open Source to Break Monopolies

Meta's 2026 CapEx ranges from $115 billion to $135 billion.

Meta's strategy is distinctive. It does not sell AI models directly. Instead, it releases the LLaMA series as open source. The banner reads "AI democratization," but the actual purpose is different.

When LLaMA is released as open source, developers build products on top of it. That drives more NVIDIA GPU sales and more AWS and Azure cloud usage. But this does not hurt Meta — it weakens the monopoly of its competitors, OpenAI and Google. When OpenAI sells GPT as a paid product, Meta gives away a model of comparable performance for free. OpenAI's subscription revenue base is undermined.

What Meta wants is not dominance of the AI model market. It wants AI woven into the advertising algorithms of Instagram, Facebook, and WhatsApp, boosting ad efficiency. The model itself is a means to that end.

Meta's FCF outlook is dire. According to Barclays, sustaining this level of CapEx could reduce FCF by 90%. Mark Zuckerberg's bet is enormous.

Where Does the Money Go?

Of the $635 billion to $665 billion, roughly 75%, on the order of $480 billion to $500 billion, flows directly into AI infrastructure. The breakdown is as follows.

GPU server purchases account for 40%. NVIDIA H100, H200, and Blackwell B200 clusters. Data center construction represents 25% (power, cooling infrastructure, and land acquisition). Networking equipment accounts for 10% (InfiniBand and high-speed Ethernet connections). The remaining 25% covers storage, software, and personnel.
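Stated as arithmetic, with a check that the shares sum to 100%. The dollar figures assume the $650 billion midpoint of the range.

```python
# The spending breakdown as stated, with a consistency check.
# Assumption: $650B midpoint of the $635B-$665B combined range.
TOTAL = 650e9
shares = {
    "GPU servers": 0.40,
    "data center construction": 0.25,
    "networking": 0.10,
    "storage, software, personnel": 0.25,
}
assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares must sum to 100%

for item, share in shares.items():
    print(f"{item}: ${share * TOTAL / 1e9:.1f}B")

# Direct AI infrastructure = everything except the residual category.
infra = sum(v for k, v in shares.items() if k != "storage, software, personnel")
print(f"direct AI infrastructure: {infra:.0%}")  # 75%
```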

The biggest bottleneck is power.

A single hyperscale data center complex consumes the electricity of a small city. According to the IEA (International Energy Agency), data center power consumption in 2026 will reach approximately 800 TWh in the baseline scenario and exceed 1,000 TWh in the upper-bound scenario, roughly equivalent to Japan's total electricity consumption. By 2030, this demand is projected to increase 160% from 2023 levels.

FERC (Federal Energy Regulatory Commission) estimates that U.S. data center power capacity will grow from 19 GW in 2023 to 35 GW by 2030. This is what drives the current wave of new nuclear plant proposals, small modular reactor (SMR) investments, and natural gas plant construction.
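Converting annual energy totals (TWh) into average continuous power (GW) puts the IEA and FERC figures on a common axis: the global IEA scenarios imply an average draw far above the U.S.-only FERC capacity numbers.

```python
# Convert annual energy (TWh per year) to the average continuous power
# draw (GW) it implies.
HOURS_PER_YEAR = 8_760

def avg_gw(twh_per_year: float) -> float:
    return twh_per_year * 1_000 / HOURS_PER_YEAR  # TWh -> GWh, then / hours

print(round(avg_gw(800), 1))    # 91.3 GW: IEA 2026 baseline, global
print(round(avg_gw(1000), 1))   # 114.2 GW: IEA upper-bound scenario
```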

Consider a rural town in Oregon. When a major data center is announced, the local government welcomes the tax revenue. But resentment surfaces among residents. "Our electricity bills go up, while the AI produced in that facility eliminates our jobs." The benefits and costs of the investment do not distribute evenly within the same community. This asymmetry is the political fracture line of AI investment.


Section C: The Dollar as Blood Type — Reserve Currency and AI Hegemony

The Weight of 56%

As of Q2 2025, the U.S. dollar accounts for 56.32% of global official foreign exchange reserves.

Some read this as "evidence of dollar hegemony in decline." In 2001, the dollar's share was 72%. In twenty-four years, it dropped 16 percentage points. If the trend continues, the argument runs, the dollar will eventually lose its reserve-currency status.

The IMF's analysis tells a different story. Adjusting for exchange rate effects, the real decline in the dollar's share amounts to just 0.12 percentage points. When the dollar is strong, dollar-denominated assets rise in relative value and the share appears inflated; when the dollar weakens, the share appears deflated. The 72% figure in 2001 came at the peak of the dot-com bubble and a period of extreme dollar strength. Present the figures without context, and they generate misunderstanding.
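The valuation effect the IMF adjusts for can be shown with a toy portfolio: identical holdings, revalued at two different EUR/USD rates, produce different "shares" with no reallocation at all. The quantities below are invented for illustration.

```python
# Dollar share of reserves when non-USD assets are marked to market.
# Portfolio quantities are invented; only the mechanism is the point.
def usd_share(usd_holdings: float, eur_holdings: float, eur_usd: float) -> float:
    total = usd_holdings + eur_holdings * eur_usd
    return usd_holdings / total

# The same portfolio: $6T in dollars, EUR 3.5T in euros.
strong_dollar = usd_share(6e12, 3.5e12, eur_usd=1.00)
weak_dollar = usd_share(6e12, 3.5e12, eur_usd=1.25)

print(f"{strong_dollar:.1%} vs {weak_dollar:.1%}")  # 63.2% vs 57.8%
```

The dollar's measured share moves by more than five points here without a single asset changing hands, which is why the headline COFER series overstates swings in actual reserve allocation.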

The most frequently cited alternative to the dollar, the Chinese yuan (CNY), holds 2% of global foreign exchange reserves. For the currency of the world's second-largest economy, this is strikingly low. The euro sits at 20%, the yen at 6%, the pound at 5%. The internationalization of the yuan is still in its infancy.

"The decline of the dollar" is a narrative whose rhetoric outpaces reality.

The Four Advantages of Dollar Hegemony

Reserve-currency status provides four advantages in the AI age.

First, a structural edge in the cost of capital. U.S. Treasuries are regarded as the world's safest asset. Because global investors demand dollar-denominated assets, America's borrowing costs remain low. That low cost of capital is what makes it possible for the Big Four to spend $635 billion to $665 billion in 2026. For Chinese companies, raising the same amount of capital at the same cost is structurally difficult.

Second, AI transaction infrastructure itself runs on the dollar. GPUs trade in dollars. NVIDIA stock is dollar-denominated. AWS, Azure, and GCP bill in dollars. Of the $211 billion in global AI venture capital investment, 60% originated in the Bay Area and was deployed in dollars. Anthropic's $380 billion valuation, OpenAI's $730 billion valuation, both denominated in dollars. As the AI industry grows, dollar demand increases: a self-reinforcing loop.

Third, the dollar system functions as a sanctions weapon. The SWIFT network, the Export Administration Regulations (EAR): these operate within the dollar system. NVIDIA cannot sell H100s to China not because of chip performance but because of American control exercised through the dollar system. To buy NVIDIA GPUs, Chinese companies must use dollars, and dollar-system transactions are subject to U.S. sanctions.

Fourth, non-dollar AI companies face structural disadvantages. Europe's Mistral, South Korea's Naver HyperCLOVA X, India's AI startups: all must have their valuations set in dollars and receive investment in dollars. This process introduces currency risk and the cost of integrating into the dollar credit system.

The Dollar's Achilles Heel

An honest analysis must also specify the vulnerabilities.

U.S. national debt exceeded $36 trillion as of 2025 (125% of GDP). The debt must be continuously rolled over. Annual interest payments on U.S. Treasuries now exceed the defense budget. Historically, excessive debt has eroded confidence in a hegemonic power's reserve currency. Just as Spain could sustain its fiscal position only as long as the silver of its South American mines kept flowing, the United States can service this debt as long as dollar demand holds. The day that condition changes, the arithmetic changes with it. Whether America is structurally exceptional, or whether the reckoning simply has not yet arrived, remains an open question.
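The interest-versus-defense comparison depends on an average rate the text does not give. The figures below make the assumptions explicit; both the rate and the defense-budget comparator are illustrative orders of magnitude, not official numbers.

```python
# Interest burden implied by the debt stock, under an assumed average rate.
DEBT = 36e12              # U.S. national debt, 2025 (chapter's figure)
AVG_RATE = 0.033          # assumed average rate on outstanding Treasuries
DEFENSE_BUDGET = 0.9e12   # assumed order of magnitude, for comparison only

interest = DEBT * AVG_RATE
print(f"${interest / 1e12:.2f}T")   # $1.19T at the assumed rate
print(interest > DEFENSE_BUDGET)    # True
```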

The BRICS+ nations are pursuing de-dollarization. After Russia was cut off from SWIFT following the invasion of Ukraine, it shifted energy payments to the yuan. Iran sells oil in yuan and rupees. Saudi Arabia publicly mentioned for the first time the possibility of selling oil in currencies other than the dollar. India is expanding rupee settlement agreements. The volume of these transactions does not yet threaten dollar dominance. But a direction is forming — and that is different from ten years ago.

China's digital yuan (e-CNY) represents another strategy. Not a frontal replacement of the dollar, but the construction of a parallel channel that operates outside the dollar system. In January 2026, China tightened overseas remittance regulations, requiring detailed transaction records for transfers exceeding 5,000 yuan (CNY) or $1,000, and extending record retention from five years to ten. Capital outflow controls are tightening even as yuan direct-payment channels expand. The strategy is to increase yuan settlement in infrastructure investment in developing nations. The long-term goal is to create transaction pathways that bypass SWIFT.

This is not an immediate threat to dollar hegemony. It is, however, a trajectory in which the dollar system's capacity as a sanctions weapon gradually weakens. The more the United States weaponizes SWIFT sanctions, the greater the demand for sanctions-evasion infrastructure. Paradoxically, the weaponization of the dollar fuels demand for dollar alternatives.

As of March 2026, the dollar remains the blood type of the global AI economy. But efforts to change that blood type are underway. Success would require a long time yet. That the effort exists at all, however, marks a different world from twenty years ago.


Section D: The Double Helix — The Self-Reinforcing Structure of Financial and Technological Hegemony

The Lesson of Britain

In 1815, the Napoleonic Wars ended. Britain stood as the world's sole great power.

The City of London was the center of global finance. The British pound sterling was the reference currency of international trade. The Royal Navy controlled the trade routes of the Atlantic and Indian Oceans. The factories of Manchester and Birmingham produced half the world's cotton textiles. The steam engine belonged to Britain.

These two things — the pound and the steam engine — interlocked. Factories selling cotton textiles worldwide generated trade surpluses. Those surpluses flowed into London and became financial capital. That capital was reinvested in building more factories. More factories produced more cotton textiles. It was a cycle.

Financial hegemony reinforced technological hegemony. British firms raised capital at low interest rates, thanks to global demand for pound-denominated assets. Their cost of capital was lower than that of German or French firms. They could build more factories, more cheaply. Those factories generated more trade surpluses. A double helix.

This structure persisted for a hundred years.

Then the cracks appeared.

The first crack was the diffusion of technology. Machines get copied.

In 1774, Britain enacted the Act for the Preservation of the Art of Making Engines, simultaneously criminalizing the export of industrial machinery and the emigration of skilled workers. Violations carried fines of up to 500 pounds and twelve months' imprisonment. Britain intended to maintain its technological monopoly through legislation.

Samuel Slater, twenty-two years old, had apprenticed at the Arkwright spinning mill on the banks of the River Derwent. In 1789, he boarded a ship without a single blueprint — the entire design of the spinning frame stored in his memory. He landed in Rhode Island and by 1793 had completed the first water-powered textile mill in the United States. British law classified him as a criminal. America called him "the father of American manufacturing."

Between October 2024 and May 2025, seizures of NVIDIA H100 and H200 GPUs smuggled into China exceeded $160 million. Where Slater carried blueprints in his head, modern smugglers hid GPUs in shipping containers. The form has changed, but the structure is the same. Technology controls always generate workarounds.

Germany imported and imitated British machinery. Technology spread along similar pathways.

After absorbing British technology, Germany caught the next wave that Britain missed: the Second Industrial Revolution of chemicals, electricity, and steel. Germany's applied-chemistry education system fed an industry that captured 90% of the global market for organic chemicals. German steel production surpassed Britain's in 1893. By 1913, Germany's share of world manufacturing output was 14.8%; Britain's was 13.6%.

The second crack was financial overextension. London's financial markets made massive loans to Argentina, Russia, and China, bankrolling imperial expansion. Then World War I erupted. The war turned those loans into bad debts. Britain borrowed from the United States during the war. A creditor nation before the conflict, Britain emerged as a debtor nation afterward.

The third crack was decisive: the rise of America. In 1880, U.S. steel production surpassed Britain's for the first time. By 1900, the United States was the world's leading steel producer. In 1917, the United States became the world's creditor, lending dollars to the wartime combatants and initiating the shift of the world's financial center from London to New York. After 1918, London still mattered, but it was no longer singular.

Technological hegemony and financial hegemony transferred together.

The Twenty-First-Century Double Helix

America's structure today is identical in form to Britain's.

Reinforcing cycle one: AI and cloud services generate a technology trade surplus. That surplus accumulates as Wall Street and venture capital. That capital flows into OpenAI and NVIDIA. The growth of OpenAI and NVIDIA creates more AI services.

Reinforcing cycle two: Global demand for dollar-denominated Treasuries lowers America's borrowing costs. Low borrowing costs make Big Tech's $650 billion in CapEx possible. That CapEx produces more powerful AI models and infrastructure. More powerful AI generates more dollar-denominated transactions.

This is the double helix in progress.

Three Current Fracture Lines

Apply Britain's lesson to America, and the same three potential fracture lines emerge.

The first candidate is technology diffusion. DeepSeek arrived. It built a frontier model for $6 million. That is the modern incarnation of the principle that technology gets copied. As open-source AI models — LLaMA, DeepSeek, Mistral — proliferate, the possibility of monopolizing AI technology diminishes. China occupies nine of the top ten global open-weight models (ChinaTalk, based on certain benchmarks in 2025). Just as Germany absorbed British machinery and then surged ahead in the Second Industrial Revolution, the possibility that China absorbs foundational AI technology and leads the next wave — reasoning, agents, robotics — remains open.

The second candidate is financial overextension. The $36 trillion national debt is structurally analogous to Britain's excessive lending. The United States can service this debt because reserve-currency status allows it to refinance at low interest rates. But this structure functions only as long as confidence in the dollar holds. If confidence wavers, interest rates rise; if rates rise, debt-servicing costs rise; if debt costs rise, Big Tech's CapEx capacity shrinks. This is not an immediate threat, but it is a structural vulnerability.

The third candidate is the rise of a competitor. China has not yet caught up with the United States. According to Epoch AI, the gap between Chinese models and American frontier models is three to six months as of 2025. Yet the direction is convergence. At the end of 2024, China's share of the global open-source AI market was 1.2%. By August 2025, it was 30%, a twenty-five-fold increase in eight months. DeepMind CEO Demis Hassabis assesses the gap as "a matter of months."

Yet a decisive difference between Britain and America must be noted. Britain operated its technological hegemony alongside a physical empire. When the empire crumbled, technological hegemony faltered with it. America operates its technological hegemony through dematerialized software and standards. Unlike steam engines, algorithms can be copied without the original disappearing. The CUDA ecosystem does not vanish when China adopts Ascend. Call this a "protocol empire": a hegemonic structure whose power rests not on physical territory but on the standards, platforms, and software ecosystems that others must adopt to participate. This characteristic gives it the potential to outlast Britain's dual hegemony.

At the same time, however, the same dematerialization accelerates the diffusion of AI technology. Physical machines required ships to transport. Algorithms travel over the internet.

The Paradox of 2025: The Contradictions of Export Controls

In December 2025, the Trump administration briefly approved the export of NVIDIA H200s to China. NVIDIA's China revenue share is approximately 13%. The logic: losing this revenue would shrink R&D funding and, over the long term, weaken American AI hegemony.

But the opening never translated into shipments. In January 2026, BIS (Bureau of Industry and Security) shifted the review status of H200 exports to China from "presumption of denial" to "case-by-case review." Simultaneously, the Trump administration imposed a 25% tariff on advanced AI chips from outside U.S. supply chains. In the end, NVIDIA's China H200 revenue was zero. Approved, but never shipped.

What does this paradox reveal? Export controls exist to impede China's AI development. But export controls simultaneously reduce NVIDIA's revenue. When NVIDIA's revenue falls, R&D investment declines, and America's technological edge narrows. Corporate interest and national security collide.

Britain repealed its machinery export ban in 1843, judging that the revenue from selling machines outweighed the risk of technology diffusion. The United States faces the identical dilemma today.


Volume 1 Connection: From Arkwright to Jensen Huang

Richard Arkwright opened the world's first water-powered spinning mill in Cromford, a small village on the banks of the River Derwent, in 1771. The story that he invented the spinning frame is only half true. Arkwright was a systems designer before he was an inventor.

What Arkwright created was not a spinning frame but a factory system. He arranged machines within a building, organized workers into shifts, and integrated the entire flow from raw cotton supply to finished-goods shipment. What had been dispersed cottage industry became the concentrated system of the factory.

In the language of Volume 1, Arkwright did not seize the execution layer — he created the design layer. Not weaving itself, but the system that made weaving possible.

Jensen Huang is the Arkwright of the AI age.

The comparison is structurally precise.

Arkwright → Jensen Huang / NVIDIA
Spinning frame → GPU (Blackwell B200)
Cromford Mill → DGX supercomputer cluster
Patent litigation to block competitors → CUDA ecosystem imposing switching costs
Formation of the factory-owner class → Formation of the AI-company class
Destruction of the handloom weaver class → Threat to the junior software developer class

But there is a decisive difference.

Arkwright's factory system functioned fully only within Britain. Raw cotton came from India. Workers came from Lancashire. Markets lay in the colonial empire. Arkwright's fate was bound to that of the British state. When the British Empire faltered, Lancashire's factories faltered with it.

Jensen Huang's GPU empire is transnational. Design happens in Santa Clara, California. Manufacturing is outsourced to TSMC in Taiwan. High-bandwidth memory (HBM) comes from SK hynix and Samsung in South Korea. Sales span the globe. Usage spans every continent.

This transnationality makes NVIDIA stronger than Arkwright. It is not tethered to the fate of a single nation. At the same time, this transnationality collides with export controls, the attempt at "re-nationalization." When the U.S. government says, "NVIDIA is an American company, so do not sell to China," it clashes with the logic that "We are a global corporation, and China revenue belongs to our shareholders."

Just as Arkwright tried to block competitors through patent litigation but ultimately lost his patent case in 1785, NVIDIA's CUDA monopoly will eventually face its own challenge. Patents expire. Yet even after Arkwright lost the patent case, the factory management system he built remained the standard. Even if NVIDIA loses its CUDA monopoly, the advantage of being the company that first defined the standard for AI computation will endure.

"If Arkwright increased the speed of spinning cotton a hundredfold, Jensen Huang increased the speed of thought a hundredfold. What the two men share is not that they built machines, but that they built ecosystems on top of machines."

And the core formula from Volume 1 (technology, capital concentration, social instability, institutional redesign) is operating once again. The technology called AI concentrates capital in NVIDIA. That capital flows into the Big Four's $650 billion in investment. The AI produced by that investment reshapes labor markets. Social instability begins. Institutions have not yet caught up.

In Volume 1, sixty-four years elapsed before the Factory Acts — from the first factory in 1769 to the Factory Act of 1833. Whether the historical band of institutional adaptation (fourteen to sixty-four years) can be compressed in the AI age remains unknown. But one thing is certain: the speed is different. Arkwright's factory was visible only in Lancashire. Jensen Huang's GPUs are operating simultaneously across the entire world.


Supplementary Section: Structural Analysis for Investors

What this section requires is the language of analysis: not narrative, but framework.

Structural Concentration in the GPU Supply Chain

NVIDIA's monopoly rests on three concentrations.

Design concentration: NVIDIA monopolizes GPU design. AMD competes but faces a wide software ecosystem gap. The custom chips of Google, Amazon, and Microsoft are confined to internal use.

Manufacturing concentration: NVIDIA GPU fabrication depends on TSMC's fabs in Taiwan. TSMC's foundry market share was 64% in 2024 and exceeded 66% for the full year of 2025. Without ASML's EUV (extreme ultraviolet) lithography equipment, 3- to 4-nanometer processes are impossible. ASML is the world's sole EUV manufacturer. It has not sold a single EUV machine to China. This double concentration forms the physical foundation of U.S. export controls.

Ecosystem concentration: CUDA is proprietary. Millions of developers have written CUDA-optimized code. That codebase depends on NVIDIA. That dependency is the switching cost.

Unless all three concentrations collapse simultaneously, NVIDIA's monopoly endures. And breaking even one of them is difficult in the short term.

The Risk Structure of Big Tech CapEx

The $650 billion investment carries three risks.

Demand risk: If actual enterprise spending on AI services grows more slowly than expected, infrastructure becomes oversupplied. As of 2026, the bulk of AI consumption remains concentrated in the training phase. The inflection point at which inference demand surpasses training demand will be the watershed for monetization.

Efficiency risk: If DeepSeek-style optimization accelerates, fewer GPUs will be needed to achieve the same performance. CapEx becomes overinvestment. If Jensen Huang's Jevons Paradox counterargument proves correct, this risk dissolves.

Power risk: If data center power demand outpaces power supply capacity, bottlenecks emerge in training and inference throughput. The pace of U.S. grid upgrades may not keep up with the pace of AI investment.

One fact remains. The four companies spending this money have each wagered tens of billions of dollars that AI is not a bubble. Their information is better than that of outside investors. They could be wrong. But if they are wrong, the scale of the AI bubble will dwarf the dot-com bust.

Investment Implications of the Dollar-GPU Double Helix

The coupling of dollar reserve-currency status and GPU monopoly presents investors with three time horizons.

Short term (2026-2028): Near-term earnings visibility is high. A confirmed backlog of $320 billion is already in place, and 40% of Big Tech CapEx flows into GPU servers, most of it through NVIDIA. FY2026 results have confirmed this structure.

Medium term (2028-2032): Efficiency gains and the acceleration of custom chip development could reduce NVIDIA's growth rate. But rising model complexity and surging inference demand could offset that decline. Which force prevails is unknowable as of 2026.

Long term (beyond 2032): Quantum computing, neuromorphic chips, or architectures not yet imagined will eventually replace or complement GPUs. That is the next transformation of the double helix.
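The short-term figures above allow a rough consistency check. The sketch below applies the 40% GPU-server share to the chapter's 2026 CapEx forecast range; treating that share as uniform across the range is an assumption for illustration, not a sourced estimate.

```python
# Rough consistency check using the chapter's figures.
# Assumption (for illustration only): the 40% GPU-server share
# applies uniformly across the 2026 CapEx forecast range.

capex_2026 = (635e9, 665e9)  # Big Four 2026 CapEx forecast, USD
gpu_server_share = 0.40      # share of CapEx flowing into GPU servers
backlog = 320e9              # confirmed FY2027 backlog, USD

implied = tuple(x * gpu_server_share for x in capex_2026)
for label, v in zip(("low", "high"), implied):
    print(f"{label}: ${v / 1e9:.0f}B")
# Implied annual GPU-server flow: $254B-$266B, against a $320B backlog.
```

On these numbers, a single year of forecast GPU-server spending already approaches the size of the confirmed backlog, which is why near-term earnings visibility is described as high.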

The investment implications for the dollar as reserve currency are different in kind. The dollar is not a direct investment target but a structural precondition for U.S. Big Tech investment. In a scenario where dollar confidence erodes, the entire U.S. tech valuation complex is repriced. That would register not as a "dollar crisis" but as a shock on the scale of an "AI bubble collapse."


Supplementary Section: A Comparative Counterbalance — China's Different Calculus

$6 Million vs. $635 Billion

The contrast in numbers poses a question.

DeepSeek V3's training cost: approximately $6 million. Big Tech's combined 2026 AI CapEx: $635 billion to $665 billion. The ratio exceeds 100,000 to 1. One side built a frontier model for $6 million. The other is pouring in $650 billion.

How should this be interpreted?

Simplification breeds misunderstanding. DeepSeek's $6 million is the cost of the final training run. It excludes prior research and development, data preparation, and infrastructure construction. According to SemiAnalysis, DeepSeek's total server CapEx is estimated at approximately $1.6 billion, with cluster operating costs accounting for $944 million. Still hundreds of times less than U.S. Big Tech, but the $6 million figure is only part of the picture.
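The two readings of the gap can be made explicit with back-of-the-envelope arithmetic from the chapter's own figures: the headline $6 million run cost versus the SemiAnalysis total-CapEx estimate.

```python
# Back-of-the-envelope ratios from the chapter's figures.
deepseek_headline = 6e6       # cost of the final training run only, USD
deepseek_total_capex = 1.6e9  # SemiAnalysis estimate of total server CapEx, USD
big_tech_capex = 635e9        # low end of the 2026 Big Four forecast, USD

print(f"headline ratio:    {big_tech_capex / deepseek_headline:,.0f} : 1")
print(f"total-CapEx ratio: {big_tech_capex / deepseek_total_capex:,.0f} : 1")
# headline ratio:    105,833 : 1
# total-CapEx ratio: 397 : 1
```

The headline number yields a six-figure ratio; the fuller accounting yields roughly 400 to 1. "Hundreds of times less" is the defensible claim; "a hundred-thousandth of the cost" is the misleading one.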

The direction, however, is clear. DeepSeek developed more efficient algorithms under constraint. As export controls on the NVIDIA H100 and H800 tightened, it found ways to do more with fewer chips. Constraint did not kill innovation — it gave birth to a different kind of innovation.

This is the same pattern observed in Volume 1. When Lancashire's handloom weavers were losing their livelihoods, some developed different skills. Constraint pushes some aside, and forces others in new directions.

China's AI Efficiency — Structural Reasons

China's AI efficiency is not simply a matter of making do. There are structural reasons.

Chinese data center electricity costs are less than half of those in the United States. In 2024, China's net power capacity additions totaled approximately 430 GW — more than fourteen times the U.S. figure of roughly 30 GW (National Energy Administration / Jefferies). China's projected surplus power capacity by 2030 is approximately 400 GW. When electricity costs half as much, inference costs half as much. When inference costs half as much, the same budget delivers twice the AI services.

Chinese AI model pricing runs one-sixth to one-quarter of U.S. levels. DeepSeek R1's API cost is $0.55 per million input tokens; OpenAI o1 charges $15 for the same, a gap of roughly twenty-seven-fold on input tokens alone. If Chinese AI services can capture global market share at this price structure, U.S. Big Tech's AI monetization takes a direct hit.

As of February 2026, China's frontier models (Alibaba's Qwen 3.5, ByteDance's Doubao 2.0, GLM-4.7, Kimi K2.5) deliver performance that matches or closely approaches their Western counterparts. Their prices are one-sixth of American levels.
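The pricing gap compounds at scale. A minimal sketch, using only the per-million-token prices quoted above and a hypothetical monthly volume of one billion input tokens chosen purely for illustration:

```python
# Price-gap sketch using the chapter's quoted API prices (input tokens only).
deepseek_r1_input = 0.55  # USD per million input tokens
openai_o1_input = 15.00   # USD per million input tokens

ratio = openai_o1_input / deepseek_r1_input
print(f"input-token price gap: {ratio:.1f}x")  # ~27.3x

# Hypothetical workload: 1 billion input tokens per month.
tokens_millions = 1_000
print(f"DeepSeek R1: ${deepseek_r1_input * tokens_millions:,.0f}/month")
print(f"OpenAI o1:   ${openai_o1_input * tokens_millions:,.0f}/month")
```

At that volume the same workload costs $550 on one price sheet and $15,000 on the other, which is the mechanism behind the "direct hit" to U.S. monetization described above.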

This is the friction point of the double helix. In a world where dollar-denominated AI services dominate, the emergence of cheaper non-dollar AI services could reduce the dollar's share of AI transactions. This has not happened yet. But the direction is forming.


Transition: Two Sides of the Same Coin

Here lies the connection: "the concentration of capital" (Chapter 4) and "the exclusion of labor" (Chapter 5) are two sides of the same coin. While NVIDIA headquarters earns $6,400 every second, a paralegal position at a Chicago law firm quietly disappears — not through termination but through attrition. The vacancy is simply never filled. A silent displacement.

On one side of the coin: $650 billion. On the other: the displaced.

Dollar reserve-currency status lowers the cost of capital, and that capital flows into GPU clusters that automate 69% of paralegal work. While the world's surplus capital concentrates in Santa Clara, the tools that concentration produces arrive simultaneously on white-collar desks around the globe.

As dollars and GPUs reshape the world, people are pushed aside by that reshaping. Who are they, and how are they displaced?

It is time to turn the coin over.


NVIDIA's 92% GPU monopoly rests not on hardware but on the CUDA ecosystem. The Big Four's $650 billion in AI investment is made possible by the low cost of capital that dollar reserve-currency status provides. These two forces — dollars and GPUs — form a double helix, each reinforcing the other. DeepSeek's $6 million frontier model, China's energy cost advantage, $36 trillion in national debt, and BRICS+ de-dollarization efforts all apply friction to this double helix. Just as Britain held the dual hegemony of the pound and the steam engine for a century before losing it, how long the United States can sustain the same structure is being decided right now.


Next chapter preview: When the AI built with $650 billion of investment starts coming for your colleague's job, are you among the displaced or the discerning? What blue-collar workers in the Rust Belt experienced thirty years ago, white-collar workers are beginning to experience now. The target has changed and the speed has changed. But the structure is the same.


Reference: Key Data in This Chapter

Metric | Value | Source
NVIDIA GPU market share (2025 H1) | 92% | Carbon Credits
NVIDIA AI chip market share | 90% | PatentPC
NVIDIA AI data center revenue share | 86% | Visual Capitalist
NVIDIA FY2026 annual revenue | $215.9B (+65% YoY) | NVIDIA official
NVIDIA Q4 FY2026 revenue | $68.1B (+73% YoY) | NVIDIA official
Cumulative Blackwell sales | ~$180B | Motley Fool
FY2027 backlog | $320B | Analyst estimates
Big Four CapEx 2025 | ~$400B | CNBC
Big Four CapEx forecast 2026 | $635B-$665B | CNBC
Amazon 2026 CapEx | ~$200B | CNBC
Alphabet 2026 CapEx | $175B-$185B | CNBC
Microsoft 2026 CapEx (annualized) | ~$145B | CNBC
Meta 2026 CapEx | $115B-$135B | CNBC
AI infrastructure direct allocation | ~75% | IEEE ComSoc
Dollar share of global FX reserves (2025 Q2) | 56.32% | IMF COFER
Yuan share of FX reserves | ~2% | IMF COFER
Dollar peak share (2001) | 72% | St. Louis Fed
U.S. national debt | $36T+ | Official
DeepSeek V3 GPU training cost | ~$6M | Analytics Vidhya
Huawei Ascend 910C yield | ~40% | SemiAnalysis
Huawei Ascend 2026 production target | 1.6M dies | Bloomberg/SCMP
H100/H200 smuggling seizures | $160M+ | CNBC
British machinery export ban | Enacted 1774, repealed 1843 | Historical record
U.S. steel surpasses Britain | 1880 | Historical record
German steel surpasses Britain | 1893 | Wikipedia