Wednesday night, November 30, 2022.
A startup in San Francisco opened a website. No press conference. No formal announcement. Its founder, Sam Altman, posted a brief note on Twitter. "Try ChatGPT." That was it.
On the first day, nothing seemed to happen. The second day, the same. On the third day, word of mouth ignited. A lawyer tried summarizing case law. A translator tested it in her specialty language. A programmer asked it to write code.
Every time a result came back, the reaction was the same. Awe and terror, simultaneously.
Something similar happened at a law firm in Seoul. A legal researcher with twelve years of experience opened his laptop after work. He typed in a case-law search query — a task that normally took three hours. The result came back in thirty seconds.
It was not perfect. Some citations were inaccurate. But the overall structure was startlingly precise. He stared at the screen for a long time. The foundation of his professional pride had just shifted beneath him.
Within five days, one million people had logged on. Within two months, the number passed one hundred million. Instagram took two and a half years to reach the same figure. TikTok needed nine months. A UBS report put it plainly: "In twenty years of tracking the internet, we have never seen a consumer application spread this fast."
Paul Graham of Y Combinator noticed something else. "What is surprising about the reaction to ChatGPT is not just how many people are amazed. It is who they are." Something fundamental was happening.
Google declared an internal "Code Red." The company that had dominated the search market for twenty years went into emergency mode. The Atlantic named ChatGPT the "Breakthrough of the Year" for 2022. One journalist's phrase captured it precisely: "The first moment ordinary people experienced the power of modern AI firsthand."
In Part 2, we tracked a hundred and fifty years of machines replacing human muscle. From the steam engine to the factory system, from the handloom weaver's collapse to the redesign of society. Chapter 11 ended with Britain gaining and losing hegemony — leaving behind a question: "If the first Industrial Revolution replaced muscle, what does the next explosion replace?"
Now Part 3 begins. Carrying the experience of two eras, we enter the third. The world after Rome's roads and concrete, after Britain's steam engines and cotton mills.
What is being replaced this time is not muscle. It is cognition. The ability to think, to analyze, to judge. The domain in which humans take the greatest pride.
1. The Transformer — The Steam Engine of Cognition
In 2017, eight researchers at Google Brain published a paper. Its title: "Attention Is All You Need." The architecture it proposed was the Transformer. As of 2025, the paper has been cited more than 173,000 times. It is one of the most cited papers of the twenty-first century.
What the Transformer does is startlingly simple. "Predict the next word." Look at the preceding words in a sentence and guess what comes next. That is all. Execute it at sufficient scale, and something unexpected happens.
In Chapter 7, we saw Newcomen's steam engine. A machine that converted thermal energy into kinetic energy. The principle was straightforward: boil water, push a piston with steam. The Transformer works the same way. It converts text data into predictive capability.
Just as the steam engine spread from cotton mills to mining, transportation, and iron smelting, the Transformer has proliferated. Since 2020, it has expanded beyond text into images, audio, video, and robotics. DALL-E (2021) generated images from text. Sora (2024) generated video from text. A single architecture is penetrating nearly every domain of cognition.
The difference lies in scale. Before the Transformer, AI research was a kind of alchemy. Which architecture was better, which algorithm worked — answers depended on empirical trial and error.
In 2020, Kaplan et al. at OpenAI changed the game. What their "Scaling Laws" discovered was this: model performance follows a power law across three variables — the number of parameters, the size of the dataset, and the compute invested in training. Increase all three, and performance rises. Predictably. The relationship held across seven or more orders of magnitude.
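The predictability the scaling laws describe can be sketched in a few lines of Python. This is a minimal sketch: the constants below are illustrative placeholders in the spirit of the Kaplan et al. fits, not authoritative values from the paper.

```python
# A minimal sketch of a scaling law: loss falls as a power law in model size.
# N_C and ALPHA_N are illustrative constants, not the paper's fitted values.
N_C = 8.8e13      # normalizing constant, in parameters (assumed)
ALPHA_N = 0.076   # power-law exponent (assumed)

def predicted_loss(n_params: float) -> float:
    """Predicted loss as a power law in parameter count: L(N) = (N_C / N) ** ALPHA_N."""
    return (N_C / n_params) ** ALPHA_N

# Each 10x increase in parameters improves loss by the same constant factor,
# which is why performance could be forecast across many orders of magnitude.
improvement_per_oom = predicted_loss(1e9) / predicted_loss(1e10)
print(round(improvement_per_oom, 3))  # 1.191: the loss ratio between models 10x apart
```

The key property is not the specific constants but the shape: on a log-log plot the relationship is a straight line, so extrapolating it to larger models becomes an exercise in arithmetic rather than alchemy.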
This was the transition from alchemy to chemistry. From "find a cleverer algorithm" to "train a bigger model on more data." Just as Watt measured steam engine efficiency scientifically, AI researchers could now predict performance. Yet this chemistry still has an incomplete periodic table: the total quantity of performance is predictable, but the specific form in which it manifests remains unknown territory.
In 2022, Hoffmann et al. at DeepMind refined the law further. The Chinchilla study revealed the "direction of scale." Increasing parameters alone is not enough. Data must grow at the same rate. The optimal ratio is roughly twenty tokens per parameter.
Chinchilla trained 70 billion parameters on 1.4 trillion tokens. Gopher trained 280 billion parameters on 300 billion tokens. Chinchilla won. A model four times smaller, trained on 4.7 times more data, outperformed the giant.
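As a quick sanity check, the rule of thumb reproduces the numbers above. This sketch assumes the rounded 20-tokens-per-parameter ratio cited in the text, a heuristic rather than an exact constant.

```python
# Chinchilla rule of thumb: compute-optimal training uses roughly 20 tokens
# per parameter. The ratio is a rounded heuristic, not an exact constant.
TOKENS_PER_PARAM = 20

def optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal number of training tokens for a model."""
    return n_params * TOKENS_PER_PARAM

# Chinchilla: 70 billion parameters -> 1.4 trillion tokens, as in the text.
print(optimal_tokens(70e9) == 1.4e12)  # True

# Gopher trained 280 billion parameters on only 300 billion tokens:
# about 1 token per parameter, far below the optimum.
print(round(300e9 / 280e9, 2))  # 1.07
```

By this yardstick, Gopher was undertrained by a factor of nearly twenty, which is why the smaller but better-fed Chinchilla could beat it.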
In Chapter 2, when we discussed Rome's operating system, we noted that no empire built roads without also building a relay-station network. The same holds for AI. The three axes of scale must grow in balance.
The results followed. From GPT-1 (2018) to GPT-4 (2023), model parameters increased 15,000-fold in six years. From 117 million to 1.76 trillion. The compute invested in AI training has grown roughly ten billion-fold since 2010 — an annual average of 4.4x. If Moore's Law doubles capacity every two years, AI compute has doubled every five to six months.
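The doubling-time claim follows directly from the growth rate. A quick check, assuming the 4.4x annual average stated above:

```python
import math

# Doubling time implied by a 4.4x average annual growth in training compute.
ANNUAL_GROWTH = 4.4

months_to_double = 12 * math.log(2) / math.log(ANNUAL_GROWTH)
print(round(months_to_double, 1))  # 5.6, matching "every five to six months"

# Compounding 4.4x per year over the 15 years from 2010 to 2025 yields
# growth in the billions-fold range, the order of magnitude cited in the text.
total_growth = ANNUAL_GROWTH ** 15
print(f"{total_growth:.1e}")  # ~4.5e9
```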
One analyst's projection summarizes the trend. Leopold Aschenbrenner, a former OpenAI researcher, frames it as an "OOM (order of magnitude) count." Compute scale-up contributes 0.5 OOM per year. Algorithmic efficiency gains contribute another 0.5 OOM per year. Combined, effective compute improves by 1 OOM — tenfold — every year. From GPT-2 to GPT-4, 3.5 to 4 OOM accumulated over four years. As a single analyst's extrapolation, however, these figures should be read as a leading scenario, not a settled future.
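Aschenbrenner's OOM bookkeeping is easy to reproduce. The sketch below uses his stated assumptions, 0.5 OOM per year each from compute and from algorithms, which, as noted above, are one analyst's extrapolation rather than measured constants.

```python
# Order-of-magnitude (OOM) arithmetic, following Aschenbrenner's framing.
# The per-year rates are his assumptions, not measured constants.
COMPUTE_OOM_PER_YEAR = 0.5     # scale-up of hardware spend
ALGORITHM_OOM_PER_YEAR = 0.5   # algorithmic efficiency gains

def effective_compute_gain(years: float) -> float:
    """Multiplicative gain in effective compute after a given number of years."""
    total_oom = (COMPUTE_OOM_PER_YEAR + ALGORITHM_OOM_PER_YEAR) * years
    return 10 ** total_oom

print(effective_compute_gain(1))  # 10.0, i.e. tenfold per year
print(effective_compute_gain(4))  # 10000.0, i.e. 4 OOM over the GPT-2 to GPT-4 span
```

Because OOMs add while the underlying multipliers compound, small annual rates stack into enormous cumulative gains, which is the whole force of the framing.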
GPT-3's training cost was $4.6 million (2020). GPT-4 is estimated between $78 million and $800 million, depending on whether the figure reflects amortized compute or hardware acquisition costs. As of 2025, the training cost for a single next-generation frontier model exceeds $500 million to $1 billion. Anthropic CEO Dario Amodei has stated: "By 2024, a single training run was already approaching one billion dollars."
When scale grows large enough, unexpected things happen. In 2022, Wei et al. at Google Research reported a phenomenon they called "Emergent Abilities." Capabilities absent in small models appear suddenly once a model crosses a certain size threshold. To borrow a phrase from physicist Philip Anderson's 1972 Science paper: "More is Different."
A concrete example. Three-digit addition. A model with 6 billion parameters scored 1% accuracy. A model with 175 billion parameters scored 80%. This is not linear improvement. It is a nonlinear jump.
More striking examples exist. GPT-3 (2020) had zero ability to maintain meter and rhyme in poetry. GPT-4 (2023) writes sonnets. Foreign-language translation showed the same pattern. Below a certain scale, output was barely better than random. Cross the threshold, and quality leapt to the level of a professional translator. No one predicted that such abilities would emerge from "predict the next word."
More than one hundred such emergent abilities have been discovered. Advanced reasoning, in-context learning, analogy, symbolic computation. Scale up a model, and there is no way to predict in advance which capabilities will appear.
This is what distinguishes this revolution from its predecessors. The handloom weaver's decline was foreseeable. Everyone knew machines could weave faster. It was only a matter of time.
AI is different. Scale up, and unforeseen capabilities appear suddenly. A task impossible three years ago, AI accomplishes today. What it will accomplish tomorrow, no one knows. The direction of the threat itself is unpredictable. That is the real difference.
The MATH benchmark illustrates the speed. In 2021, AI scored 5% accuracy on math problem-solving. Researchers predicted "a fundamental breakthrough will be required." Three years later, accuracy exceeded 90%. No fundamental breakthrough. Simple scaling alone.
MMLU — a university-level knowledge test — followed a similar trajectory. It was effectively solved within three years of its creation. Inference costs dropped 1,000-fold in under two years. Structurally, this mirrors the Luddites of the Industrial Revolution insisting that "machines can never replace skilled craftsmen."
2. The Scope of Cognitive Automation — A Historical Reversal
The Industrial Revolution struck physical labor. Handloom weavers, spinners, coachmen. Low-skilled workers were displaced first; high-skilled artisans enjoyed relative protection. In Chapter 9, we saw this pattern through the weaver's collapse.
This time, the order is reversed.
In 2023, Eloundou et al. released "GPTs are GPTs," a study later published in Science. It quantified the reversal: 80% of the U.S. workforce has at least 10% of its tasks affected by large language models, and 19% has more than 50% of its tasks within the zone of impact.
LLMs alone can perform 15% of all tasks at equivalent quality but faster. Combined with LLM-powered software tools, 47 to 56% of tasks can be accelerated.
The critical finding: high-income, high-education occupations have greater exposure than low-income ones. Translation and interpretation, 76%. Legal services, 72%. Accounting and tax, 68%. The core tasks of the skilled middle class show the highest exposure.
McKinsey's 2024 updated estimate goes further. The median timeline for AI task automation has been pulled forward to 2045 — approximately ten years earlier than previous estimates. Sixty to seventy percent of working hours are theoretically automatable. For the first time in history, a wave of automation on this scale is crashing not on the factory floor but in the office.
Exposure is not replacement. What the Eloundou study shows is the scope in which AI can be applied to a task — not the conclusion that the person performing it will be fired. Exposure can result in augmentation rather than displacement.
In fact, an analysis by the Economic Innovation Group reveals a paradoxical result. Between 2022 and 2025, the increase in unemployment among the occupations most exposed to AI was just 0.30 percentage points. Among the least exposed occupations, it was 0.94 percentage points. The jobs more exposed to AI are more stable in employment.
This paradox draws a new class line. In the past, the dividing line was "skilled versus unskilled." Now a split is emerging between "highly skilled workers who can use AI" and "highly skilled workers whom AI replaces."
PwC's 2025 report puts numbers to this divide. The wage premium for AI-proficient workers reaches 56% — more than double the 25% recorded in 2023. Wage growth by AI exposure level sharpens the pattern further. Low-exposure occupations: 7.9%. Medium-exposure: 12.6%. High-exposure: 16.7%.
For investors, this data carries a message. The temptation of technological determinism whispers that "AI changes everything," but there is a lesson from Rome. Roads and concrete did not build the empire. Institutions and law organized the flow. AI opens possibilities. Capital and institutions determine the direction.
Projections from international organizations show the scale. The IMF estimates that 60% of jobs in advanced economies fall within AI's sphere of influence. Half stand to benefit from productivity gains; the other half face the risk of displacement. In low-income countries, the figure is 26% — a gap of 2.3 times.
The World Economic Forum offers another projection. Between 2025 and 2030, 170 million new jobs will be created while 92 million are eliminated — a net gain of about 78 million (+7%).
Goldman Sachs estimates that AI could displace the equivalent of 300 million full-time jobs globally. McKinsey projects that generative AI could create $2.6 to $4.4 trillion in annual economic value — a figure comparable to the entire GDP of the United Kingdom in 2021 ($3.1 trillion).
As in the Industrial Revolution, the steam engine made the factory possible, but the joint-stock company and the banking system supplied the capital, and factory acts and education laws reshaped society. Large numbers do not, by themselves, determine direction.
3. Evidence of a Productivity Explosion — and Its Paradox
The evidence is there. In 2023, the GitHub Copilot study produced the best-controlled large-scale field experiment to date. In a randomized controlled trial, developers using the AI coding tool completed tasks 55.8% faster. Less experienced developers gained the most.
Brynjolfsson et al.'s 2025 study shows similar results across a broader range. Tracking 5,179 customer-support agents at Fortune 500 companies, AI-assisted tools raised average productivity by 14%. For new hires and lower-skilled workers, the gain was 34%.
"Two months of experience with AI equals six months without it." The skill gap compresses. The value of experience shrinks. Here again is the paradox: the advantage tilts toward newcomers.
Capital is responding to the evidence. AI capital expenditure by the four largest tech companies is surging — from $256 billion in 2024 to $427 billion in 2025, with $650 billion projected for 2026. A 2.5x increase in three years.
NVIDIA is the beneficiary of this flow. It commands 90 to 95% of the AI chip data-center market (as of 2025). Revenue grew from $27 billion in fiscal year 2023 to $96.3 billion in fiscal year 2025. Market capitalization: $4.44 trillion (February 2026).
In a gold rush, the money goes not to those who pan for gold but to those who sell the pickaxes. NVIDIA is the pickaxe merchant of the AI age.
OpenAI's revenue reflects the same acceleration. $2 billion in 2023, $6 billion in 2024, over $20 billion projected for 2025. 3x per year. The fastest SaaS revenue ramp in history.
By the numbers alone, a productivity revolution appears already underway. 374 of the S&P 500 companies mention AI in their filings. Companies, at least, are talking about AI.
And yet there is a paradox.
A gap yawns between micro-level evidence and macro-level reality. Copilot's 55.8% speed improvement. Brynjolfsson's 14% productivity gain. At the level of the individual firm, AI clearly works. Zoom out to the national economy, and the story changes.
Despite all this investment and growth, AI's effect is barely visible in macroeconomic productivity data. In an NBER study surveying 6,000 CEOs, the majority responded that "AI's actual operational impact remains modest." The abandonment rate for AI pilot projects has risen from 17% in 2023 to 42% by the end of 2024 — a 2.5x increase.
In 1987, economist Robert Solow left behind a famous sentence: "You can see the computer age everywhere but in the productivity statistics." This "Solow Paradox" is repeating in the age of AI.
The pattern is familiar. Economist Brynjolfsson explains it as the "J-curve effect." General-purpose technologies demand massive complementary investment. Workflows must be redesigned, employees retrained, software rebuilt. While these investments are underway, measured productivity stagnates or even declines.
The IT revolution exhibited exactly this pattern. The productivity paradox of the 1970s and 1980s: companies adopted computers, but productivity metrics refused to budge. Then, between 1995 and 2005, labor productivity surged to an annual average of 2.5 to 3.0% — the ascending arm of the J-curve — up from 1.4 to 1.5% in the preceding period, a jump of 1.0 to 1.5 percentage points.
There was a reason for the delay. Buying a PC was not enough. Business processes had to be redesigned, workers retrained, new software developed. It took Walmart ten years to combine barcodes with inventory management systems and revolutionize distribution.
The same process is underway with AI. Though there is no guarantee that the J-curve's upswing will arrive.
Daron Acemoglu, the 2024 Nobel laureate in economics, estimates AI's ten-year GDP effect at a maximum of 1%. This contrasts starkly with Goldman Sachs's estimate of 7%. Acemoglu's argument: the productivity gains visible in early experiments come from "easy tasks," and extrapolating them across the entire economy is excessive. Goldman Sachs's figure assumes AI raises labor productivity across advanced economies by 1.5 percentage points per year.
Somewhere between the optimists' 7% and the skeptics' 1%, reality lies. One thing is certain: the technology is already here. How far its effects reach depends on the allocation of capital and the response of institutions.
4. The Difference in Speed — This Time It Is Faster
Compare the speed of technological diffusion, and a pattern of acceleration appears. The steam engine took 50 to 70 years to reach wide adoption. Electricity took 30 to 40 years — 78% of American factories used electric power by 1929, up from about 5% in 1899. The internet took about 15 to 20 years.
Generative AI has reached 39% of American adults within two years of launch. The PC reached only 20% adoption three years after its release. The internet stood at 20% after two years. The time required to reach the same penetration level has been cut by more than half.
ChatGPT's weekly active users tell the story of this speed. January 2023: 100 million. December 2024: 300 million. February 2025: 400 million. By the end of 2025: 800 to 900 million. One-fifth of the world's internet-connected population uses a single AI service every week.
In Chapter 11, we compared the duration of hegemonic peaks. The Netherlands, 70 years. Britain, 50. The United States, 25. The same pattern appears in the speed of technological diffusion. From steam to electricity: 2x faster. From electricity to the internet: 2x again. Acceleration is accelerating.
What this difference in speed means for an individual is illustrated by the story of a translator living in Rome.
Call him Marco. Italian-English translator. Fourteen years of experience. His specialties were legal and financial documents — contracts, audit reports, regulatory filings. Spending thirty minutes on a single sentence to get the nuance of a term exactly right was a point of professional pride.
He worked freelance from a small apartment in central Rome. His income was not lavish, but it was stable.
In the summer of 2023, an email arrived from a regular client, a law firm in Milan. It announced a shift to machine translation post-editing, or MTPE. AI would produce the first draft; the translator would revise it. The per-word rate was cut by more than half.
Marco took on an MTPE assignment. The AI's first-pass translation was startlingly accurate. Eighty percent was usable as-is. The remaining twenty percent was where his fourteen years of expertise came into play — but in the client's eyes, that difference did not justify the higher rate.
In 2024, his income fell by 60%. New commissions began to thin out in his inbox. Where eight assignments a month once arrived, now there were two or three. By 2025, an 80% decline was projected. Across the Atlantic, a French-English translator in Quebec traced a similar arc — more than fifteen years of experience and a six-figure income collapsing at the same rate.
According to a 2024 survey by the Society of Authors in the U.K., more than a third of translators have lost work, and 43% report declining income.
Set this alongside the trajectory of the handloom weaver from Chapter 9, and the difference in speed becomes stark. The Lancashire weaver's wages fell by more than 80% over a generation — 25 to 30 years from peak to trough. Marco is experiencing the same magnitude of income loss in two to three years. The speed is roughly ten times faster.
The weaver had a generation's worth of time. His children could find different occupations. While the parents' generation collapsed, the next could become factory overseers or engine drivers. Marco does not have that time. The window for adaptation is compressed. This is the structural difference that the Displaced of the AI age confront.
Generalizing the translator's case to the entire AI era would be an error. Translation — text in, text out — is structurally among the most automatable occupations. Not every profession will experience this speed of collapse.
In the legal sector, AI usage increased 315% between 2023 and 2024. Seventy-nine percent of legal professionals use AI in daily tasks, yet employment remains stable. The difference depends on whether AI "replaces" the work or "augments" it.
The macro pattern of accelerating speed is difficult to deny. Steam engine: 50 to 70 years. Electricity: 30 to 40. Internet: 15 to 20. AI reached 39% of adults within two years. This acceleration compresses the time allowed for adaptation.
In Chapter 10, we saw that adapting to the Industrial Revolution took a hundred years. How much time does the AI age allow?
5. South Korea's Coordinates — 27% Displaced, 24% Discerning
For readers in South Korea, these figures may feel abstract. They should not.
South Korea ranks first among OECD members in AI adoption at firms with ten or more employees (30.28%). As of 2025, 55.7% of domestic companies use generative AI. By 2026, the figure is projected to reach 85%. Roughly half of Korean workers perform tasks using generative AI, saving an average of 1.5 hours per day.
Rapid adoption means a rapid opportunity to raise productivity. It also means people are displaced rapidly.
An analysis by the Bank of Korea provides more direct coordinates. AI adoption has the potential to raise South Korea's GDP by 4.2 to 12.6%. Productivity could improve by 1.1 to 3.2%. This also implies that AI could substantially offset the economic contraction caused by population decline.
At the same time, there is risk. Twenty-seven percent of all workers fall into the "high exposure, low complementarity" group. AI can replace their tasks, but they are poorly positioned to use AI to raise their own productivity.
By contrast, 24% belong to the "high exposure, high complementarity" group. They are exposed to AI, but they can wield it as a tool to increase their own value. Same technology, same exposure — opposite outcomes.
Twenty-seven percent and twenty-four percent. These numbers sketch the outline of South Korea's Displaced and the Discerning. A gap of just three percentage points. Two groups of nearly equal size are walking toward opposite futures.
The Korea Development Institute finds that AI adoption primarily reduces youth employment. In a world where the value of experience is compressed, the generation with the least experience paradoxically faces the greatest uncertainty. Brynjolfsson's research shows that "two months with AI equals six months of experience." This equivalence simultaneously erodes the skill advantage of incumbent workers and threatens to render new hires unnecessary altogether.
At the global level, lines of class restructuring are being drawn. A 2024 ILO analysis reveals overlapping inequalities. The share of workers highly exposed to generative AI is 34% in high-income countries and 11% in low-income countries — a gap of 3x.
Within the highest-exposure category, the share of women's employment is 4.7%, versus 2.4% for men — a 2x gap. The reason: clerical work is a primary source of female employment. Being Displaced is not determined by occupational category alone. Gender and geography intersect.
South Korea is flipping this coin faster than anyone else.
6. Transition — Technology Opens the Door. Capital Decides the Direction.
The foregoing pages have traced the contours of the third explosion. The Transformer architecture provided the foundation for cognitive automation. Scaling laws opened a predictable path where "scale creates intelligence." Emergent abilities produced unpredictable leaps along that path.
The scope of impact is unprecedented. Eighty percent of the U.S. workforce has at least 10% of its tasks affected by LLMs. High-skilled, high-income occupations stand on the front line of automation. The speed is equally unprecedented. The generation of time granted to the handloom weaver has been compressed to two or three years for the translator.
At the same time, the productivity paradox is playing out in real time. AI is everywhere, yet it is still invisible in macroeconomic statistics. Somewhere between a Nobel laureate's 1% and an investment bank's 7%, reality lies. Where exactly on the J-curve we stand, no one yet knows.
One thing is certain. Capital is already in motion. AI capital expenditure by the four largest tech companies has increased 2.5x over three years. NVIDIA's market capitalization has risen from $1 trillion to over $4 trillion in two years.
AI startups are absorbing more than half of all venture capital investment (52.5% as of 2025) — the largest concentration of capital into a single technology sector in history. Just as Roman capital flowed into the latifundia and Industrial Revolution capital concentrated in railways and factories, the river of capital is changing course.
One observation for the investor. During transitions of hegemony, the highest returns have never come from the core assets of the previous hegemon. As we saw in Chapter 11, the Discerning of 1870 to 1914 invested not in British cotton mills but in American railroad bonds. A similar question holds for the AI age. Where is value migrating? Toward AI adoption by existing industries? Toward AI infrastructure itself? Or toward entirely new industries that AI makes possible?
Technology opened the door. In Rome, it did. In the Industrial Revolution, it did. Now capital decides the direction.
In the next chapter, we examine where that capital is flowing, how, and why. Big Tech, data, and computing power — a new trinity. Rome's latifundia concentrated capital in land. The factories and railways of the Industrial Revolution concentrated it in machinery. The capital concentration of the AI age is happening in things you cannot see.
Data. Algorithms. Compute. Intangible assets are becoming the wellspring of wealth. A new form of capital concentration is producing a new kind of inequality.
End of Chapter 12. Next: Chapter 13 — New Forms of Capital Concentration: The Trinity of Big Tech, Data, and Computing Power