A Thursday in November 2025. Seoul.
A college entrance exam (suneung) hall. A high school teacher, assigned as a proctor, stared at the mathematics section. For thirty years he had prepared students for this test. Integration, probability and statistics, geometry. Everything he had taught was right there on the page. Eighteen minutes passed. The examinees bent over their desks, the scratch of pencils filling the hall.
He knew. More than half these problems could be solved by ChatGPT. He had tried it himself the year before. He typed in Question 30 — the so-called "killer question" — and the AI produced a full solution in twelve seconds. Most of his students spent thirty minutes on that problem. More failed it than solved it.
When the exam ended, the students would scatter along the branching paths of early admission and regular admission. Of them, 70.6 percent would enroll in university. The highest rate in the OECD. Would what he had taught them still hold value four years from now? In the silence of the exam hall, the question did not wait for an answer.
In the previous chapter, we saw two lives in 2025. A translator being displaced and an AI-native entrepreneur on the rise. For the third time, we arrived at the same conclusion: technology and capital move before institutions do. This chapter asks the question that follows. When do institutions catch up — and in what form?
We examine three domains. Education, labor law, taxation. These are the three arenas that provoked the fiercest debates after technology upended the economy — in Rome, during the Industrial Revolution, and now. And in all three, as of 2026, fundamental redesign has not arrived.
1. Education — The Most Refined Legacy of the Industrial Age
Education is, historically, the slowest institutional domain.
Recall the figures from Chapter 10. From the year Richard Arkwright built his first factory in 1769 to the creation of a national education system in England with the Forster Act of 1870 — 101 years. Of the four institutional domains — labor law, suffrage, education, and trade unions — education came last. Compulsory schooling took 111 years. Free education took 122.
Why is education always the slowest? Labor law faces resistance primarily from employers. Education makes every stakeholder anxious at once. The Church feared losing control over instruction. Factory owners dreaded the loss of child labor. Taxpayers refused higher taxes. Parents demanded a say in what their children were taught. In domains where interests are layered this densely, consensus is slow.
The product of that slow consensus remains with us today. Grade levels, timetables, standardized curricula, examinations. The skills that industrial society demanded determined the content of education. Standardization, time discipline, literacy, numeracy.
When Prussia introduced compulsory education in 1763, England was a century behind. By 1870, the literacy gap stood at 97 percent for Prussia versus 76 percent for England. Education spending was 0.2 percent of GDP in England, 1.6 percent in Germany — an eightfold difference. That gap became the structural reason Germany overtook Britain in the late nineteenth century.
Early investment in education determines industrial competitiveness. The lesson holds 150 years later.
The Paradox of the College Entrance Exam
South Korea is the most accomplished practitioner of that educational model.
Youth tertiary attainment: 70.6 percent, first in the OECD. PISA mathematics scores among the world's highest. The private tutoring market is among the largest on earth. From the perspective of industrial-age education, Korea is the finished product. The problem is that this finished product is the most vulnerable structure for the AI era.
What the national college entrance exam measures — memorization, pattern recognition, standardized problem-solving — is precisely what AI does best. According to the World Economic Forum, 39 percent of existing skill sets will be transformed or rendered obsolete between 2025 and 2030. Korea's private tutoring industry is optimized to maximize exactly the capabilities AI will replace. A system specialized in problems with fixed answers cannot teach the ability to navigate problems with no answer at all.
At the same time, adult participation in lifelong learning in Korea stands at just 33.1 percent. Degree attainment is the world's highest, but post-graduation learning is comparatively low. Korea is at once the culmination of industrial-age education and the starting point from which AI-age education must begin.
Reports indicate that AI-related academic dishonesty accounts for 60 to 64 percent of all cheating incidents in higher education institutions. This is not a question of student ethics. It is a symptom of an education system that has not adapted to AI. In a world where exams can be solved by AI, a system that measures competence through exams stops working.
AI threatens existing education. It also opens the possibility of alternative models.
Bloom's Promise, the Numbers in Practice
In 1984, educational psychologist Benjamin Bloom reported a striking result. Students who received one-on-one tutoring performed two standard deviations higher than students in conventional classrooms. But giving every student a personal tutor was impossible. This was "Bloom's 2-sigma problem." AI tutoring has the potential to solve it at scale.
The real-world numbers are more modest. VanLehn's 2011 meta-analysis found an effect size of d = 0.76 for intelligent tutoring systems — a synthesis of fifty studies. That is 38 percent of the two-sigma Bloom claimed. In a 2024 study by Kestin and colleagues, GPT-4-based AI tutors showed effect sizes of d = 0.73 to 1.3 in a randomized controlled trial with university physics students. Learning volume more than doubled, while time spent fell by 18 percent.
Not two sigma. But even one sigma is educationally meaningful.
Khan Academy's AI tutor, Khanmigo, offers evidence of scale. During the 2024-25 school year, users surged from 40,000 to 700,000. By early 2026, the figure was 1.4 million. Sal Khan had projected 100,000 users by 2025. The actual number exceeded his forecast by a factor of fourteen. The core design is Socratic: it does not give answers — it asks questions.
Evidence of scale and evidence of efficacy are different things. No independent, large-scale RCT on Khanmigo's learning outcomes yet exists. That 1.4 million people use it is proof of accessibility, not proof of learning gains.
Does It Take a Hundred Years?
The Korean government has begun to move. In March 2025, AI-powered digital textbooks were introduced for third- and fourth-grade elementary students, first-year middle school students, and first-year high school students — in English, mathematics, and information technology. Full implementation is targeted for 2028. The teacher training budget is 740 billion won over three years. A goal of cultivating one million digital professionals has also been set.
Singapore chose a different path. Its SkillsFuture program is driving a transition from a "degree-based" to a "competency-based" economy. Eighty percent of Singapore job postings carry no educational requirement. In the United States, 63 percent of registered voters say a four-year degree is not worth the cost. Graduate enrollment in computer science fell 15 percent in the fall of 2025. AI is eroding education in its own field first.
According to the WEF, 59 percent of the global workforce will need reskilling by 2030. Of these, 11 percent — 120 million people — are projected to receive no retraining at all. IDC estimates the global economic loss from AI skill shortages at $5.5 trillion by 2026. The cost of AI upskilling per employee is $1,400.
Fourteen hundred dollars and $5.5 trillion. The asymmetry reveals both an investment opportunity and the urgency of policy.
During the Industrial Revolution, educational adaptation took 101 years. In the AI era, there are no 101 years to spare. Accelerating forces exist — real-time information, existing educational infrastructure, global benchmarking, AI itself as an educational tool. But decelerating forces are powerful too. Not knowing what to teach is more paralyzing than not having schools at all. The structural mismatch between the technology change cycle (six to twelve months) and the curriculum revision cycle (five to ten years) is the core obstacle.
2. Labor Law — The Existence of Law and the Functioning of Law Are Not the Same
In 1833, the British Parliament passed the Factory Act. Laws had existed before. The Health and Morals of Apprentices Act of 1802. The Cotton Mills and Factories Act of 1819. The words were on the books, but there was no enforcement apparatus. The key innovation of the 1833 Act was the appointment of four salaried factory inspectors. A law without inspectors was a meaningless law.
Four inspectors to oversee four thousand factories. Physically impossible. Yet even that impossibility mattered, because it established a principle: the state can intervene in the labor conditions of private enterprise. On that principle, forty-five years of incremental expansion were built.
The same pattern is visible in the AI era.
The EU AI Act entered into force in August 2024. It classified employment-related AI as high-risk. Recruitment, promotion, dismissal, task allocation, performance monitoring — all covered. It imposed a prior notification obligation toward workers.
The EU Platform Workers Directive introduced a "legal presumption": if a platform exercises control and direction, an employment relationship is presumed. South Korea's AI Basic Act took effect in January 2026 — the world's second comprehensive AI regulatory law.
The speed of legislative response has accelerated. From the start of the Industrial Revolution to the first effective response: sixty-four years. In the AI era, from the technology's emergence to the EU AI Act: roughly two years. A first legislative response some thirty times faster.
But the existence of law and the functioning of law are not the same. The EU AI Act's high-risk obligations can be deferred. Full application may slip to 2027 or 2028. New York City's Local Law 144 mandated annual bias audits for automated employment decision tools. Two years in, the law drew criticism for lax enforcement. California's Proposition 22 affirmed platform workers' independent contractor status. For four years, no agency verified compliance with its wage and health insurance provisions.
Structurally identical to the Cotton Mills Act of 1819 — a law without inspectors.
When the Algorithm Became the Boss
Modern labor law was designed around four premises. Human workers operate under the direction of human managers. Working hours are measurable and finite. Ownership of work output is determined by the employment relationship. Hiring and firing are acts of human judgment.
AI is eroding all four premises simultaneously.
Sixty-one percent of U.S. companies use AI-based analytics to measure employee productivity and behavior. Seventy-four percent use online tracking tools. Ninety percent rely on monitoring tools. Fifty-six percent of monitored employees report stress and tension; among unmonitored employees, the figure is 40 percent. A sixteen-percentage-point gap.
Article 22 of the GDPR guarantees the right not to be subject to fully automated decision-making. "Meaningful human intervention" is required. When AI reports that "this employee's performance score is in the bottom 10 percent" and a manager glances at it for three seconds before clicking "approve," is that a "human decision"? The GDPR prohibits rubber-stamping. What qualifies as more than a rubber stamp remains undefined.
As of 2022, South Korea had 795,000 platform workers — 3 percent of total employment. In July 2023, the exclusivity requirement for industrial accident insurance was abolished, extending coverage to workers on multiple platforms. Protection has begun, but health insurance and pensions remain out of reach. The Supreme Court has been expanding precedent toward recognizing platform workers as employees.
At the federal level in the United States, as of March 2026, no comprehensive AI employment regulation exists. In 2024, more than 400 AI-related bills were introduced across forty-one states. The "No Robot Bosses Act" did not pass. The "Stop Spying Bosses Act" did not pass. The absence of a unified federal framework creates opportunities for regulatory arbitrage for corporations and gaps in protection for workers.
When surveillance intensifies and protection weakens simultaneously, the need for a safety net grows. For investors, this gap cuts both ways. The AI workplace surveillance market is booming, but no one knows when the regulatory backlash will arrive. The key is distinguishing companies that grow only while the arbitrage window is open from those that will adapt when it closes.
From the Annona to Guaranteed Income
When livelihoods are destroyed, demand for welfare follows. This pattern recurs across all three eras.
In Rome, when the latifundia displaced small farmers, grain subsidies were provided to the proletariat who migrated to the cities. The Lex Frumentaria of 123 BC was the beginning. By 58 BC, it became free. After that, no politician could abolish the annona.
During the Industrial Revolution, when machines replaced handloom weavers, the welfare system evolved. From the Elizabethan Poor Law of 1601 through the New Poor Law of 1834 to the Beveridge Report of 1942.
In the AI era, Universal Basic Income stands on that same continuum. As of 2025, more than one hundred guaranteed income programs are in operation worldwide.
Experimental evidence is accumulating. Finland (2017-2018) gave 2,000 unemployed individuals 560 euros per month for twenty-four months. The employment effect was negligible — annual days of employment increased by just six. Life satisfaction was 7.3 versus 6.8 for the control group. Mental health and cognitive function improved significantly. The Finnish government refused to extend the experiment. The negligible employment effect was read, politically, as "failure."
Stockton's SEED program (2019-2021) showed different results. One hundred thirty-one low-income residents received $500 per month for twenty-four months. Full-time employment rose from 28 percent to 40 percent. The control group went from 32 to 37 percent. A twelve-point gain versus five. Less than 1 percent of spending went to alcohol and tobacco. Income stability and mental health improvements held even through the COVID-19 pandemic.
Kenya's GiveDirectly experiment is the world's largest. 23,000 participants, 195 villages, a twelve-year timeline, a $30 million project. Self-employment among recipients increased. Labor supply did not fall. The key finding: payment structure determines outcomes. Large lump sums drove investment. Long-term UBI encouraged savings and risk-taking. Short-term UBI improved only nutrition and psychological well-being.
The three experiments suggest that UBI has moved from a question of "whether" to a question of "in what form." Within the next decade, most advanced economies are likely to implement some version of guaranteed income. But for the modifier "universal" to apply, the funding question must be resolved. That question belongs to the next section.
3. Taxation — How Do You Tax Wealth You Cannot See?
In 1799, British Prime Minister William Pitt introduced the income tax. He needed funds for the Napoleonic Wars. The top rate was 10 percent. The target was 10 million pounds; actual revenue came to 6 million. It was repealed in 1802. Reintroduced in 1803. Repealed again in 1816. Permanently institutionalized by Prime Minister Robert Peel in 1842, at a rate of 3 percent. From emergency introduction to permanent institution: forty-three years.
The income tax was a response to new forms of wealth created by the Industrial Revolution. Factory profits, financial returns. The land tax could not capture a factory owner's earnings. In 1700, 60 percent of British tax revenue came from the land tax. By 1914, it was 5 percent.
Income tax rose from zero to 40 percent. Estate duties from zero to 15 percent. The entire tax system was redesigned over 150 years.
Every new tax instrument was resisted as "radical." Every one eventually came to be accepted as natural.
A Tax Base That Depends on Labor Income
The same crisis is approaching in the AI era. Consider the structure of U.S. federal revenue for fiscal year 2023. Individual income tax: 49 percent. Payroll tax: 36 percent. Corporate income tax: 9 percent. Other: 6 percent. The share dependent on labor income: 85 percent. If AI replaces cognitive labor, 85 percent of the tax base is structurally eroded.
According to RAND, a large-scale automation scenario produces "dual tax base vulnerability." The income tax base shrinks — and because falling wages reduce consumption, the consumption tax base contracts simultaneously. At exactly the moment when spending on retraining and safety nets must rise, revenue falls.
The effective tax rate for American billionaires is 8.2 percent. For a median-income worker, it is 25 percent. The wealth share of the top 0.01 percent has doubled since 1980. Piketty's r > g is being accelerated by AI. In a structure where the return on capital exceeds the rate of economic growth, capital income increasingly evades income taxation.
The Robot Tax Question
In February 2017, Bill Gates made a proposal. "If a robot does $50,000 worth of work, it should be taxed at a similar level." That same month, MEP Mady Delvaux introduced a similar proposal in the European Parliament. It was voted down. Delvaux said: "They dismissed the concerns of citizens."
In August 2017, South Korea became the first country in the world to take a measure equivalent to a "robot tax." It reduced the tax credit for automation equipment investment by up to two percentage points. Not a direct tax — a subsidy reduction. After this modest measure, new robot installations in South Korea fell for the first time since 2012.
In October 2025, Senator Sanders released a report warning that, in a worst-case scenario, "AI and automation could destroy 100 million American jobs within a decade." Senator Graham's response: "Dead on arrival."
The income tax, too, received multiple death sentences. Introduced in 1799. Repealed in 1802. Reintroduced in 1803. Repealed again in 1816. Permanently established in 1842. The institution was born through repeated death and resurrection.
The Problem of Invisible Wealth
The eternal enemy of the tax inspector is not the tax evader — it is invisibility.
When Rome's censor walked the fields to assess the tributum, land was visible. When a Victorian tax official reviewed factory ledgers, machines and profits were measurable. How do you "see" the value AI creates? The value of Google's search algorithm. Meta's social graph. The productivity gains from an AI copilot.
The "Silicon Six" paid $155.3 billion less than statutory tax rates between 2010 and 2019. Meta, Apple, Netflix, Google, Amazon, Microsoft. The structure: intellectual property transferred to Ireland, Singapore, Luxembourg. Value created by American employees and American R&D, taxed in Ireland.
OECD Pillar Two is an attempt at a response. A minimum corporate tax rate of 15 percent for multinational enterprises with revenue exceeding 750 million euros. Most advanced economies began implementation in 2024. The United States did not. In January 2025, President Trump signed an executive order declaring that the prior OECD agreement "has no force in the United States absent Congressional action."
The Brookings Institution has proposed a three-phase AI taxation framework. In the near term, maintain the current system while closing loopholes. In the medium term, shift the tax base toward consumption taxes. In the long term, tax AI resource accumulation — compute and hardware. The framework likens taxing AI capital assets to taxing steel during the Industrial Revolution: a tax on a basic input of production, which is why it argues such taxes should be avoided in the near term.
South Korea's contradiction compresses this tension. In 2017, it introduced a robot tax — an incentive reduction. In 2025, it offers 4.23 trillion won in tax credits for AI R&D. One hand restrains automation; the other promotes AI investment.
South Korea's manufacturing robot density is 1,012 units per 10,000 workers — the highest in the world. The working-age population is projected to decline from 72 percent in 2020 to 55 percent by 2050. The tension between automation and taxation is nowhere more acute than in South Korea.
Across three eras, the migration of the tax base repeats. Rome moved from visible wealth — land — to invisible wealth — provincial trade. The Industrial Revolution moved from land to factory profits and financial returns. The AI era is moving from labor income to data, compute, and algorithmic value. Each time, the existing tax system failed to capture new forms of wealth. Each time, new tax instruments were invented.
4. Recognition Has Accelerated, but Action Has Not
A pattern runs through all three institutions — education, labor law, taxation.
The speed of problem recognition has accelerated dramatically. In Rome, it took roughly sixty years to recognize the dangers of the latifundia. During the Industrial Revolution, twenty-six years to recognize the problems of the factory system. In the AI era, less than one year. Four months after ChatGPT launched, Eloundou and colleagues published "GPTs are GPTs." From Rome to AI: a 120-fold acceleration.
The first nominal response has also sped up. In Rome, sixty-seven years to the Gracchan land law of 133 BC. During the Industrial Revolution, thirty-three years to the first Factory Act in 1802. In the AI era, one year to the U.S. executive order of October 2023. A sixty-seven-fold acceleration.
Recognition has accelerated. Action has not.
During the Industrial Revolution, the gap between nominal regulation (1802) and effective regulation (1833) was thirty-one years. In the AI era, as of 2026, at least three years have passed — and the gap remains open. The EU AI Act has been adopted, but full application is not until August 2026. In the U.S. Congress, more than one hundred AI bills have been introduced since 2023, but zero comprehensive laws have passed. Biden's AI executive order of October 2023 was revoked by Trump fifteen months later.
Four forces accelerate, and five decelerate.
Accelerating forces:
- Information speed — problems become visible in real time.
- Democratic procedures — universal suffrage is already established.
- International cooperation infrastructure — forty-six countries have endorsed the OECD AI Principles.
- Institutional muscle memory — regulatory design patterns have accumulated from GDPR, antitrust law, and beyond.
Decelerating forces:
- Regulatory capture — Big Tech's U.S. lobbying expenditure exceeds $70 million annually.
- Technical complexity — the EU AI Act's initial draft in 2021 was written before ChatGPT; by the time it was adopted in 2024, provisions for foundation models had to be hastily added.
- Global competition — in the U.S.-China AI race, regulation reads as a concession of leadership.
- Political polarization — legislative gridlock in the United States.
- Diffuse and uncertain harm — in the AI era, a "Sadler Committee moment" has not yet arrived.
Before the Sadler Committee in 1832, factory children testified. Parliament moved. In the AI era, millions of knowledge workers are being affected incrementally. Without a single galvanizing event to create a political tipping point, the momentum for comprehensive action remains weak.
Four policy models are in competition. The EU leads with regulation — a risk-based comprehensive framework. The EU AI Office launched with a staff of 140. That is thirty-five times the four factory inspectors of 1833, but the complexity of what must be overseen is incomparably greater.
The United States leads with innovation. At the federal level, there is no dedicated AI regulatory agency. The situation mirrors pre-1833 Britain. China leads with state direction — it enacted the world's first regulation specific to generative AI in August 2023, nine months after ChatGPT's launch.
South Korea is a hybrid. It has the highest AI adoption rate in the OECD, yet its comprehensive AI law did not take effect until January 2026. The highest adoption rate does not mean the earliest regulatory framework.
No model has yet achieved what the Factory Act of 1833 represented: "effective regulation." No country possesses all three elements — legislation, enforcement infrastructure, and demonstrated impact.
5. The Person Falling Asleep in Front of the Screen
Winter 2025. A law firm in Seoul.
A paralegal with fifteen years of experience sat at her desk. Her eight-hour shift was over. The office lights were off, but her laptop was open. She was working through the company's "AI upskilling course." Prompt engineering, legal AI tools, data analysis fundamentals.
A lecture video played on the screen. Her eyes closed. She forced them open. They closed again. The cursor blinked beside a paper cup of cooling coffee.
This scene carries an echo from 190 years ago.
In Chapter 10, we saw the half-time children. The Factory Act of 1833 limited working hours for children aged nine to thirteen to eight hours a day and mandated two hours of daily education. Records show that children who sat in classrooms after six hours of labor fell asleep. The education was nominal. From that nominal education to substantive education — the Forster Act of 1870 — took another thirty-seven years.
The paralegal in 2025 occupies the same position as the half-time child. After full-time work, there is no time for retraining. The cost comes out of her own pocket. What she should be learning is itself uncertain. Sixty-one percent of workers "consider" upskilling, but only 4 percent are actually pursuing it. A fifty-seven-percentage-point gap.
AI legal tools have begun to replace her core work. Case research, document drafting. Legal research time is being cut by 60 to 80 percent. She is learning to use the tool that will replace her more efficiently. The same structure the translator in Chapter 14 described — feeling that AI post-editing was "learning to make my own replacement more effective."
In the Industrial Revolution, the half-time child's problem was not laziness. It was a design flaw in the institution. The paralegal's problem in 2025 is not a lack of willpower. A structure that asks her to watch two hours of online lectures after a full day's work is every bit as nominal as the half-time system of the 1833 Factory Act.
Transition — What Has Not Yet Arrived
Education, labor law, taxation. One sentence runs through all three domains.
Technology opened the possibilities; capital and institutions determined the direction. The problem is that the institutions have not yet arrived. And until they do, the line between the Displaced and the Discerning remains drawn by forces no individual can control.
In Rome, the absence of institutional response led to regime change. When reform failed within the Republic, the Principate came. In Britain, gradual adaptation was possible. From the Factory Act of 1833 to universal suffrage in 1928, a century of incremental reform accumulated. In the AI era, the institutional path remains undetermined.
The cycle that runs through all three eras repeats. Technology detonates productivity. Capital concentrates. Social unrest grows. Institutions are redesigned. In Part 1, we saw Rome. In Part 2, the Industrial Revolution. In Part 3, the AI era.
Now it is time to ask the larger question. Does this cycle truly exist? If there is a formula that runs through all three eras, what does it tell us?
End of Chapter 15. Next: Chapter 16 — The Formula Across Three Eras: Technology, Capital Concentration, Social Unrest, Institutional Redesign