Vol. 6 — The Last Profession

Chapter 13 — The Last Profession: The Human Guarantee


1. Same Day, Four Mornings

Wednesday, June 17, 2026.

Hai Phong, 5:40 a.m. Lee Jung-hoon (53) stands in front of the factory gate listening to the sound of a motorbike engine. Nguyen has arrived first. The first time Lee Jung-hoon stood before this gate a year ago, Nguyen said nothing. Now he holds out a coffee and smiles. Vietnamese sweetened condensed milk coffee.

The third press on the line needs a die change today. Lee Jung-hoon knows by sound — since yesterday afternoon it has been off by about 0.2 millimeters. The sensors haven't caught it yet.

Nguyen opens his notebook. The third one. The first time Lee Jung-hoon described an anomaly by the sound of the machine, Nguyen tilted his head. Now he picks up his pen and waits. What 28 years of ears transmit, 18 months of hands transcribe.

On the wall, a poster glows in the morning light: "Smart Factory Implementation Plan 2027." It was there a year ago too. Time is moving.

Pangyo, 8:50 a.m. Kim Su-jin (44) sits at the window seat of the fintech office and opens a file with a red label. AI Rejection cases. Fourteen have stacked up by this morning.

A year ago, when she first sat here, more than twenty arrived per day. As AI 4.5 has learned more unstructured data, the number reaching Kim Su-jin's desk has been falling. The gap is narrowing. But the remaining fourteen are harder than last year's twenty. The easy ones have already been taken by the machine.

Eunpyeong-gu, 5:15 a.m. Choi Eun-jeong (52) walks the third-floor corridor of the care facility. She stops in front of Room 302. The sky is overcast. On overcast days, the grandmother looks for her husband.

She opens the door. The grandmother is sitting on the edge of the bed with her slippers on. She had been about to go out. Choi Eun-jeong sits beside her and takes her hand. It is cold. "Grandma, your husband just stepped out for a moment. He'll be right back." Seven years of the same words. The weight of those words has never changed.

Pangyo, 9:10 a.m. Yun Seo-yeon (29) opens her laptop in an AI startup conference room. Today's agenda: setting the scope of customer-service coverage for a new AI agent. How much to delegate to AI, and where humans must intervene.

For Yun Seo-yeon, this question is a matter of adjusting technical parameters. For her generation, AI is environment, not tool. When Jeong Min-ho (45) in the next seat draws on 20 years of industry experience and says "this is something a person has to do," Yun Seo-yeon asks: "Why?" That single word is the river running between generations. Jeong Min-ho thinks he knows the answer. Yun Seo-yeon thinks there is no answer. Both may be right.

What the four people are doing on the same morning is entirely different. Yet all four stand before the same question. The boundary between what machines can do and what people must do — confirming one's own position in a world where that boundary shifts a little each day.


2. The Capability Frontier Retreats Every Year

In 2013, Carl Benedikt Frey and Michael Osborne published their finding that 47 percent of U.S. employment, spread across the 702 occupations they analyzed, faced high automation risk. The paper became the founding text of the automation debate. Their criterion was simple. Occupational safety was defined by what machines could "not do." Three bottlenecks — perception and manipulation, creative intelligence, social intelligence — were the defensive line.

Eleven years have passed. The defensive line has nearly collapsed.

GPT-4 scored in the top 10 percent of U.S. bar exam takers. AlphaFold surpassed human experts in protein structure prediction. An image generated with Midjourney took first place in the digital art category at the Colorado State Fair — in the domain Frey and Osborne called "creative intelligence." Eloundou et al.'s 2023 analysis found that 80 percent of U.S. workers are in occupations with at least 10 percent of their tasks exposed to large language models. Exposure was highest in high-income, high-credential occupations.

This is what makes it different from past automation. The industrial revolution replaced manual labor from the bottom up. AI penetrates cognitive labor from the top down. Frey and Osborne's defensive line was not collapsing from below to above — it was collapsing from above to below.

IBM Watson for Oncology embodies this paradox. In 2015, Watson drew attention as a cancer treatment recommendation system. MD Anderson Cancer Center invested $62 million. In some cases, Watson's recommendations matched physicians' at a rate above 90 percent. But the project was terminated. The problem was not accuracy. It was that Watson could not explain why it had made a particular recommendation in the remaining 10 percent of cases. Not a capability problem — an explanation problem, and through explanation, a trust problem.

Daron Acemoglu and Pascual Restrepo offered an alternative frame in 2019. Automation displaces existing tasks, but simultaneously creates new ones. Historically, in most technological transitions the creation effect offset or exceeded the displacement effect. But in a 2024 follow-up study, Acemoglu warned that with AI, simultaneous displacement across cognitive labor as a whole may not allow time for new task creation to catch up.

Richard Susskind and Daniel Susskind forecast the dismantling of the professions. The professions rest on information asymmetry — doctors, lawyers, accountants exist because they possess specialized knowledge inaccessible to laypeople. If AI resolves that asymmetry, the basis for the professions disappears. A powerful analysis, but one that misses something. The professions do not rest on information asymmetry alone. They also rest on the delegation of trust. A patient does not merely obtain a diagnosis from a physician — the patient designates a human being who will be accountable when the diagnosis proves wrong.

David Autor pointed in yet another direction. AI can become a tool for expert judgment but cannot become a substitute for it. The reason: expert judgment is context-dependent, rests on tacit knowledge, and carries responsibility for the consequences of decisions. Yet even Autor acknowledges that the precise boundary of expert judgment is determined not by technology but by institutional context. The reason a physician delivers the final diagnosis is not that AI's capability falls short — it is that the medical system requires a human physician's signature.

The IMF's 2024 report occupies an interesting middle position. Roughly 60 percent of occupations in advanced economies are exposed to AI, of which half may see productivity gains from AI integration and the other half face displacement risk. Yet the report evades the central question: who decides, and by what standard, where the line between "complement" and "replace" falls.

Define "the last profession" by the capability criterion, and that definition must be revised each time the next model is released. It is not a definition — it is a countdown. Translation, legal document review, medical imaging, basic coding — all of them "safe" in 2013 and already targets of automation by 2024.

A different frame is needed.

Chapter 11 showed that transition. Define occupations by what AI "cannot do," and the definition collapses as the capability frontier retreats. Define occupations by what society will "not assign" to AI, and the standard belongs not to technology but to society.

The capability criterion is a sandcastle — it collapses with every wave. The trust criterion is a seawall — it lets waves in only up to the line society has agreed upon. The position of the seawall is set not by technology but by society.

So what standard does society use to set the seawall's position? The next section derives that standard from the intersection of two thinkers.


3. Arendt and Luhmann — The Generative Logic of the Four Domains

Why Judgment, Trust, Care, and Meaning? This classification is not an arbitrary list. It is derived structurally from the intersection of two thinkers.

Hannah Arendt, in The Human Condition (1958), distinguished three modes of the vita activa. Labor is repetition for survival — eating, washing, cleaning, eating again. The products of labor vanish the moment they are consumed. Work is the activity of constructing a durable world — building houses, making tools, writing code. Action is the activity of beginning something new in the presence of others. Action is unpredictable, irreversible, and possible only when different finite beings are present together. Arendt called this natality — the capacity to begin what has not existed before.

In the AI age, Arendt's tripartite division takes on new meaning. Labor is automated first and most broadly — data entry, inventory management, basic customer service. Work is being partially replaced — Midjourney makes images, GPT-4 writes code. As generative AI encroaches on Arendt's domain of work, the sense of connection to the world that comes from building durable things is itself eroded.

But action — disclosing oneself before others, making promises, judging, forgiving — presupposes plurality. It requires different finite beings to be present together.

What Arendt called natality — the capacity to begin what has not existed before — holds meaning only because the one who begins is finite. It has weight only because others exist who will live through the consequences of that action together. Algorithms are not finite. They neither begin nor end. They therefore cannot be the subject of Arendt's action.

Niklas Luhmann, in Trust and Power (1979), distinguished two types of trust. System trust is trust in institutions — we trust that currency will hold its value tomorrow, that courts will enforce contracts. This trust is not about specific human beings. Blockchain attempts to replace third-party trust with code; credit scores replace a loan officer's subjective judgment with numbers. System trust is replaceable by algorithms.

Personal trust is different. "This physician will treat me." "This lawyer will represent me." "This person will stay by my side." The core of personal trust is mutual vulnerability — I trust this person because this person is also vulnerable to the consequences of betrayal. Damaged reputation, legal liability, pangs of conscience. Algorithms are not vulnerable. They do not betray, do not regret, do not lose reputation. They therefore cannot be objects of personal trust.

Intersecting Arendt's action and Luhmann's personal trust, four domains are derived. What a human being can do in the presence of others comes down to four things — deciding the fate of another (Judgment), vouching for oneself to another (Trust), responding to another's vulnerability (Care), or constructing a narrative while sharing finitude with another (Meaning). These four modes exhaust the logical possibilities of action.

Of course, other theoretical frames yield other classifications — Habermas's theory of communicative action, MacIntyre's virtue ethics. The reason Book 6, The Last Profession, adopts Arendt and Luhmann is that this combination explains simultaneously "why AI finds these hard" and "why society will not assign them." Capturing the capability problem and the trust problem through a single lens — this is the analytical advantage of the intersection.


4. The Four Domains — Judgment, Trust, Care, Meaning

Judgment. Decisions where errors cause death, lost freedom, or lost assets.

As of 2024, the FDA had approved more than 950 AI medical devices. Virtually all are classified as "assistive tools"; in practice, none holds independent diagnostic authority. Even the sole exception, IDx-DR, is limited to binary screening for diabetic retinopathy alone, and treatment decisions remain with ophthalmologists.

Why. Even if AI is more accurate, there is no answer to the question "who is responsible" when it misdiagnoses. A physician can be disciplined, can lose a license, can stand in court. An algorithm suffers none of these consequences. Only those who can be held accountable have the standing to judge.

Korea's Lunit has reached world-class standing in AI-based medical imaging analysis. It has received European CE certification and FDA approval for chest X-ray interpretation assistance and mammography analysis, and is in use at Asan Medical Center and Severance Hospital in Seoul. But here too, the key word is "assistance." Of the more than 200 AI medical devices approved by Korea's Ministry of Food and Drug Safety, not one holds independent diagnostic authority. In a survey by the Korean Society of Radiology, roughly 70 percent of specialists were positive about AI assistive tools, but fewer than 15 percent supported independent AI interpretation.

Autonomous weapons show this structure in its most extreme form. Under the UN Convention on Certain Conventional Weapons framework, discussions on autonomous weapons regulation have continued since 2014, but as of 2025 no binding treaty has been concluded. On one side sits the technical argument that AI can distinguish targets more precisely than humans and therefore reduce civilian casualties. On the other sits the ethical argument that delegating kill decisions to machines is a fundamental violation of human dignity. In its 2023 revision of autonomous weapons policy, the U.S. Department of Defense reaffirmed the principle that "humans must remain in the loop." Even when technically possible, what society will not permit — this is the most durable defensive line in the domain of Judgment.

Gary Klein's theory of Naturalistic Decision Making demonstrated that expert judgment is not formal analysis but a combination of pattern recognition and simulation. A firefighter looks at a situation and immediately recognizes "this resembles something I've seen before." AI can surpass humans in pattern recognition. But in exception situations where the pattern breaks, judgment depends on experience-based tacit knowledge. This is Lee Jung-hoon catching a 0.2-millimeter misalignment by ear. What 28 years of ears catch, the sensors have not yet caught.

Trust. Relational transactions that require a human guarantor.

When the Medici bank's appraiser in Book 3, The Invisible Hand's Last Trade, signed the documents of a Bruges merchant, that signature staked the Medici bank's reputation. Five hundred and fifty years later, when Kim Su-jin approves an AI-rejected case, the same structure operates. The label "approved by Kim Su-jin" has value only inside a structure in which her reputation is damaged when her judgment proves wrong. Algorithms do not lose reputation. Humans do. That capacity for loss is the essence of the guarantee.

The value of personal trust rises with the magnitude of the stakes. Small transfers go to apps; ₩10 billion in assets goes to a person. Minor disputes are handled by algorithms; cases in which constitutional rights collide go before human judges.

In Korean society, trust has a different structure from the West. Trust networks based on educational ties, regional ties, and family ties — so-called yeongo — have functioned as a complement to, and sometimes a substitute for, formal institutions. The trust in "a lawyer introduced by a Seoul National University senior" and the trust in "a lawyer recommended by AI" are qualitatively different. The former is personal trust; the latter is system trust. In Chapter 12, what operated when Kim Su-jin read the file of Lee Sun-ja (60) was exactly this structure — ten years as merchants' council treasurer was not a variable in the algorithm's model, but it was evidence of relationship-based trust.

Diplomacy is where this structure appears in sharpest relief. As of 2025, no state has dispatched an AI ambassador, and no international treaty permits AI as a signatory. What prevented nuclear war during the 1962 Cuban Missile Crisis was unofficial back-channel contact between the United States and the Soviet Union. What produced agreement between Sadat and Begin at the 1978 Camp David Accords was President Carter's thirteen days of sequestered negotiation. Even if AI could derive the optimal settlement, what makes both sides accept it belongs to the domain of human relationship. The price of trust is set at the boundary between routine and exception.

Care. The domain in which the relationship itself is the service.

Choi Eun-jeong sitting beside the grandmother in Room 302 and taking her hand is not function — it is relationship. According to Nel Noddings's care ethics, care is complete when the one being cared for perceives "this being is caring for me." An AI monitoring system knows the grandmother's blood pressure and heart rate. Choi Eun-jeong knows the grandmother's loneliness. That difference is everything.

Japan is the country that has most aggressively adopted eldercare robots. Since launching its "Robot Care Device Development Promotion Project" in 2013, roughly 8,000 facilities had introduced robotic devices as of 2024. PARO — the therapeutic robotic seal — is used in 30 countries. Yet after more than a decade of investment, not a single case has been reported of a robot "replacing" a human care worker. Robots remain in the domain of physical assistance — mobility lifts, monitoring sensors. At the same time, roughly 68,000 solitary deaths occur in Japan each year. Even when technology provides the functions of care, if it cannot provide the relationships of care, solitary deaths do not decline.

₩12,000 per hour. The most irreplaceable work is the most undervalued. OECD research finds that working in a care occupation imposes a 15 to 25 percent wage disadvantage relative to non-care occupations of equivalent education and experience. Roughly 90 percent of Korean eldercare workers are women, with an average age in the mid-fifties. Those who provide care are themselves entering the age at which care is needed.

Korea entered super-aged society in 2025 — the population aged 65 and over exceeding 20 percent. From aged to super-aged: seven years. France took 39, the United States 15, Japan 10. Long-term care recipients are projected to grow from roughly 1.1 million in 2024 to roughly 1.5 million by 2030. The roughly 550,000 to 600,000 active eldercare workers cannot meet that demand. A care gap is opening.

Acknowledging this contradiction directly must come before any prescription.

Meaning. Value arising from shared finitude.

The more AI produces content without limit, the more scarcity value attaches to "something made by a finite human being who spent time on it." Etsy's handmade goods market maintains roughly $13 billion in annual sales despite the explosion of AI-generated content.

The U.S. Copyright Office refused copyright registration for images generated purely by AI. The core rationale was not that AI quality was insufficient. It was the institutional premise that copyright presupposes "a human author." Not a capability problem — a problem of social consensus.

Religious ritual shows this structure most durably. Buddhist chanting, Islamic salat, Catholic Mass — no technological revolution has removed the human from these rites. The printing press made scripture universal but did not replace priests; radio broadcast sermons but did not replace pastors. The essence of ritual is not the transmission of information but the act of finite beings collectively affirming their finitude.

Viktor Frankl located the ultimate source of meaning in the attitude a finite being chooses in the face of suffering. On Beethoven's final string quartet (Op.135) is written: "Muss es sein? Es muss sein!" — Must it be? It must be. The weight of that question and answer left by a human being facing death cannot be contained in any music AI generates. AI does not suffer. It can therefore be an instrument of meaning but cannot be a source of it.

The four domains are not partitions — they overlap. A physician's diagnosis is simultaneously Judgment and Trust; hospice nursing is simultaneously Care and Meaning. A teacher's class is Judgment (assessment), Care (supporting growth), and Meaning (transmitting knowledge) at once.

This may be why religious leaders have survived every technological transition — theirs is one of the rare occupations in which Judgment, Trust, Care, and Meaning all intersect. The more the four domains overlap, the stronger the resistance to displacement. An occupation belonging to only one domain is vulnerable; an occupation where all four overlap is durable.


5. Rebuttals — AI Therapists, AI Judges, AI Art, the Generational River

For this framework to hold, it must not evade counterarguments.

Examining the evidence most damaging to one's own thesis first is intellectual honesty. Four counterarguments that challenge the four domains of "the last profession" are addressed directly.

First: AI therapists. Woebot, a chatbot based on cognitive behavioral therapy, has been used by millions. USC's virtual interview system Ellie drew more candid responses from soldiers in PTSD screening than human interviewers — because the cost of showing weakness to a human did not apply to an AI.

But what Woebot performs is not Care — it is the technical sub-function of Care. Delivering CBT techniques is a tool's role. In a large-scale 2024 clinical trial, Woebot outperformed no intervention at all, but its effect size fell short of in-person CBT with a human therapist, and in severe depression the effect was particularly limited. In Noddings's frame, care is complete when the one being cared for feels "this being is caring for me." Most Woebot users feel they are using a useful tool. Tool and Care are different.

Ellie's case is more subtle. That soldiers were more candid with an AI is not because AI cares better — it is because the cost of showing vulnerability to a human is too high. The solution is not AI; it is building human relationships where showing vulnerability is safe. Britain's AI therapy chatbot Wysa positions itself as "support during the wait to see a human therapist" — expanding access, not replacing Care, is the only defensible claim.

Second: China's AI judges. Since 2019, China has operated an AI judge system. The structure in which AI drafts rulings in routine civil cases while human judges sign has become standard. A 97 percent accuracy rate is claimed — but independent verification has not been performed.

The point is not accuracy. What AI handles are cases with clear facts and mechanical legal application. This is not the domain of Judgment — it is the domain of processing. AI judges are not used in cases where constitutional rights collide, where there is no precedent, or where human considerations enter sentencing. More fundamentally, in criminal cases defendants hold the right to be tried by human judges — a basic right in most constitutional systems.

Estonia introduced an AI system for small claims in a democratic context in 2019, limited to disputes below €7,000, with dissatisfied parties retaining the right to appeal to a human judge. Unlike China, the pace of expansion has been extremely slow. When the regime differs, social acceptance of the same technology differs.

America's COMPAS algorithm was found to have a false-positive rate for Black defendants roughly double that for white defendants, and the Wisconsin Supreme Court held that the score cannot be used as the "sole basis" for sentencing. The logic of the court's ruling matters. The ground was not algorithmic accuracy — it was the defendant's due-process rights. Not a capability problem — an institutional legitimacy problem.

Third: AI art. Midjourney, DALL-E, and Suno generate high-quality images and music from text. They are already displacing human creative work in commercial illustration and stock image markets.

But "commercial image production" and "art" are different things. What AI is displacing is not art — it is the commercial sub-function of art. Stock images, background music, advertising copy — these correspond in Arendt's framework to work. Activities that construct a durable world, but ones that do not presuppose the presence of a finite being.

If art's value lies in visual quality, AI has already surpassed humans. If art's value lies in "a finite being drawing something from its own experience and conveying it to another finite being," AI cannot participate in that process. AI does not experience. It does not suffer. It does not die. AI can therefore write a perfect sonnet — but cannot put its own death into that sonnet.

Fourth: the generational blind spot. This is the most uncomfortable counterargument. The first three challenged AI's technical limits. This one is different. The argument is that not technology but society itself is changing.

Character.ai recorded 28 million monthly active users in 2024. More than half are Gen Z, averaging 75 minutes per day in conversation with AI characters. 72 percent of American teenagers have used an AI companion. More than 85 percent of Replika users report feeling emotional attachment to an AI. The very premise that "society will not assign this" may be the consensus of only the current generation — and not the next.

But the other side of this phenomenon must be read. 90 percent of Replika users report loneliness — meaningfully higher than the national average of 53 percent. AI companions may not be resolving loneliness but may themselves be a symptom of loneliness. The absence of real Care relationships creates demand for simulation.

In February 2024, fourteen-year-old Sewell Setzer III died by suicide following interactions with a Character.ai chatbot. His mother filed a wrongful-death lawsuit. In 2025, another family of a thirteen-year-old girl filed a similar suit. Character.ai banned chat for minors entirely. Expanded acceptance produces tragedy, tragedy triggers regulation, and regulation redraws the boundary. Social negotiation does not proceed in one direction.

Does the AI friend of one's twenties extend naturally to an AI caregiver in one's forties, and an AI hospice companion in one's sixties? Or is the AI preference of one's twenties a substitute for the period when skills for forming human relationships are still developing — and does demand for human connection recover as the life cycle advances? Character.ai emerged in 2022. It will be 20 more years before its primary user base reaches their forties. Whether generational preferences represent permanent change or a lifecycle phenomenon cannot be determined at this point.

The core contribution of this counterargument is to make the thesis of Book 6 more humble. If the boundary of "the last profession" shifts across generations, that is not a refutation of the framework — it is a confirmation. What decides the boundary is not technology but society, and society changes as generations change. The boundary moving and the boundary disappearing are different things.


6. Social Negotiation of the Boundary — Not a Fixed List, but a Negotiated Line

"Who may practice medicine" has never, historically, been a pure question of capability.

The medieval ordeal — dipping a hand into boiling water to determine guilt — was a case of delegating judgment to a supernatural "technology." The Fourth Lateran Council of 1215 abolishing the ordeal was a turning point that returned judgment to human hands. In the 800 years since, legal text search has been digitized and case analysis automated — yet the ruling itself is still delivered by a human judge. Technology changed the tools but could not change the subject of final judgment.

The British Medical Act of 1858 established the first modern physician licensing system. Its explicit purpose was to protect the public from unqualified practitioners, but its practical effect was to grant a medical monopoly to university-trained physicians. Midwives and herbalists were excluded not because their capability was insufficient but because they were pushed out of the institutional settlement. What determined who was a physician was not pure competence but social consensus — and the power structure within that consensus.

The same structure repeats in the AI age. Even though GPT-4 scored in the top 10 percent of bar exam takers, no jurisdiction has granted AI a legal license. The role of a lawyer encompasses not only the application of legal knowledge but also the trust relationship with the client, courtroom representation, the duty of confidentiality, and the possibility of discipline. AI satisfies none of these — not because its capability falls short, but because the institutional structure presupposes a human being.

Autonomous driving shows this structure in sharpest relief. Waymo and Cruise deployed the same technology, at the same period, in the same country. Waymo expanded; Cruise was effectively dissolved after a pedestrian accident. The failure of Cruise was not a technological failure — it was a trust failure. When it emerged that the accident circumstances had not been fully reported to regulators, the CEO resigned and GM terminated its independent robotaxi business. The same technology, different social acceptance. City by city, regulator by regulator, citizen response by citizen response — the answer to "should this be delegated?" differed. This is the essence of social negotiation — not uniform, locally situated, and continuously renegotiated.

The EU AI Act came into force in August 2024. It classified AI in the domains of justice, healthcare, employment, and education as high-risk and imposed human-oversight obligations. The FDA has approved more than 950 AI medical devices, all classified as assistive tools. Korea's AI Basic Act (인공지능 기본법) mandated transparency obligations for high-impact AI. Regulators on three continents, independently, reached the same conclusion: however far AI's technical capability advances, human beings must remain present in high-stakes decisions.

Citizens' attitudes support these regulations. In a 2023 Pew Research survey, 60 percent of Americans said they were uncomfortable with AI use in healthcare; 75 percent said they wanted a human physician to confirm even an AI diagnosis. Regulations do not arise in a vacuum. Citizens' anxiety creates regulation, and regulation institutionalizes the boundary.

A paradox appears here. At precisely the same speed at which the capability frontier retreats, society's regulatory response is intensifying. The better AI becomes, the more clearly regulation declares that "humans must remain in the loop." This is incomprehensible on the capability criterion. It is comprehensible only on the trust criterion. What society chooses to delegate is not a function of capability — it is a function of trust and accountability.

Layered on top of this is the problem of asymmetric transition costs. The commonsense assumption — "try delegating to AI, and if it doesn't work, return it to humans" — ignores a structural trap. Entry is low-cost; reversal is high-cost.

Air France Flight 447 is the tragic evidence. June 1, 2009, over the Atlantic. As the pitot tubes of the Airbus A330 flying from Rio de Janeiro to Paris iced over and the autopilot disengaged, pilots habituated to automation failed at basic stall recovery in manual flight. In three minutes and thirty seconds, the aircraft fell from 38,000 feet to the ocean surface. 228 people died.

The paradox of automation eroding rather than assisting human capability. The autopilot was not maintaining pilots' everyday flying skills — it was gradually degrading them. The same structure may operate in medicine, finance, and justice. Can a physician who has grown accustomed to AI-assisted diagnosis deliver independent judgment the moment AI fails to function? This question makes society conservative.

Once delegated, reversal is extraordinarily difficult. That conservatism is the practical protection for "the last profession."


7. Formula Checkpoint — The Social Negotiation of the Last Profession

Read through the series' foundational formula, this chapter looks like this:

Technological innovation (AI) → Concentration of capital (Big Tech) → Social instability (job displacement and identity crisis) → Individual adaptation + Informal institution-building → Formal institutional redesign

Book 6's variation lies in the fourth stage. With formal institutional redesign not yet complete, individuals and informal networks are first experimenting with the boundary. The social negotiation over the boundary of "the last profession" is itself individual-level institutional redesign.

Reading what Lee Jung-hoon is doing in Hai Phong through this formula: the technological innovation of AI was realized as Hyundai Motor's quality prediction system, and that system absorbed 28 years of experience in nine months. Capital concentrated in more efficient technology, and Lee Jung-hoon was pushed aside. Social instability — 22 months of a chicken franchise, half the investment capital lost, a father who cannot cover his daughter's tutoring. Formal institutions — the Tomorrow Learning Card (Korea's public reskilling subsidy), Employment Insurance — provided no practical path for a 53-year-old manufacturing retiree.

Lee Jung-hoon's adaptation was to change position. The 28 years of sensory knowledge classified as "legacy data" in Korea became "essential knowledge" at a Vietnamese factory in the pre-automation stage. Individual adaptation.

And Park Sang-ho's 12-person network — the informal institution we saw in Chapter 9 — extended Lee Jung-hoon's adaptation into a path for others who had been pushed aside. Informal institution-building. Just as collegia filled the 59-year gap between Crassus and the vigiles, Park Sang-ho's network appeared first in the place where formal transition-support institutions were absent.

Reading what Kim Su-jin does at the fintech through the same formula: AI credit assessment absorbed the routine of loan review, and Kim Su-jin's 20-year career became an existence recorded as "AI divergence: 1 case." Her adaptation was to change her niche — discovering value that only humans can read in the places AI has rejected. 132 reviewed, 47 approved, 4.2 percent delinquency rate. Those numbers became the reputation of judgment.

What Choi Eun-jeong does at the care facility every day is the quietest form of this formula. Choi Eun-jeong has not changed position, has not changed her niche. She has been walking the same corridor at the same hour for seven years.

But reading the grandmother's overcast days in Room 302, remembering the pumpkin porridge, being called by the name "Eun-jeong" — all of it is realizing "the last profession" in the domain of Care. Formal institutions — the long-term care insurance benefit schedule — convert the value of this work to ₩12,000 per hour. Society knows that conversion is wrong. But correcting it is not a matter of changing the number — it is a matter of redesigning the value-measurement system itself.

Yun Seo-yeon stands in the fourth position. She is not among the displaced. She is a designer. Setting the scope of an AI agent's customer service — "up to here is AI, from here is human" — is Yun Seo-yeon's work. She is the person who technically implements the boundary of "the last profession" every day. But where to draw that boundary is determined not by technology but by value judgment. The designer is ultimately a translator of social consensus.

The four people are, in different domains, in different ways, applying the boundary of "the last profession" to their own lives. Lee Jung-hoon in the domain of Judgment, Kim Su-jin in the domain of Trust, Choi Eun-jeong in the domain of Care, Yun Seo-yeon in the domain of Meaning — each voting, through their occupational choices, on what to leave as human. This is not self-improvement — it is structural action.


8. Crassus's Fire Brigade and the Human-Guarantor Worker

First century BCE, Rome.

Marcus Licinius Crassus organized Rome's first fire brigade. But when fire broke out, his brigade first demanded that the building's owner agree to sell at a knockdown price. If the owner sold, the fire was extinguished; if he refused, the brigade watched the building burn. When we first saw this story in Book 1, it was a warning about the privatization of public services: the judgment of fire-fighting, of whom to save, determined not by trust but by capital.

From Crassus to Augustus's vigiles — the first public fire brigade — 59 years passed. In that gap, Roman citizens formed collegia — mutual-aid associations organized by trade and neighborhood. Informal institutions appeared before formal institutions. In Chapter 9, Park Sang-ho's 12-person network was the modern edition of this.

Now one more chair is placed beside these.

In a world where AI can perform all tasks, there remains an occupation in which the fact that "a human does it" is itself the value. Unlike Crassus, who capitalized fire-fighting, these workers do not capitalize trust; trust is the service itself. Lee Jung-hoon listening beside Nguyen, Kim Su-jin reading the person behind the documents in Lee Sun-ja's file, Choi Eun-jeong holding the grandmother's hand: all three stand on the same structure, an occupation in which "this person being here" is the core component of the service.

In Crassus's era, fire-fighting was technically possible. The question was who did it, and by what standard. In the AI era, the question is the same. Technology is possible. The question is who decides, and by what standard. The 59-year gap between Crassus and the vigiles resembles the gap between AI and the formal institutionalization of "the last profession." We stand inside that gap.

We have returned to where the series began in Book 1. Technological innovation breaks down the existing order; capital takes its profit from that opening; social instability follows; institutions are redesigned. Book 1's land, Book 2's space, Book 3's capital (The Invisible Hand's Last Trade), Book 4's institutions (Slow Justice, Fast Order), Book 5's national strategy (The Strategy of the In-Between), and now Book 6's individual choice.

The last place the formula that runs through the entire series applies is not a grand structure but one person's morning. When the farmer who lost his land in Book 1 moved to the city, it was the microscopic expression of a macroscopic structural shift. When Lee Jung-hoon moves to Hai Phong in Book 6, it is the same expression of the same structure. Only the name of the technology has changed. The steam engine has become AI, land has become data, the factory has become the algorithm. The structure by which people are pushed aside has not changed.

In Book 5, The Strategy of the In-Between, we saw ASML's indispensability premium: a P/E of 38 against Samsung's 8 to 12. ASML was the only company in the world that could make EUV lithography equipment. At the national level, indispensability creates value.

The same structure applies to individuals. Lee Jung-hoon's indispensability is not protected by patents or export controls. But the person who can distinguish the anomalous sounds of both Korean servo presses and Vietnamese mechanical presses, while also holding the relational network that trusts that judgment — that person is Lee Jung-hoon alone. The form differs, but the structure by which indispensability generates a premium is the same.

"The last profession" is not a list of occupations. It is a criterion: in a world where AI can perform all tasks, what society still chooses to leave to human beings. The standard for that choice is not capability — it is trust. That is what this chapter has shown.

The boundaries of Judgment, Trust, Care, and Meaning are not fixed. Society negotiates them, generations shift them, tragedies revise them, regulation institutionalizes them. The boundary moving and the boundary disappearing are different things. The boundary does not disappear — because the structural fact that a gap exists between technological possibility and social acceptance does not change as technology advances.


9. Epilogue Transition — Hai Phong, One Year Later

We return to Lee Jung-hoon's Hai Phong.

A year has passed. Nguyen's notebook is in its third volume. Informal consultation requests have come in from two neighboring factories. Of the 12 people in Park Sang-ho's network, four — including Lee Jung-hoon — are working in Hai Phong and Ho Chi Minh City. Individual adaptation has begun to become another person's path.

But the poster is still on the wall. "Smart Factory Implementation Plan 2027." Lee Jung-hoon knows — this niche is not permanent either. Vietnam's automation will come someday. When it does, Lee Jung-hoon will stand before another choice.

Kim Su-jin's 14 cases will become ten next year. The year after, seven. The gap is narrowing. But the gap narrowing and the gap disappearing are different things. As the number of remaining cases falls, the difficulty of each case rises and the value of human judgment goes up. Kim Su-jin's strategy lies not in volume but in density.

The grandmother in Choi Eun-jeong's Room 302 turns 84 this year. Next year there will be more overcast days in Room 303 too, and Room 305. As Korea enters super-aged society, Choi Eun-jeong's work grows heavier, but the hourly rate barely changes. Demand is exploding, yet the gap between value and price is not narrowing.

This is not a happy ending. It is ongoing.

In Chapter 12 we drew the map of invisible assets. In Chapter 13 we placed the coordinates of "the last profession" on that map. In the Epilogue, we see the people standing on those coordinates one year later — those running their own hypotheses in a post-fourth-explosion world.

"What in your occupation is the part whose value is that a human does it?"