Vol. 6 — The Last Profession

Chapter 7 — The Designer's Eye: From Execution to Design


1. Yun Seo-yeon's Morning

Spring 2025, Pangyo.

She was awake before the alarm sounded. Yun Seo-yeon, twenty-nine, picked up her phone from the nightstand and checked her Slack notifications. Three draft market-analysis reports generated overnight by the AI assistant, one summary of competitor activity, and one internal project dashboard — all waiting.

She read the first draft while loading a coffee capsule and pressing the button. A SaaS market trend report, assembled by the AI. The numbers were accurate. The structure was not bad. But the direction was wrong.

What the client wanted was not a panoramic view of the market but a map of the gap their product could slip into. Yun deleted two-thirds of the AI draft and typed in a prompt to rearrange the remaining data.

Forty-seven minutes from her rented room in Seongsu-dong to the Pangyo office. The second draft arrived on the subway. This time the direction was right. She made three edits and posted it to the team channel. The report was done before she reached the office.

She graduated from Seoul National University with a degree in cognitive science. She works as a product manager at an AI startup with 40 employees. Annual salary ₩52 million. ₩24 million in student loans outstanding.

She has never done her job without AI. University assignments were completed alongside AI tools, and on the first day of her first job the AI tools were already sitting on the desk.

Silencing the alarm, brewing coffee, and reviewing an AI draft occupy the same level of routine. There is no hierarchy among those three actions. She opens an AI draft the way she washes her face; she enters a prompt the way she eats a meal.

That is the strength. She perceives AI not as a threat but as an environment. The way water is invisible to a fish — for Yun Seo-yeon, AI is not a matter of conscious choice but a baseline condition of work.

In Chapter 3, Jeong Min-ho was crushed by the weight of time left over after the AI copilot finished his report. For Yun Seo-yeon, the concept of "time left over" does not exist. While AI generates a draft, she is already designing the next problem.

And that is also her blind spot. She has never exercised judgment without AI. The meaning of that blind spot becomes visible at the end of this chapter.


2. From Execution to Design

Through Chapter 6 we watched the landscape of displacement. Lee Jung-hoon's one-way ticket, Kim Su-jin's emptying drawer. Being pushed out and being hollowed out.

Part II's question was: what disappears?

Part III's question is different. What remains?

To understand what remains, we must first look precisely at what disappears. In Chapter 4 we analyzed that it is not professions but tasks that are replaced. It is not the radiologist's profession that disappears — it is the task of reading images that migrates to AI. It is not the attorney's profession that disappears — it is the task of reviewing contracts that is automated. It is not the architect's profession that disappears — it is the task of drafting plans that passes to AI.

When a task disappears, what is left in its place?

Design is left.

The physician becomes someone who designs what an AI diagnostic system examines and in what sequence patients are seen. The translator becomes a localization director who reconstructs AI translation drafts to fit cultural context. The attorney becomes a legal strategist who decides which risk clauses extracted by AI to negotiate and which to treat as grounds for walking away from a deal.

The accountant becomes not a tax filer but a financial judgment strategist who interprets AI-generated financial analysis and connects it to management strategy.

The developer moves from someone who writes code to a supervisor who reviews AI-generated code and designs system architecture. GitHub has reported that in files where Copilot is enabled, roughly 46 percent of the code is AI-generated. From writing code to reading and judging code. The verb has changed.

There is a structure running through this transition. It is the movement from the execution layer to the design layer.

The design layer is the combination of three capacities.

Directional Judgment — the ability to define what needs to be built and what problem needs to be solved. AI excels at "how," but "what" and "why" remain human territory.

Frontier Mapping — the ability to read the boundary between what AI does well and what it does badly.

Context Integration — the ability to incorporate into judgment the tacit knowledge, relational context, and cultural nuance that AI cannot access.

What Yun did that morning was exactly this. She judged that the AI draft's direction was wrong — Directional Judgment. She knew AI organizes the whole market well, but she also knew this client needed a map of the gap — Frontier Mapping. She redesigned the report taking into account the client's decision-making context, internal politics, and budget cycle — Context Integration.

She did not write the report. She designed the report's direction.

This is Part III's point of departure. Between displacement and hollowing-out, there are people who found a different path. The name of that path is the design layer.


3. The Centaur Model — The BCG and Harvard Experiment

The clearest experiment showing this transition exists.

In 2023, Harvard Business School and Boston Consulting Group (BCG) conducted a joint study. Researchers including Fabrizio Dell'Acqua and Ethan Mollick participated. 758 BCG consultants were randomly divided into three groups: a control group without access to GPT-4, a group given GPT-4 with usage guidance, and a group given GPT-4 alone. They were asked to perform 18 consulting tasks.

The results split in two directions.

On tasks where AI had the advantage, the groups using AI completed 12.2 percent more tasks, worked 25.1 percent faster, and produced results rated more than 40 percent higher in quality. Consultants who had originally performed at lower levels gained the most. The gap between top and bottom performers narrowed. AI functioned as a leveling tool.

On tasks where AI was at a disadvantage, however, the result was the opposite. The groups using AI were 19 percentage points less likely to produce a correct solution. When AI confidently produced a wrong answer, even seasoned professional consultants adopted it uncritically. Expert status did not function as a filter for catching AI errors.

Dell'Acqua and his colleagues proposed a concept to explain this result: the Jagged Technological Frontier.

AI's capabilities are uneven. On some tasks it overwhelms human experts; on tasks immediately adjacent it produces poor results. The boundary is jagged. It cannot be predicted intuitively. Even seasoned professionals find it difficult to know in advance whether AI will perform well or badly on a given task.

This concept connects directly to the core of the design layer. The designer in the AI era is someone who reads the map of the Jagged Technological Frontier. The ability to judge where to hand off to AI and where human intervention is required — that is Frontier Mapping.

The origins of this model lie in chess.

In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov. The event was consumed as the narrative of "AI defeating humans." But Kasparov himself carried a different question. In 1998, he proposed a new competition format he called Advanced Chess — humans using computers as tools while playing. The result was play of a higher standard than AI alone or humans alone could produce.

At a 2005 freestyle chess tournament, the decisive event occurred. The winners were not grandmasters with supercomputers but a team of two ordinary amateurs and three desktop computers.

Kasparov interpreted it this way: a weak human plus a machine plus a superior process beats a strong machine plus a strong human plus an inferior process.

This is the origin of the Centaur model. Like the centaur of Greek mythology, a collaborative structure in which human intelligence and machine computation each bear their own role. Wharton School professor Ethan Mollick, systematizing this model, used the term Centaur for arrangements where the roles of human and AI are clearly distinct, and Cyborg for arrangements where the boundary between those roles is blurred from the start.

The paradox of high performers emerges here. In the BCG experiment, some high performers who handed tasks they had previously executed with skill over to AI lost critical engagement and actually produced worse results.

Collaborating with AI does not automatically produce good results. There must be an eye that reads the boundaries. When the human half of the centaur surrenders its role, what remains is not a centaur but a passenger riding a machine.


4. Kim Tae-hyeon's Transition

Yeonnam-dong, Mapo-gu — a one-person architecture practice. Ten in the morning.

Kim Tae-hyeon, forty-seven, sits before three monitors. On the left: 127 design variants generated by an AI generative design tool, displayed as thumbnails. In the center: a 3D rendering of the selected option rotating on screen. On the right: a KakaoTalk conversation window with a client.

He graduated from Hanyang University with a degree in architecture. He holds an architect's license. His wife, forty-five, is a freelance translator. His son is seventeen. Annual revenue ₩120 million. The office is on the second floor of an alley in Yeonnam-dong.

Until 2023, thirty hours of Kim's working week went into drafting plans. The remaining time was for client meetings, site visits, and processing permit applications at the district office.

In the summer of 2023, the occasion for transition arrived. A competing firm introduced AI design tools and completed a proposal in three days. Kim had spent three weeks on a proposal of the same scale. It was not that the client was taken from him — he never even had the chance to be considered. By the time his three-week proposal arrived, the decision had already been made.

Resistance came first. For someone who had held a pen over plans by hand for twenty years, the 127 variants generated by AI were a threat. "This isn't architecture — it's shopping." That was what Kim told a colleague.

The process of an architect drawing a single plan holds within it the judgment that reads the site, imagines the flow of movement, calculates the angle of light. Having AI produce 127 options and say "choose one" seemed to bypass the entire process.

The experiment began out of necessity. He was losing ground to competition. It was not curiosity but survival. In autumn 2023, he adopted AI design tools. At first he used them only as an auxiliary for plan drafting — completing a design by his usual methods and then using AI to check variants afterward.

The transition came three months later.

On a café renovation project in Yeonnam-dong, one of the AI-generated variants proposed a circulation layout Kim had not considered. It reversed the placement of the counter and the seating. His twenty years of experience would not have produced that arrangement.

But the moment he saw it, it was Kim's experience that judged why it was good. The angle of afternoon light entering through a north-facing window, the atmosphere the café owner wanted, the pedestrian flow patterns of the alley's retail strip — AI knows none of this. It can calculate structural efficiency, but it cannot answer the question "does this atmosphere work in this alley?"

When AutoCAD appeared in the 1980s, architects resisted it too. Plans drawn by hand were now being drawn by computer. The words "this isn't architecture — it's typing" circulated. In the end, the architects who learned AutoCAD survived.

But that transition was a transition of tools. Humans input; computers output. The subject of input remained human. The verb stayed the same — "to draw."

Now it is different. AI generative design produces output on its own. Input the objectives — structural strength, material cost, energy efficiency, aesthetics — and AI generates hundreds of options. Zaha Hadid Architects uses this technology to automatically generate optimized variants of complex geometric forms, with architects selecting and modifying from among them.

The boundary between tool and producer blurs. In the AutoCAD era, the architect shifted from "someone who draws by hand" to "someone who draws by computer." In the AI era, the verb itself has changed — from "to draw" to "to choose."

Kim's weekly drafting work fell from thirty hours to eight. The reclaimed twenty-two hours he now spends on design evaluation. He compares AI-generated options, crosses the client's requirements against the site conditions and regulatory constraints to select the optimal solution, and explains that solution to the client in human language.

The twenty-two hours did not become empty time. They became redesigned time.

This is the self-aware practice of the Centaur model. AI generates. Kim judges. The boundary is clear.

But there is a new anxiety.

For twenty years Kim has been reading the history of buildings. The structural characteristics of low-rise apartment blocks built in the 1970s, the floor-to-ceiling heights and ventilation patterns in the printing alley of Euljiro, the handling of slope in Seoul's hilly terrain. The eye that reads those things developed while he was drawing plans. A sense embodied through drawing thousands of plans by hand. When twenty hours went into a single plan, judgment was loaded into each line. The accumulation of that judgment became "the eye that reads buildings."

If a generation arrives in which AI draws plans in the architect's place, how is that sense formed? Can reading the history of buildings also be learned by AI?

Kim does not yet have the answer. He is simply working without it.


5. Companies on the Jagged Frontier

There are concrete cases showing what shape the Jagged Technological Frontier takes in practice.

Aidoc, an Israeli medical AI company, supplies AI-assisted radiology diagnostic systems to more than 900 hospitals worldwide. The system automatically detects abnormal findings — cerebral hemorrhage, pulmonary embolism, vertebral fractures — in CT scans and sends priority alerts to radiologists. Average time to detect cerebral hemorrhage has been reduced by 62 percent.

AI handles the screening and prioritization; radiologists concentrate on the high-risk cases AI has flagged. The role boundary is clear — the Centaur model. The radiologist's role has not disappeared. It has been redefined from reviewing hundreds of scans sequentially per day to focusing judgment on the risk cases AI has identified.

This is the same structure in which Lee Jin-hee, the paralegal in Chapter 4, processed 57 contracts in 12 minutes. The difference is that in the physician's case, the time freed up is not "hollowed out" but "redesigned into concentrated focus on high-risk judgment."

The same frontier has been drawn in law and finance. After Harvey AI was introduced at the major law firm Allen & Overy, the speed of reviewing certain types of contracts improved more than tenfold; JPMorgan's COiN system automated 360,000 hours of annual contract review. From "an attorney who reads contracts" to "an attorney who exercises judgment over contracts AI has read." The verb changed. At the same time, some law firms began reducing the scale of their junior associate hiring — not a movement to the design layer but an elimination of the execution layer.

In Korean finance, major commercial banks including KB Kookmin Bank and Shinhan Bank have adopted AI-based credit assessment systems. AI synthesizes financial data, credit history, and industry outlook to produce an initial assessment; the loan officer then adds the qualitative factors AI has missed — management credibility, industry reputation, relationship history — and delivers the final judgment.

This is precisely the structure Kim Su-jin was living through in Chapter 6. The decisive difference lies between a financial professional who actively designed this structure from the early stages of AI adoption and someone like Kim Su-jin who was hollowed out by it. The former rose to the design layer; the latter remained on the execution layer.

Reports from some banks indicate that the number of cases handled per loan officer has tripled or quadrupled. Whether this counts as "productivity gain" or "intensification" depends on whether the officer has risen to the design layer.

Aidoc, Harvey AI, JPMorgan's COiN — what these cases share is that after AI absorbed the execution, the human role was redefined as judgment and design.

There is a shared risk as well. Those who succeeded in redefining their roles ascended to the design layer. Those who failed ended up in the same position as Lee Jin-hee in Chapter 4 — sitting before an approval button. To become the human half of the centaur, or to become a passenger riding the machine. That is the dividing point between two hospitals, two law firms, two banks that have adopted the same AI.


6. Yun Seo-yeon and Jeong Min-ho — A Generational Fracture

The same project. Two different approaches.

Jeong Min-ho, forty-five, joined Yun Seo-yeon's company as an outside consultant. The Jeong Min-ho we met in Chapter 3 — the man who had completed reemployment training and was struggling to adapt to an era in which an AI copilot finished a report in seven minutes, whose business card read "Deputy Manager" but whose weight had shifted. This time he joined Yun's team on a short-term project contract.

An AI copilot-generated draft market analysis was posted to the team channel. Jeong reviewed it. He verified the numbers, checked for errors, and filled in missing data. After reviewing, he added the comment "Confirmed." It was meticulous work. Not a single figure was wrong.

Yun used the same draft as "the starting point for a redesign." She dismantled the data structure the AI had organized and rearranged it to fit the client's decision-making context. She did not change the conclusion, but she completely rebuilt the path to reaching it.

On the first page of the report she placed a single sentence the client's CEO could use at the next board meeting.

Jeong's work was quality control. Yun's work was directional design. Both are necessary. But the speed at which AI learns quality control and the speed at which it learns directional design are different.

The difference lies not in ability but in where professional identity is anchored.

For Jeong, a report is "an output to be completed." That is how he has worked for twenty years. The quality of the output proved his value. For Yun, a report is "a designed object that guides a decision." The report itself is only an intermediate product; the final output is a change in the client's behavior.

Jeong's identity is tied to a task — someone who writes good reports. Yun's identity is anchored in a way of solving problems — someone who combines whatever tools are available to solve a problem. If the task changes but the method of problem-solving remains, identity is preserved.

After the project ended, Jeong asked Yun: "That sentence on the first page — how did you land on it?" Yun paused, then said: "I read the CEO's LinkedIn posts and the quarterly earnings call. I reverse-engineered what this person wanted to say at the board meeting."

Jeong nodded. For twenty years he had focused on the content of reports, but he had never been trained to reverse-engineer the intentions of the person reading the report.

On the subway home, Jeong thought for the first time about what he did not know. Not what he needed to learn, but what he had not known he was missing. The question was uncomfortable. It was as precise as it was uncomfortable.

Gallup's 2024 survey puts numbers to this difference. Seventy-three percent of Gen Z workers said they used AI every day. Among Gen X, the figure was 41 percent. Asked whether AI caused them anxiety, only 18 percent of Gen Z said yes. Among Gen X, the figure was 43 percent.

A 25 percentage-point gap in anxiety. This is not a difference in technical skill but a difference in the structure of professional identity.

Mollick describes Gen Z as "the generation that grew up as cyborgs from the start." If the Centaur clearly divides the roles of human and AI, the Cyborg is a state in which the boundary between human and AI is blurred from the beginning. AI output stimulates human thought; that thought feeds back into AI as input — a cycle. Yun Seo-yeon is someone who grew up inside that cycle.

But the Cyborg model carries a risk.

Research published in 2024 by Doshi and Hauser demonstrates it. A group that used AI on a creative writing task scored higher immediately after completing the task. But when the same task was assigned without AI, the group with AI experience actually scored lower than the group that had never used it.

What they had acquired was not the underlying skill but the habit of depending on AI. This is called skill atrophy.

That is where Yun's blind spot lies. She has never exercised judgment without AI. She has never fixed the direction of a market analysis from a blank page without an AI draft to start from. She believes she is standing on the design layer, but if the starting point of that design is always AI output — she has no way to verify for herself how thick the design layer actually is.

Kim Tae-hyeon spent twenty years drawing plans by hand, embodying judgment, and then adopted AI. He is the Centaur. Yun Seo-yeon formed her judgment from the start alongside AI. She is the Cyborg.

Both paths are valid. But when Kim turns AI off, something remains. The accumulated thickness of judgment built across twenty years of drawing plans. It was that thickness that let him see the AI-proposed layout in the Yeonnam-dong café renovation and judge immediately "why this is good."

For Yun, that has not been verified. The moment of verification will come when AI stops. Hoping that day never arrives is not a strategy.


7. From Gatekeeper to Designer

In Book 3, The Invisible Hand's Last Trade, we traced the evolution of the financial gatekeeper. From the era of the Medici branch manager judging the trustworthiness of loan applicants by human intuition, to the era of credit rating agencies replacing that intuition with standardized scores, to the era of BlackRock's Aladdin performing system-based judgment. The subject of judgment has moved from human intuition to mathematical model, then to algorithm, then to AI agent.

At each stage, the previous gatekeeper did not disappear. The gatekeeper moved one layer up.

The Medici branch manager's intuition did not vanish — it was reborn as the quant analyst's intuition, the one who designs the algorithm. The credit rating agency's judgment did not disappear — it migrated into the role of designing the parameters of AI risk models.

When the execution of gatekeeping is automated, the design of gatekeeping remains. That is the message Book 3 passes to Book 6.

This is not something that happened only in finance. The same pattern is repeating in medicine, law, and architecture. When the radiologist's image reading moves to AI, the design of diagnostic flow remains. When the architect's plan drafting moves to AI, design evaluation remains.

When Kim Su-jin's loan assessment moves to AI — the very landscape we saw in Chapter 6 — what remains is the judgment that reads trust within the people AI has rejected. The fintech proposal Kim Su-jin took out of her drawer was precisely that movement to the design layer. "When AI says no, your 'but' is what we need" — that single line on the proposal cover is the definition of the design layer.

In Book 5, The Strategy of the In-Between, we analyzed the conditions under which a nation becomes an Indispensable Node. Performing functions that cannot be replaced was the core. Applying that condition to the individual reveals the same structure. When AI began replacing execution, what cannot be replaced is design. Just as national indispensability was a function of position, individual indispensability is also a function of which layer one is standing on.

But the lesson of Book 3 must not be forgotten. Each time the subject of judgment shifted, a new form of crisis emerged that had not existed before. Moving to the design layer is not the entirety of the solution. That very movement generates new anxieties of its own.

Kim's anxiety is exactly that. Will AI also learn to read the history of buildings? The risk Yun cannot sense is exactly that. Is the design layer of a generation that has never exercised judgment without AI thick enough?

To borrow Mollick's formulation, the Jagged Frontier moves. And it always moves in the same direction — toward AI. The design layer is not safe because AI will forever be incapable of it. It is safe because moving to the design layer is the best adaptation available today. Five years from now, the frontier will have to be redrawn.

Stanford HAI's 2024 tracking study confirms this reality. A common pattern across medicine, law, and finance two years after AI adoption — initial productivity gains, delayed role redefinition, rising anxiety. Some professionals succeeded in securing control over the design of AI's role; many became fixed as AI operators.

Those who ascended to the design layer, and those who remained on the execution layer.

That difference determines the trajectory of the next decade.


8. At the Threshold

This book does not claim that the design layer is a permanently safe zone. Permanently safe zones do not exist. The Jagged Frontier moves, and it always moves toward AI.

But today, the direction in which movement is possible is clear.

From execution to design.

Kim Tae-hyeon moved in that direction. The experience accumulated across twenty years became the thickness of the design layer. From resistance to experiment, from experiment to acceptance — the process took three months.

Yun Seo-yeon was born onto that layer. The opportunity to verify its thickness has not yet arrived.

Jeong Min-ho has not begun the move from the execution layer to the design layer — reviewing an AI draft is verification, not design. It is the same position as Lee Jin-hee's approval button in Chapter 4.

What the trajectories of these three people show is not a difference in individual ability. It is a difference in the speed at which each recognizes their own position within a structural transition. Kim recognized it after losing ground to competition. Yun was born without needing to recognize it. Jeong is pressing the confirmation button without having recognized it.

And the design layer is not the only path. Lee Jung-hoon did not become a designer. He changed his location. Kim Su-jin found a gap. Movement to design, movement of location, discovery of a gap — there are multiple paths, and which path fits depends on the conditions in which the individual is placed. The coordinates of the four quadrants analyzed in Chapter 5 determine the path.

One thing, however, is common to all paths.

Whatever the path, nothing becomes visible without moving. In Chapter 6 we saw that. Moving while nothing is certain. That is why Lee Jung-hoon boarded the plane, why Kim Su-jin emptied the drawer, and why Kim Tae-hyeon installed AI design tools.

In the next chapter we see a different form of movement. Not the vertical movement to the design layer but a horizontal movement toward the point where two domains intersect. What Lee Jung-hoon discovered after spending three months in Hai Phong is there. The structure by which new value is created at the intersection where expertise from one domain meets another — what might be called positional arbitrage.


Threshold Question: When you hand execution over to AI, what is left in your hands? Whether it comes from experience or was learned from AI — that difference determines the thickness of the design layer.