Vol. 3 — The Invisible Hand's Last Trade

Chapter 12: The 600-Year Bill of Exchange


1

June 1, 2009. Thirty-five thousand feet above the Atlantic.

The cockpit of Air France Flight 447 was dark. A nighttime flight from Rio de Janeiro to Paris. Hundreds of indicator lights glowed steadily across the instrument panel of the Airbus A330-200, and the fly-by-wire system was maintaining a cruising altitude of 35,000 feet. All 228 seats were full. Tourists returning from Brazil to Europe, businesspeople wrapping up trips, passengers connecting through Paris — all were asleep. Past midnight, there were neither stars nor moon above the Atlantic. All that surrounded the aircraft was the vast cloud mass of the Intertropical Convergence Zone.

Captain Marc Dubois, age fifty-eight. A veteran who had logged 10,988 flight hours since joining Air France in 1988. Of those, 6,258 hours were as captain, with 1,700 hours on the A330 type. The South American route was familiar territory — he had flown it sixteen times since being assigned to the A330/A340 division in 2007. Yet of those 10,988 hours, vanishingly few were spent with his hands on the controls. In the preceding six months, his flight time totaled 346 hours, but the hours he had actually piloted the aircraft — limited to takeoffs and landings — amounted to roughly four. For the remaining 342 hours, he had been watching monitors.

Dubois went on his scheduled rest break. 02:01:46 UTC. The two copilots took charge of the cockpit.

Right seat, the Pilot Flying (PF) — Pierre-Cédric Bonin, age thirty-two. He had joined Air France in 2003, with a total of 2,936 flight hours, 807 of them on the A330. He had flown the South American route five times since being assigned to the A330/A340 division in 2008. He had little experience hand-flying at altitude.

Left seat, the Pilot Monitoring (PM) — David Robert, age thirty-seven. He had joined in 1998, with 6,547 flight hours and 4,479 on the A330. A graduate of France's École Nationale de l'Aviation Civile (ENAC), he was an experienced copilot who had flown the South American route thirty-nine times. However, Robert had gone through a period of transition from flying to administrative duties, and had recently been splitting his time with management responsibilities at the airline's operations center.

At 02:06, Bonin made a cabin announcement: "We will shortly be entering an area of turbulence." The Intertropical Convergence Zone near the equator. Massive cumulonimbus clouds towered like pillars above the Atlantic. Over fifteen kilometers high, their interiors churned with supercooled water vapor, ice crystals, and violent updrafts and downdrafts. Robert discussed a course change after seeing the red patches of cloud on the radar screen, but they decided to continue flying, threading through gaps between the clouds.

02:10:05. Moments after the aircraft entered the cloud, ice crystals began clogging the pitot tubes. A pitot tube is a small metal tube protruding from the aircraft's exterior that measures airspeed from the pressure of oncoming air. The principle is simple: air enters through a hole at the front of the tube, and speed is derived from the difference between the total pressure at that opening and the static pressure, a difference known as the dynamic pressure. Invented by the eighteenth-century French mathematician Henri Pitot, it has been a fundamental instrument in aviation ever since. In financial terms, it is like a market's order book measuring "speed" — when it clogs, you can no longer tell where the market is heading.
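The pressure relation can be sketched in a few lines (a toy illustration under the incompressible Bernoulli assumption; the function name and the air-density figure are mine, not avionics code):

```python
import math

def airspeed_from_pitot(p_total: float, p_static: float, rho: float = 0.38) -> float:
    """Toy pitot-tube calculation. Dynamic pressure q is the difference
    between the total pressure at the tube's mouth and the static pressure;
    speed follows from Bernoulli's equation, v = sqrt(2q / rho).
    rho ~ 0.38 kg/m^3 is a rough air density near 35,000 feet."""
    q = p_total - p_static            # dynamic pressure, in pascals
    if q < 0:
        raise ValueError("total below static pressure: sensor likely blocked")
    return math.sqrt(2.0 * q / rho)   # airspeed in m/s
```

Note that a clogged tube raises no error of its own: it simply feeds a frozen p_total into the same formula, which is why the three sensors disagreed rather than failing cleanly.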

The Airbus A330 is equipped with three pitot tubes — a redundant design. But the supercooled water vapor at 35,000 feet above the equator blocked all three sensors almost simultaneously. Some instruments displayed overspeed; others showed underspeed. Different speeds appeared on either side of the cockpit. Without reliable speed data, automated flight was impossible.

The system did exactly what it was designed to do — it disconnected the autopilot.

The flight computer's control mode shifted from "Normal Law" to "Alternate Law." Under Normal Law, the computer interprets the pilot's inputs and automatically protects flight envelope limits — if a pilot makes an input approaching a stall, the computer refuses. Under Alternate Law, those protections are removed. The aircraft does whatever the pilot commands. Greater freedom, but stripped of its safety net.
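The difference between the two laws can be caricatured in a few lines (names and the threshold are illustrative only; real Airbus flight-control laws are far richer than a single clamp):

```python
def effective_pitch_command(commanded_aoa_deg: float, normal_law: bool,
                            aoa_protection_deg: float = 15.0) -> float:
    """Toy model: under Normal Law the computer refuses any command that
    would push the angle of attack past the protection threshold; under
    Alternate Law the same command passes through untouched.
    The 15-degree threshold is illustrative, not an Airbus value."""
    if normal_law:
        return min(commanded_aoa_deg, aoa_protection_deg)  # envelope protection
    return commanded_aoa_deg                               # no safety net
```

The same full-back-stick input that Normal Law would have clipped is exactly what Alternate Law delivered.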

Suddenly, the aircraft was in the hands of the two copilots.

02:10:06. Bonin said, "I have the controls" (J'ai les commandes). Robert replied, "Understood" (D'accord). And Bonin pulled back on his sidestick. The nose pitched up.

This was the critical error. But to call it merely an error misses the deeper context. Unlike Boeing's control column, the Airbus sidestick is not mechanically linked between the left and right seats. Even when Bonin was pulling his stick back, Robert's stick remained in the neutral position. Unless Robert visually confirmed Bonin's input, he could not feel through his fingertips what the other pilot was doing. With a Boeing yoke, when one side pulls, the other moves in tandem — immediately apparent. The Airbus design philosophy — mechanical linkage is unnecessary because the computer mediates — was rational under Normal Law, but the moment the system switched to Alternate Law, it became a fatal blind spot.

Warning alarms sounded in the cockpit. "STALL, STALL." The stall warning. A stall occurs when airflow separates from the wing surface, meaning the aircraft loses lift. This is material covered in the first year of flight school. The textbook response is clear — push the stick forward to lower the nose and recover airspeed. But Bonin did the opposite. He kept pulling back. The nose pitched higher. As the angle of attack increased, lift decreased further.

The final report by the BEA (France's Bureau of Enquiry and Analysis for Civil Aviation Safety), published on July 5, 2012, documents the paradox of this moment. The stall warning sounded seventy-five times during the flight. Seventy-five times. Because it sounded so frequently — toggling on and off due to inconsistent speed data — the pilots lost trust in the warning itself. Too many warnings neutralized the warnings. When the pitot tubes momentarily thawed and speed data briefly became valid, the alarm would stop, only to resume when they clogged again. This repetition extinguished the urgency of the alarm. The system was detecting the danger precisely, but the signal of danger, by appearing too frequently, had become noise. This is a pattern that repeats in financial systems — when a risk model cries "danger" too often, traders begin to ignore the warnings. The boy who cried wolf was at work in Flight 447's cockpit too.

There was an even more bizarre paradox. When the angle of attack became too high — when speed dropped too low — the stall warning system itself was designed to stop alerting, having judged the input data "irrational." When Bonin pulled the stick and speed dropped further, the warning stopped. When Robert pushed forward and speed recovered slightly, the warning sounded again. Taking the correct action triggered the alarm; taking the wrong action silenced it. The system's feedback was sending exactly the opposite message.
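In code form, the inverted feedback is just a data-validity gate placed ahead of the alarm (the thresholds here are my own illustrative numbers; the BEA report describes the real logic rejecting angle-of-attack data at very low measured airspeeds):

```python
def stall_warning_active(aoa_deg: float, measured_airspeed_kt: float,
                         stall_aoa_deg: float = 10.0,
                         validity_floor_kt: float = 60.0) -> bool:
    """Toy reconstruction of the warning logic. Below the validity floor
    the system deems the angle-of-attack data 'irrational' and suppresses
    the alarm, so deepening the stall silences it and beginning to recover
    brings it back."""
    if measured_airspeed_kt < validity_floor_kt:
        return False   # data rejected: silence, however deep the stall
    return aoa_deg > stall_aoa_deg
```

Pulling back until measured speed falls below the floor returns False; pushing forward until speed recovers returns True again. Exactly the inversion the crew faced.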

02:11:40. Captain Dubois returned to the cockpit. The aircraft was already descending at over 3,000 meters per minute. The angle of attack had reached forty degrees — normal flight operates at two to five degrees. The engines were running at nearly 100 percent thrust, but the aircraft, having lost lift, was falling vertically, like an elevator in free fall.

Dubois's voice was recorded on the black box: "What is happening?" (Mais qu'est-ce qui se passe?) And seconds later, "Ten degrees of pitch" (Dix degrés d'assiette). But Bonin's sidestick was still pulled back. Dubois did not know this — because the Airbus sidestick is not mechanically linked.

Altitude: 10,000 feet. Less than a minute remained. Robert finally understood. "He's been pulling back the whole time!" (Il est en montée, le gamin, il est en montée!) Dubois ordered, "No, no, don't pull up!" Too late. At 02:14:25, Bonin said, "Damn it, we're going to crash — this can't be real!" (Putain, on va taper... C'est pas vrai!) Three seconds later, the recording stopped. 02:14:28.

Four minutes and twenty-three seconds, roughly the last three and a half of them in the stall. That is the time elapsed from the moment the autopilot disengaged (02:10:05) to the moment the aircraft struck the surface of the Atlantic (02:14:28). All 228 people on board were killed. The aircraft sank to the floor of the Atlantic, 3,900 meters deep, and the black boxes were not recovered until nearly two years later, in May 2011.

The most uncomfortable truth revealed by the accident investigation was that mechanical failure was not at the heart of the disaster. According to the BEA report, the ice in the pitot tubes is estimated to have melted after approximately one minute. The autopilot could have been reengaged. The engines were functioning normally, and there was nothing wrong with the airframe structure. The A330 was a safe aircraft — at that time, the A330's accident rate was among the lowest in the world. The problem was that during those critical seconds without automation, the humans who had been handed manual control could not perform the most fundamental procedure — lowering the nose and recovering airspeed.

Why couldn't they? The BEA report lists multiple factors. Inconsistent speed displays. Insufficient manual flying experience at high altitude. Cognitive overload caused by excessive warnings. The non-coupled sidestick design. Loss of situation awareness. But beneath all those factors lay a single truth.

In an era when the autopilot handles 99 percent of flying, the human capacity for the remaining 1 percent had been quietly atrophying.

The pilots of Flight 447 were not incompetent. Dubois was a veteran captain with 11,000 flight hours, and Bonin and Robert each had thousands of hours of flight experience. But within those thousands of hours, the proportion of manual flying was vanishingly small. In Dubois's case, out of 346 hours of flight time over six months, the time he actually piloted was four hours. As a ratio, 1.2 percent. For the remaining 98.8 percent of his time, his role was not flying but monitoring. Checking the flags the system raised, confirming nothing was wrong, and approving. Structurally identical to what the chairman of the loan review committee did in Chapter 3.

This paradox is not unique to aviation. The Automation Paradox holds that the safer a system becomes, the more dangerous humans become when the system fails. The same pattern has been observed in medicine. Studies have found that in hospitals where AI-assisted diagnostic systems improved X-ray reading accuracy, radiologists' misdiagnosis rates on days the system went down were significantly higher than usual. Because the system was catching most abnormal findings, the human eye grew more likely to miss subtle lesions. The same applies to autonomous vehicles. Research has shown that in Level 3 autonomous driving — where the system handles most driving but humans must intervene in emergencies — drivers' reaction times are slower than in conventional driving. When you've been trusting the system and suddenly have to grab the wheel, human response is several seconds slower than when prepared. Those seconds can mean the difference between life and death.

After Flight 447, the aviation industry strengthened manual flying training. High-altitude stall recovery drills became mandatory, and the EASA (European Union Aviation Safety Agency) and FAA (Federal Aviation Administration) revised their relevant guidelines. But increasing the frequency of training and performing under an actual crisis are different matters. In a simulator, failure doesn't kill you. There is a tension that only exists in real situations, and that tension cannot be sustained without regularly practicing manual flight. Muscle memory is maintained only through repetition.


2

Map this accident onto the capital markets, and what emerges is not a metaphor but a structure.

While the system is operating normally, humans lean on it and stop exercising their own judgment muscles. If a credit rating reads AAA, you don't go out and conduct due diligence yourself. If the risk model displays "safe," you don't bother running stress scenarios. If Aladdin says "no rebalancing needed," you don't take a second look at the portfolio. The greater your trust in the system, the less you can do when the system fails. That is what happened in the cockpit of Flight 447, and it is what happens quietly in the capital markets every day.

May 6, 2010, 2:32 PM, New York.

The mood in the market had been uneasy since the morning. The Greek sovereign debt crisis showed signs of spreading across Europe, and in Athens, anti-austerity protests had turned violent, killing three. The S&P 500 had been sliding from the opening bell. As the afternoon wore on, selling pressure intensified.

At 2:32 PM, Kansas-based asset manager Waddell & Reed initiated the sale of 75,000 E-Mini S&P 500 futures contracts — roughly $4.1 billion in notional value. The purpose was to hedge existing equity positions. The problem was the execution method. The selling algorithm was set to place orders at a rate equal to 9 percent of the previous minute's trading volume — but, as the joint CFTC-SEC report recorded, "without regard to price or time."
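The execution logic the report describes amounts to a one-line schedule (a sketch; the function name is mine, and the real algorithm's internals were not published):

```python
def next_minute_sell_qty(remaining_contracts: int, prev_minute_volume: int,
                         participation_rate: float = 0.09) -> int:
    """Sell 9% of the previous minute's volume, with no price or time
    condition attached: the schedule reacts only to volume."""
    return min(remaining_contracts, int(prev_minute_volume * participation_rate))
```

The flaw is visible in the signature: price never appears. When hot-potato trading between HFT algorithms inflated prev_minute_volume, the schedule simply sold faster.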

As prices fell, real liquidity thinned, yet reported volume exploded. High-frequency trading (HFT) algorithms began passing the inventory to one another. The "hot potato" effect: algorithms bought futures and flipped them within milliseconds, passing inventory from machine to machine. Volume surged while real liquidity evaporated, and because the selling algorithm paced itself off that volume, the inflated figure only made it sell faster. The Flash Crash proved that liquidity can be an illusion that exists only when prices are stable.

In nine minutes, the Dow Jones Industrial Average plunged approximately 1,000 points. 998.5 points. It was the largest intraday drop in history at that time. More than $1 trillion in market capitalization evaporated in under thirty minutes. Individual stocks saw surreal prices. Accenture's share price fell to $0.01 before recovering. Procter & Gamble dropped 37 percent in seconds. There was even a record of Apple stock trading at $100,000 — as buy orders vanished, market orders were filled against absurd prices sitting at the far end of the order book. If the order book was the pitot tube, at that moment the tube was completely blocked.

At 2:45:28 PM, the Chicago Mercantile Exchange's (CME) Stop Logic Functionality triggered, halting E-Mini trading for five seconds. When trading resumed at 2:45:33, prices began to stabilize, and by around 3:00 PM the market rallied, recovering most of the plunge. The day's close was down approximately 3.2 percent from the previous close.

The autopilot had disengaged. The price sensors had malfunctioned. And the human traders who needed to grab the manual controls found themselves in a situation structurally identical to that of Flight 447's copilots. The trading that algorithms normally handled had to be suddenly taken over by humans, and humans had no time to make substantive decisions in front of an order book changing by the millisecond.

Since then, the "autopilot disengagement" has repeated itself in the capital markets. And each time, at the moment when humans needed to take the controls, humans were not ready.

On August 24, 2015, in the wake of an 8.5 percent crash in Chinese equities, the Dow Jones plunged over 1,000 points the moment U.S. markets opened. On January 4, 2016, China's newly introduced circuit breaker tripped on the very first trading day of the new year — and the circuit breaker itself accelerated the panic, as a rush of sell orders flooded in from traders trying to get out before trading was halted. Three days later, on January 7, the breaker tripped again and the market shut for the day within half an hour of the open; Chinese authorities abolished the circuit breaker system just four days after introducing it. A safety mechanism that made the danger worse — structurally identical to how Flight 447's stall warning provided inverted feedback, sounding when the correct action was taken and falling silent when the wrong action was taken.

February 5, 2018: "Volmageddon." The VIX index surged 115 percent in a single day — from 17.31 to 37.32. The XIV (VelocityShares Daily Inverse VIX Short-Term ETN), which had been betting against volatility, lost more than 90 percent of its assets in one day. $1.9 billion became $63 million. The time it took for a self-reinforcing loop to become a self-destructing loop was a matter of hours.

Speed keeps accelerating, and the rate of automation in the capital markets is climbing even faster. As of 2024, an estimated 60 to 70 percent of U.S. equity trading is executed by algorithms. BlackRock's Aladdin generates daily risk reports on assets totaling $14 trillion, and portfolio managers review and approve the rebalancing those reports prescribe. Smart contracts on Ethereum automatically process loan approvals and liquidations every twelve seconds, and AI agents move between liquidity pools in milliseconds, seeking arbitrage opportunities.

While these systems are operating normally, what are humans doing? Confirming. Monitoring. Reviewing flags. Stamping approvals. Just as the chairman of the loan review committee did in Chapter 3. Just as Seo Yuna confirmed Aladdin's VaR violation flags in Chapter 10. Humans are not outside the system but inside it, performing an increasingly narrow role. Not judging but confirming judgments. Not deciding but approving decisions. Just as Captain Dubois held the controls for only four out of 346 hours.

The question, then, is this: The next time the autopilot disengages — when algorithms hit a circuit breaker, oracle prices are distorted, and liquidity evaporates — will there be anyone left who can take the controls? And even if someone remains, will that person remember the stall recovery procedure?


3

There are at least three answers to this question. And they are difficult to reconcile.

The first is to keep the autopilot engaged but leave the captain in the cockpit. AI and algorithms handle the bulk of capital allocation, and humans intervene in emergencies. This is the model BlackRock's Aladdin already exemplifies. An analyst like Seo Yuna checks the risk report every morning, verifies the flags the system has raised, and approves if nothing is amiss. The system is consistent, free from bias, and impervious to human emotional turbulence. In normal times, this is the most efficient structure.

Imagine a concrete day in Seo Yuna's working life. 7:00 AM, arriving at her office in Yeouido (Seoul's financial district, the Korean equivalent of Wall Street). Four monitors light up. The two on the left display Aladdin's dashboard; the upper right shows a Bloomberg terminal; the lower right, the company messenger. The risk report Aladdin generated overnight is arrayed on the left-hand screens. Portfolios A, B, C, D — each with VaR, credit spreads, duration gaps, and expected drawdowns displayed. Two portfolios have flags raised. Yuna clicks, and the details unfold; the system has already prepared a rebalancing recommendation. "Reduce Korea 10-year government bond weight by 2%, increase U.S. TIPS by 2%." Yuna reviews the logic behind the recommendation — rising global rate expectations, need for inflation hedging — and presses the approve button. 8:00 AM. In one hour, she has made two "judgments." In truth, she has made two "approvals."

But the lesson of Flight 447 lies precisely here. A human who intervenes only in emergencies loses the ability to intervene in emergencies. Monitoring is a muscle. If you don't use it, it atrophies.

The second answer is to remove the human from the cockpit entirely. To hand the autopilot full control. AI agents autonomously allocate capital, smart contracts enforce the rules, and a blockchain transparently records every transaction. The world Lee Junhyeok is building comes close to this. His agents calculate liquidity pool depth around the clock, optimize expected return relative to gas fees, and execute transactions in milliseconds. No emotions means no panic selling. No sleep means no gaps. They don't spend three hours debating whether a 73 percent presale rate is realistic. If data exists, they decide; if not, they defer.

But in this structure, there is no one to reverse course. When an agent's objective function begins moving capital in a direction that diverges from its designer's intent, where is the entity that can say "stop"?

The Terra/Luna collapse of May 2022 offered a glimpse of that answer. The event began on May 7. Two large wallet addresses withdrew 375 million UST from the Anchor Protocol. Anchor was a DeFi protocol that promised UST holders an annual yield of 20 percent — an unsustainable rate, yet a self-reinforcing loop was at work: high yields attracted deposits, and deposits sustained the ecosystem's stability.

On May 9, a combination of whale sell-offs and UST selling on Curve Finance caused UST's price to begin depegging from one dollar. $0.985. A small deviation. But the algorithmic stabilization mechanism activated to restore the peg, and that activation was the beginning of ruin. To bring UST's value back to one dollar, the protocol executed a mechanism that minted LUNA to burn UST. The more LUNA was minted, the further LUNA's price fell; the further LUNA's price fell, the greater UST holders' anxiety grew; the more UST was redeemed, the more LUNA had to be minted. The "death spiral." The code performed exactly as designed — faithfully executing the rule to mint LUNA in order to restore UST's value. The capacity to recognize that the rule was making things worse was not included in the code.
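The mechanism, stripped to its skeleton, looks like this (a deliberately crude sketch; the proportional price-impact model is my own assumption, and the real market dynamics were far messier):

```python
def redeem_ust(ust_burned: float, luna_price: float, luna_supply: float):
    """One redemption step of the peg mechanism: burning 1 UST mints $1
    worth of LUNA at the current price. The toy price impact (supply up,
    price down in proportion) stands in for the market's reaction."""
    minted = ust_burned / luna_price                      # LUNA tokens minted
    new_supply = luna_supply + minted
    new_price = luna_price * (luna_supply / new_supply)   # toy impact model
    return new_supply, new_price

# Each pass through the loop mints more LUNA at a lower price:
supply, price = 400e6, 60.0
for _ in range(10):
    supply, price = redeem_ust(1e9, price, supply)
```

The loop has no exit condition other than running out of UST to redeem; the rule that restores the peg is the same rule that dilutes the collateral.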

Between May 10 and 12, LUNA's supply expanded eighty-fold, from 400 million to 32 billion tokens. On May 12, LUNA's price crashed below $0.10 — a token that had once exceeded $100. UST fell below $0.10. In seventy-two hours, approximately $50 billion evaporated from the Terra ecosystem, and over $400 billion drained from the broader cryptocurrency market. When copilot Bonin was pulling back on the stick in a stall, the angle-of-attack sensor was accurately reading forty degrees. The data was there, but the data didn't say, "This is going wrong." The same was true for Terra/Luna. It was not the code that said "stop" — it was the humans outside the code who fled in terror. Do Kwon, Terra's architect, was arrested. Code designed by humans, operating without humans, destroyed the money of humans. That code had no 0.5-second pause to ask, "Wait — is this right?"

The third answer begins with the recognition that there is more than one aircraft. Rather than viewing the entire capital market as a single system, it applies different grammars to different domains. Rebalancing of large-scale standardized assets — government bonds, indices, listed equities — is handled by algorithms. Borderless small-value transactions — remittances, microloans, micropayments — are managed by protocols. And non-standardized lending to low-credit borrowers, investments in unprecedented structures, and areas requiring qualitative judgment that data cannot capture remain the province of humans. A structure in which three grammars coexist, each operating where its strengths are greatest.

This appears the most realistic, but the problem is that boundaries always blur. The domains thought to be beyond algorithms shrink every year. A decade ago, corporate credit analysis was considered uniquely human territory. The context behind the numbers in financial statements, the quality of management teams, structural shifts in industries — these were considered unquantifiable. But today, AI reads financial statements, interprets news, and analyzes CEO tone and word choice during earnings calls to generate credit rating predictions. The boundary line is not fixed.

Perhaps all three scenarios will be realized simultaneously. There is no guarantee that the global financial system will converge on a single architecture. The U.S. capital markets are moving closer to the second scenario; China is pursuing a hybrid of the first and second through the digital yuan; and Europe is constructing a regulatory framework for the third. As the CBDC geopolitics examined in Chapter 11 demonstrate, the future of capital allocation is more likely to be plural than singular. All three scenarios carry their own risks, and which becomes dominant will depend not on technology but on the choices each society makes. And those choices ultimately come down to the answer to one question: "Where do we place the human?"


4

Early 2026, Seoul.

A hotel lobby cafe in Yeouido. A late afternoon in January. The Han River beyond the window is submerged in winter fog, and the lights of the securities district buildings across the river bleed faintly through the haze. The cafe has high ceilings, and one wall is floor-to-ceiling glass, the river unfolding in panorama. Jazz piano plays at low volume, and at four in the afternoon — the pre-closing-time lull of Yeouido's financial district — the cafe is relatively quiet. A table in the corner with two cups of coffee. Seo Yuna had arrived first. An Americano was growing cold. She had her laptop open but was not looking at the screen. She was gazing at the river beyond the window.

Lee Junhyeok arrived ten minutes late. He unzipped his puffer jacket and sat across from her. In his hand was not a MacBook but a single iPad. An hour and a half by subway from Pangyo — Korea's answer to Silicon Valley — to Yeouido.

This meeting was no accident. On the day Junhyeok appeared as a witness before the Financial Services Commission (the Korean equivalent of the SEC) in Chapter 11, a RegTech seminar was being held in the same building. Yuna had attended as a risk analyst at the Seoul office of a global asset management firm. The two had first met at the reception following the seminar. They both worked in capital allocation, but the worlds they faced were different. That difference was the reason for the conversation.

Seo Yuna is a risk analyst at a global asset manager headquartered in New York. Every morning she sits before four monitors, reviews the risk reports the system generates, and processes portfolio violations. Thirty-five years old. She studied economics at Yonsei University (one of Korea's top three universities), earned her MBA at NYU Stern, worked three years at a Manhattan asset manager, and returned to the Seoul office. Her father is a retired commercial bank loan officer. Lee Junhyeok is CEO of a startup developing AI agent-based DeFi protocols. He studied computer science at KAIST (Korea's premier science and technology university), spent time in Silicon Valley, and returned — an early-thirties developer. His office in Pangyo is a single room in a coworking space. His team has five members.

Yuna described her work.

"The system generates a risk report every morning. If there's a VaR violation for any portfolio, a flag goes up, and I check that flag. The copilot has already prepared a rebalancing recommendation. In most cases, the system's recommendation is correct. The things I need to analyze independently keep shrinking."

She took a sip of coffee and continued. "At first it was great. Repetitive work decreased. But after about a year, a strange feeling started creeping in. The system raises a flag and I check it. I use the word 'check,' but what I'm actually doing is... closer to approving. The system has already produced the answer, and I'm just stamping it."

Wrapping both hands around her coffee cup, she said, "My father spent thirty years doing loan reviews at a bank. When I told him about this, he couldn't understand. 'So who's making the judgment — you or the machine?' We had this conversation over a meal during the Lunar New Year holiday. My father always visited the project site in person when reviewing loans. He said you can't know from numbers alone without seeing the field. The atmosphere of the sales office, the quality of the finishes in the model unit, even the attitude of the developer's CEO when serving tea — he observed it all. 'There's something called field sense — behind the numbers, there are people,' he said. Then he asked me the same question. 'Do you visit the field?'"

After a brief silence, she added, "Honestly, even I find that line blurry. Whether it's checking or ceremony."

Junhyeok's answer to the same question was different.

"I build agents. The agent reads on-chain data, analyzes the state of liquidity pools, finds the optimal strategy on its own, and executes it. Running twenty-four hours. No emotions, no biases. My role is to design the agent's objective function, adjust parameters, and occasionally check the logs."

Yuna asked, "What if the agent makes a mistake?"

Junhyeok gave a brief laugh. "I fix the code."

"What about transactions that have already been executed?"

His expression shifted subtly. The smile disappeared, and his hand paused mid-motion as he set down his coffee cup.

"...Transactions recorded on the blockchain are irreversible. But that's also an advantage. Transparent, auditable, and no one can tamper with the records."

"It happened once," Junhyeok said after a long pause. Yuna waited. He pushed his iPad to the side of the table and continued. "Last year. The agent's price oracle was temporarily distorted. Five minutes, but in that window the agent took positions based on the distorted prices. I found out after the oracle recovered. The transactions had already been settled, and they couldn't be reversed. The loss wasn't large, but..."

"How did that feel?"

"More than a feeling... it was a question about responsibility. Was it the fault of the oracle service provider? Was it a flaw in my design for not accounting for oracle dependency? Was it the responsibility of the users who entrusted funds to the agent? It wasn't clear." He looked out the window. The fog was thickening. "Just as the pilots of Flight 447 didn't have time to assign blame in a stall, I didn't have time for that either. First I stopped the bleeding, shut down the agent, then analyzed the cause."

Yuna nodded and said, "On our side, it's clear. When the system gets it wrong, I'm the one who gets the phone call."

She told the story of her experience in September 2022. The UK gilt crisis. Prime Minister Liz Truss's "mini-budget" had slammed the markets. Following the announcement to simultaneously pursue tax cuts and fiscal expansion, UK government bond yields surged over 100 basis points in four days. That alone was a shock, but the real crisis lay behind it.

The LDI (Liability-Driven Investment) strategies used by British pension funds had begun to unravel. LDI is a strategy in which pension funds invest in long-dated government bonds with leverage to match their long-term liabilities. An efficient structure when interest rates are stable, but when rates surge, a chain of margin calls begins. As bond prices fall, collateral values decline; when margin calls arrive, bonds must be sold to raise cash, and that selling pushes bond prices down further, triggering more margin calls. A self-reinforcing downward loop. Structurally identical to the Terra/Luna death spiral — except this was happening not in cryptocurrency but in the UK gilt market, an asset class considered among the safest in the world.
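The loop described above can be written down directly (all numbers and the linear price-impact term are illustrative assumptions, not calibrated to the gilt market):

```python
def fire_sale_rounds(price: float, holdings: float, debt: float,
                     min_collateral_ratio: float = 1.25,
                     impact_per_unit: float = 5e-7, rounds: int = 3):
    """Toy margin-call spiral: when collateral (price * holdings) falls
    below the required multiple of debt, bonds are sold to cover the
    shortfall, and each sale pushes the price down further."""
    price_history = []
    for _ in range(rounds):
        shortfall = debt * min_collateral_ratio - price * holdings
        if shortfall <= 0:
            break                                  # ratio restored, spiral ends
        qty = min(shortfall / price, holdings)     # bonds sold to raise cash
        holdings -= qty
        debt -= qty * price                        # proceeds pay down debt
        price *= max(0.0, 1 - impact_per_unit * qty)  # selling moves the price
        price_history.append(price)
    return price, holdings, debt, price_history
```

Starting just below the required ratio, each round's forced sale deepens the next round's shortfall: under these toy parameters the price falls in every round, which is the loop Yuna spent those days inside.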

In the five days following the mini-budget announcement, LDI funds sold approximately 25 billion pounds of government bonds — thirty percent of the total selling in the entire crisis period concentrated in those first five days alone. The moment thirty-year gilt yields breached 5 percent, the Bank of England intervened. From September 28 through October 14 it ran an emergency bond-buying program of up to 5 billion pounds per day, later raised to 10 billion: a backstop with a headline capacity of 65 billion pounds, of which roughly 19 billion was ultimately deployed.

"For those two days I couldn't put down the phone," Yuna said. "Margin calls were cascading, gilt prices were changing every hour, and the rebalancing recommendations the system was generating were a step behind the market. Aladdin said 'sell,' but I judged that in a market where liquidity had already evaporated, selling would only amplify the losses. I waited."

"Intuition?" Junhyeok asked.

"I'd call it experience. Not something that comes from numbers, but a pattern recognition that builds from watching markets over a long time. A sense of when central banks intervene. When rates breach a certain level, it becomes systemic risk, and when it becomes systemic risk, the central bank steps in. There's no variable in the model that calculates 'the probability that the central bank will intervene tomorrow.' But if you have experience, you have a feel for it. In 2008, in 2020, in 2022 — the central bank always came in the end."

"Is that intuition always right?"

Yuna answered honestly. "No. About half the time. In March 2020 I was right, and other times I've been wrong. But right or wrong, at least the responsibility for that judgment is mine. There's a person who takes the call. The client calls, the boss calls, compliance calls, the regulator calls."

Then she added — her voice carrying no edge, a sincere question born from different experiences of the same world: "When an agent gets it wrong, is there someone who takes the call?"

Junhyeok turned his coffee cup for a moment. It took him several seconds to answer.

"For now, I do. Because it's the agent I built. But when there are ten agents, a hundred, a thousand? When different agents on different protocols interact with one another and produce unexpected outcomes — whose phone is supposed to ring?"

The jazz piano in the cafe changed tunes. Bill Evans's "Waltz for Debby." The slow waltz filled the silence between them.

A third scene overlaps: the loan review committee from Chapter 3. A conference room where a twelve-person rectangular table nearly touched the walls. That space where "LTV 62%" — written by someone the previous week and then erased — lingered faintly on the whiteboard, where afternoons were spent flipping through binders, scrutinizing presale rate assumptions, and trying to read the project developer's eyes. Reviewing two hundred pages of documents to arrive at the conclusion of "conditional approval," then pressing a single stamp. The feel of the crimson ink pad bleeding onto paper. The 0.5 seconds before the stamp landed — a brief hesitation, the question is this really right? skimming beneath the surface of consciousness. Get it wrong and your name stays in the minutes. Get it badly wrong and the FSS (Financial Supervisory Service, Korea's financial watchdog) comes. Get it catastrophically wrong and the prosecutors come.

That was the crux. For 600 years, there had always been a human who bore responsibility for the judgment of capital allocation. The Medici branch manager who signed the bill of exchange. The Bank of England director who approved the bond subscription. The loan committee chairman who pressed his seal. The Moody's analyst who assigned AAA to a CDO. The LTCM partners who put on the leverage. When they failed — and as history shows, they failed often — at least the question "whose fault is this?" could be asked. Of Portinari, of Newton, of Mozilo, of Merton. There were names, there were faces, and they could be summoned to a courtroom.

In the world of AI agents, that question has no answer yet. Is the developer who designed the agent responsible? The operator who set the parameters? The audit firm that reviewed the code? Or the user who entrusted capital to the agent? Can the validators on the blockchain be held accountable? The governance token holders of the DAO that deployed the smart contract? When responsibility is distributed, it dilutes, and diluted responsibility becomes difficult to distinguish from no responsibility at all. Just as the savings bank's loan committee — the Busan Savings Bank committee from Chapter 3 — blurred individual responsibility within the form of collective decision-making. The tools have changed, but the danger created by the diffusion of responsibility repeats the same structure.

Junhyeok set down his coffee cup and said, "Still, transparency is progress. Transactions on the blockchain are visible to anyone. No one knew about Busan Savings Bank's 120 SPCs. At least on-chain, the flow of funds isn't hidden."

Yuna nodded. "That's true. But transparency doesn't make responsibility clear. Even if every transaction is public, the question remains: 'Who is responsible for the consequences of this transaction?' Transparency is a condition for audit, not a condition for accountability."

Place the two responses side by side and the same contour emerges. Yuna said, "The difference, if there is one, is speed and scale." Junhyeok said, "The tools are different, but the essence is the same — deciding where to put money." Where they differed was in how responsibility was borne when things went wrong. And that difference was not trivial.

The question that had been repeating for 600 years — "Can this person repay the money?" — was changing form. From who can repay to who should make the judgment of capital allocation. From a question of ability to a question of ought. When the question that began in the Medici scrittoio crossed six centuries, the question itself remained, but its weight had changed.


5

The cafe lights began to dim. Evening was approaching. The Han River beyond the window was darkening into winter night, and as the streetlights on the bridges switched on one by one, they drew long columns of light on the water. Yuna picked up her coat. Junhyeok slipped his iPad into his bag.

As the interview drew to a close, Junhyeok said:

"In the end, the tools change, but the question stays the same, doesn't it?"

Yuna set her coffee cup down. She gazed at the river beyond the window for a moment, then shook her head.

"It's not that the question stays the same — it's that the one asking the question is changing. And that's what's more frightening."

The gap between those two sentences may have been the heart of this book.

Junhyeok's perspective is one of continuity. From the Medici bill of exchange to Ethereum transactions, the essence is the same — the movement of capital. The tools have changed from parchment to blockchain, from wax seals to digital signatures, from couriers to fiber optics. Speed, scale, and precision have shifted, but the core act — deciding where to send money — is identical. From this perspective, AI agents are nothing to fear. They are simply faster, more accurate, more consistent tools.

Yuna's perspective is one of discontinuity. A change in tools and a change in the agent of judgment are qualitatively different transformations. Even when the Medici branch manager swapped his quill for a fountain pen, the person making the judgment was still the branch manager. Even after the Black-Scholes formula appeared, it was human traders who interpreted and applied the formula. Tools changed, but the agent of judgment was consistently human. But when AI agents begin performing judgment itself, that is not the evolution of a tool — it is the replacement of the subject. The subject of the sentence changes. From "a human judges" to "code judges." The verb is the same, but when the subject changes, the meaning of the sentence changes entirely.

Junhyeok countered. "Even if the subject changes, it's still a human who writes the code. I'm the one who designs the agent's objective function. I judge through the agent — I haven't given up judgment."

Yuna smiled. "Aladdin's developers say the same thing. That we're the ones who designed the system. But once the system starts operating in ways its designers didn't predict, does 'I designed it' serve as the basis for responsibility, or does it serve as the basis for absolution?"

Junhyeok was silent for a moment. Then he said, "Both, I think."

Yuna nodded. "That's why it's so hard."

Outside the window, a cruise boat passed beneath a bridge. The boat's lights spread like oil on the water, overlapping with the bridge's streetlights, brightening briefly before scattering.

"When my father worked at the bank," Yuna said, "he always paused for a moment before pressing his seal. Half a second, maybe less. A habit, he said. A final check — 'Is this really right?' He never skipped it in thirty years. When he retired, he told a junior colleague about it. 'Pause for half a second before you press the stamp. That half second will protect you.' The junior laughed. 'Half a second, in this day and age?' My father replied. 'Yes, half a second. A lifetime isn't enough to master it.'"

Junhyeok said, "An agent doesn't have that half second. There's no reason for it to pause."

"Is that efficiency, or is it a defect?"

"Efficiency. What happens in that half second isn't judgment — it's anxiety. Anxiety is noise that should be eliminated."

Yuna shook her head. "Maybe not. That half second might be the time it takes for conscience to operate. Anxiety might be part of judgment. Flight 447's autopilot had no anxiety. It simply disengaged and handed the aircraft back, and nothing stopped the hand that kept pulling back on the stick."

The two shook hands as they left the cafe. Yuna walked toward Yeouido; Junhyeok walked toward the subway station. The winter night wind off the Han River was cold.


6

This book offers no tidy answers. In private equity, conviction was a virtue; at savings banks, conservative judgment was a virtue. But measured against 600 years of history, both conviction and caution turn out to be only half the story.

Walking back to the hotel after leaving the cafe, I thought of the scrittoio in Florence.

Giovanni di Bicci de' Medici is writing a bill of exchange. Age sixty. Deep creases around his eyes, and a callus on the middle finger of his right hand from decades of gripping a quill. "At the Medici branch in Bruges, pay to the payee, within the term of usance." Commercial prose in a mix of Latin and Italian. Just before the ink descended from the tip of the quill onto the parchment, there would have been a momentary pause. Half a second, perhaps less. The instant the nib of the quill stopped one millimeter above the surface of the parchment.

Is the Bruges wool trade safe? If Duke Philip of Burgundy goes to war, wool prices will be unsettled. Is the political situation in Flanders stable? There had been rumors of a weavers' uprising. Is this counterparty's reputation trustworthy? The last payment was two weeks late. What might happen while the courier crosses the Brenner Pass? In winter the pass is blocked, and bandits appear. Dozens of variables intersected in his mind, and the intuition accumulated through experience weighed those variables, until at last the ink touched paper.

A red wax seal was pressed. The end of the sealing wax stick was held to the candle flame, and a red droplet fell onto the parchment. A momentary wisp of sweet wax smoke. The metal seal was pressed into the still-soft wax, and the six palle — the Medici crest — were left in sharp relief on the parchment. A faint click as the wax hardened settled into the silence of the room. The warmth of the wax cooled from his fingertips. Signature complete. When this bill of exchange crossed the Alps and arrived in Bruges, the branch manager there would hand over gold coins. Giovanni's name — the name Medici — was the guarantee.

That half second was the engine of 600 years of financial history.

The moment of judgment. The moment of pause. The instant the question "Is this right?" skimmed just beneath the surface of consciousness. That moment repeated itself in the Bank of England's board vote, in the instant a Chicago options trader peered at a valuation sheet, in the silence before the savings bank loan committee chairman pressed his seal. The forms differed, but the structure was the same. A human pauses, weighs, and decides.

1694, London. The moment of silence before Parliament voted to establish the Bank of England. Facing the unprecedented decision to issue currency on the credit of the nation, those legislators, too, must have experienced a half-second pause. That pause opened 330 years of central banking.

1973, Chicago. On the CBOE's dingy first trading floor, a converted smokers' lounge at the Chicago Board of Trade, the moment before a trader pulled a valuation sheet from his pocket, verified the theoretical price, and shouted his quote. He, too, must have hesitated for half a second — is the formula really right? Is mathematics better than the market? Within that hesitation he called his quote, and the derivatives era began.

1987, New York. Black Monday morning, the instant before a portfolio insurance manager at his trading desk pressed the sell button. Was it right to do what the model commanded? Wouldn't this sell order make the crash even worse? He hesitated for half a second, then sold anyway, and the mechanical selling fed the Dow's 22.6 percent plunge.

2008, New York. The instant before a Moody's analyst signed the document assigning a AAA rating to a mortgage pool. Was this rating really correct? Was the model's baseline assumption — that housing prices would not decline nationally and simultaneously — still valid? That half-second question was buried, and the subprime crisis arrived.

2011, Busan. The moment the loan review committee chairman raised his seal. In that room where the majority shareholder's wishes, the FSS's gaze, and employees' livelihoods all intersected. The half second before the crimson ink touched paper. If he could have said "no" in that half second, history might have been different. But there were reasons he said "yes" in that half second, too. Human judgment always exists within context.

March 2020, Seoul. The moment Seo Yuna chose to ignore Aladdin's sell recommendation and wait. The system said "sell," but her instinct said "wait." The half second of pause that became the courage to override the system's directive.

Every one of these moments shares the same structure. A human pauses, weighs, and decides. Sometimes wisely, sometimes foolishly. But always a human.

600 years later, 2026. In a data center somewhere in the world — the precise location cannot be pinpointed, since thousands of Ethereum nodes are distributed across the globe — an AI agent is executing transactions. Supplying USDC to a liquidity pool, borrowing against ETH as collateral, optimizing yield. A new block is produced every twelve seconds, and within it, capital crosses borders. No monitors, no pens, no candles. No sounds. Only the low hum of the data center's cooling fans — an unceasing wind, the sound of machines cooling themselves. Blue LEDs on the server racks blink in steady rhythm. The blinking resembles a heartbeat, but it is the heartbeat of something without a heart. Fluorescent light reflects off the epoxy floor, bathing the entire corridor in a bluish glow. No human shadow is visible. Dust-free air circulates in the draft of the cooling fans. In Giovanni's scrittoio, the air was filled with the greasy scent of candle smoke and the bitter tang of ink — here, there is no smell. No human warmth. No weight of judgment.

The agent's "computation" does the same work Giovanni did. The depth of the liquidity pool, the deviation of the oracle price, the expected return relative to gas fees, the correlations within the portfolio. Hundreds of variables are processed in milliseconds, and capital is sent along the path the objective function deems optimal. It is essentially the same computation Giovanni performed when he mentally weighed Bruges wool prices against the Burgundian court's ability to pay. Only the number of variables and the processing speed have changed.

One thing is different. Giovanni's half second — that pause — does not exist for the agent. Because there is no reason to pause. "Is this right?" is not a question included in the objective function. Whether something is efficient, whether the yield is high, whether the risk falls within acceptable bounds — these can be calculated. But "is this right?" is not an object of calculation. Rightness depends on context, and context is composed of things that cannot be reduced to numbers — history, relationships, intuition, and a sense of responsibility. It is something that flickers somewhere in human consciousness just before ink is laid on parchment. The pause that Seo Yuna's father never once skipped in thirty years before pressing his seal. What Junhyeok called "noise." What Yuna called "conscience."

The flicker that never went out in 600 years now stands, for the first time, before the possibility of being extinguished.

Is that progress, or is it loss? This book does not answer. All this book can do is measure the weight of the question.

If one thing has become clear, it is this: every time the methods of capital allocation changed, the old guard feared the new methods; the new guard mocked the old, and in the end, both were only partially right. Goldsmiths feared the Medici bill of exchange. Provincial bankers distrusted the Bank of England's banknotes. Veteran Chicago traders scoffed at the Black-Scholes formula. Central bankers dismissed Bitcoin as fraud. And now, regulators are wary of AI agents, and protocol builders view human gatekeepers as relics of a bygone era.

History shows that neither side was ever entirely wrong. It also shows that neither side was ever entirely right. The bill of exchange was an innovation, but the Medici bank collapsed. Black-Scholes was elegant, but LTCM went bankrupt. Bitcoin survived, but Terra/Luna evaporated. Every system looks perfect while it operates, and the moment it fails, it looks as though it was flawed from the start. And the new system, unable to fully replace its predecessor, takes a seat beside the old one. Even after the bill of exchange disappeared, banks survived. Even after Black-Scholes, human traders remained on the trading floor. Even after Bitcoin arrived, central banks endure. As the digital currency wars of Chapter 11 show, the historical pattern is not that the new replaces the old, but that the new sits beside the old, creating tension.

Yet there is one scene I keep returning to. The conference room of the loan review committee from Chapter 3, after the meeting ended and the lights went out. In that space where only cold coffee cups and binders remained, the fate of 68 billion won (roughly $50 million) had been decided. Five humans had wrestled with uncertainty for over two hours before arriving at the conclusion of "conditional approval," and a single stamp was pressed. A slow, expensive, bias-prone process. The pressure of the consortium and the temptation of performance targets were always present in that conference room. And that imperfect process was the engine that had sustained capital allocation for 600 years.

That engine is being replaced. Whether the replacement will be total, whether the old and new will coexist, or what form the transition will ultimately take — no one yet knows. What is clear is that this change has already begun, and that the 600-year trajectory now stands upon a single inflection point.

Look at the history of speed, and the contour of that transition comes into view.

The speed at which capital moves has increased millions of times over 600 years, and the window in which humans can intervene has narrowed at the same rate. Whether this is the progress of technology or the eviction of human judgment — they are merely two names for the same phenomenon. As the invisible hand moves ever faster, the time to ask whose hand it is grows ever shorter.

In the scrittoio in Florence, when Giovanni signed the bill of exchange, he did not know he was creating a grammar for the world 600 years hence. A single sheet of paper called a bill of exchange crossed the Alps, traversed the Mediterranean, and spanned the Atlantic, until at last it became a blockchain transaction. The principle of separating value from physical goods and storing it in relationships has not changed. The wax seal became a digital signature, the secret ledger became a distributed ledger, and the courier's leather document case became a fiber optic cable.

But one thing has changed. For 600 years, it was always a human who signed that piece of paper. It was a human who judged, hesitated, decided, and bore responsibility. Sometimes that judgment was superb — Giovanni's quiet screening protected the Medici bank for half a century. Sometimes it was catastrophically wrong — Portinari fell under the spell of the Burgundian court; Moody's stamped AAA on garbage; Bonin pulled back on the stick in a stall. But the agent of judgment had always been human. That continuity now stands, for the first time, before the possibility of being broken.

Through the window of a Seoul hotel room, the Han River was visible. The river was black in the winter night, and above it, the lights of the bridges repeated at even intervals. Banpo Bridge, Dongjak Bridge, Hannam Bridge — each bridge cast a different color of light that rippled long across the water. The Han River is a kilometer wide. Ten times the width of the Arno. Between the moment Giovanni laid ink on parchment in a stone building on the banks of the Arno and this winter night by the Han River lie 600 years. From the Arno to the Han. From parchment to blockchain. From quill to code. From the warmth of a wax seal to the cold blink of an LED.

The Han River was wide and dark and quiet. The Arno would have been narrow and warm and noisy. But the same question flowed over both rivers. Is it safe to send this money? Is this judgment right? Who bears responsibility?

In a data center somewhere, at this very moment, an AI agent is executing transactions. A new block is produced every twelve seconds, and within each block, capital moves. No one presses a seal. No one pauses. Without a half-second margin, with millisecond precision, capital flows without rest.

The invisible hand has not yet ceased its trading. Only whether that hand still belongs to a human — of that, we grow less and less certain.