Sydney, one night in 2012.
Someone was feeding bundles of cash into an Intelligent Deposit Machine (IDM) at a Commonwealth Bank of Australia (CBA) branch. Past midnight. The cameras were running, but no one was watching. The machine accepted the cash, credited the account, and printed a receipt. No questions asked.
The same thing happened the next day, and the day after. For three years, thousands of times. CBA's IDMs were convenient cash deposit devices installed across Australia — anyone could deposit any amount, at any time. They were equally convenient for criminal syndicates. Drug proceeds, smuggling funds, money that needed to stay off the radar — it all flowed into IDMs night after night.
In late 2012, an unrelated software update disabled the automatic Threshold Transaction Report (TTR) function. The system that was supposed to flag cash transactions exceeding 10,000 Australian dollars quietly stopped working. By 2015, the number of unreported transactions reached 53,506 — approximately 95% of TTRs during that period were missed. The funds estimated to have been laundered totaled 624.7 million Australian dollars.
In its statement of claim, AUSTRAC, Australia's financial intelligence agency, wrote: "CBA did not assess the money laundering and terrorism financing risks before the IDMs were deployed." Not a single red flag was raised in three years. Criminal syndicates had been systematically exploiting the IDMs through structuring, rapid cash cycling, and account farming. After CBA discovered the coding error, it took an additional two years to fully fix it.
CEO Ian Narev dismissed 53,506 legal violations and more than 600 million dollars in laundered criminal funds as "a single software coding error." The fine was 700 million Australian dollars, plus a separate 2.5 million Australian dollars for AUSTRAC's litigation costs. It was the largest civil penalty in Australian corporate history.1
CBA is both a failure of RegTech and the most powerful proof of its necessity. Had the anti-money-laundering detection system functioned properly, the 53,506 violations would never have occurred. Technology did not create the problem. The absence of oversight did. Someone should have checked the code. Someone should have asked why no alarms were sounding. For three years, no one asked.
This story makes two points at once. First, technology can be a tool of regulation. Second, when that tool fails, the failure is amplified on a scale far larger than human error alone. A single coding error enabled tens of thousands of crimes. It was the moment when the "absence of oversight" that Horner discovered in Chapter 2 was translated into digital form. In 1834 Lancashire, four inspectors could not monitor thousands of factories. In 2012 Sydney, a single software update disabled monitoring of tens of thousands of transactions. Only the scale changed; the structure remained the same. Automation scales efficiency. It scales errors, too.
In Part 3, we saw the invisible judges — algorithms that reject loans in three seconds, systems that assign different risk scores by race, workers fired without a safety net. At the end of Chapter 9, we asked: if there is no safety net, who will weave one? In this chapter, we examine attempts to regulate technology with technology. And we ask what those attempts need in order to become answers.
1. The Invention of the Sandbox — Regulatory Sandboxes
2014, London. The Financial Conduct Authority (FCA) quietly opened a new office in one wing of its Canary Wharf headquarters. The Innovation Hub. Its initial staff numbered five.
The problem the FCA faced was straightforward. For a fintech startup to launch an innovative financial service, it first had to obtain full authorization — a process that was complex, costly, and slow. Startups ran out of funding waiting. Others gutted their most innovative features to meet authorization requirements. Entering the market without authorization was illegal. Both paths were blocked — innovation on one side, protection on the other.
FCA CEO Martin Wheatley declared at the time: "It is our duty as a regulator to be on the right side of progress. We want to create a space for the best and most innovative businesses to enter the financial markets." In its first year, the Innovation Hub received more than 700 inquiries. The FCA saw in that number the energy of the market.
The FCA's solution was a sandbox. Like a child playing in a sandbox — a structure for trying new things, failing, and learning in a space isolated from the real world. Director of Strategy and Competition Christopher Woolard defined the program as follows: "A safe space where businesses can test new ideas without immediately bearing the full consequences of normal regulation." And he added with candor: "This is as much an experiment for us as it is for businesses." It was the moment a regulator publicly admitted that it, too, was part of the experiment.
In 2016, the world's first regulatory sandbox announced its first cohort. Sixty-nine firms applied, thirty-one were approved, and twenty-four entered actual testing. Firms tested real financial services on a limited number of consumers for a limited period (six to twelve months). The FCA monitored in real time. The key was bidirectional learning. Firms tested products in a regulated environment; the FCA learned how new technologies actually worked. Ed Maslaveckas, founder of Bud, a fintech startup that participated in Cohort 1, captured the essence of the structure in a single sentence: "It gives innovators a chance to show that their innovation is consistent with the spirit of regulation, and it gives the regulator a chance to see use cases that weren't imagined when the rules were written — creating a collaborative environment."
It was a third way — neither pre-emptive regulation (the EU approach) nor post-hoc regulation (the US approach). Pre-emptive regulation prohibits or permits everything in advance, risking the stifling of innovation. Post-hoc regulation responds only after problems erupt, leaving consumers to absorb the damage. The sandbox lets you try first in a controlled environment and builds regulation from what you learn. In October 2017, the FCA published a "lessons learned report" drawing on its experience with Cohorts 1 and 2. It was a rare instance of a regulator publicly reporting what it had learned from its own experiment. Woolard announced the results: "We found that the risk of not opening the market to innovation was greater than the risk of taking the leap. Ninety percent of Cohort 1 firms went on to market."
The numbers speak for themselves. By 2025, the FCA sandbox had processed more than 630 applications. Investment in participating firms was 6.6 times that of comparable peers, and their survival rate was 25% higher. More than ninety-five countries benchmarked this model. In 2019, the FCA launched the Global Financial Innovation Network (GFIN), bringing together regulators from over fifty countries. It was the beginning of a "global sandbox" in which firms could test simultaneously across multiple jurisdictions. A feedback loop in which one country's experiment reshapes another's regulation — a learning network among regulators had formed.2
Yet the FCA's lessons learned report did not record only successes. The failure patterns were equally clear.
Some firms in Cohort 1 met regulatory requirements during their sandbox period but saw consumer complaints surge after full launch. "Consumers in a test environment" were not the same as "consumers in a real market." The risk profiles of services chosen by a limited number of early adopters differed from those of services targeting general consumers with lower digital literacy. The FCA named this the "internal validity limitation of the sandbox" and strengthened consumer protection plans from Cohort 2 onward.
Another pattern: startups entered the sandbox with innovative ideas that earned approval, but twelve months later, upon graduation, the pressure to attract investors and scale up led some to compromise the consumer protection principles they had started with. To close this gap, the FCA introduced a mandatory twelve-month post-graduation monitoring period. Experimentation was permitted, but the aftereffects of experimentation were now tracked as well.
Meanwhile, the supervisors, too, had begun using AI. SupTech — Supervisory Technology — refers to the technology that regulators deploy for supervisory purposes. As of 2025, 197 financial authorities across 140 countries had deployed at least one SupTech solution. That figure had surged 3.6 times in three years, up from 54 authorities in 2022. The RegTech market as a whole is projected to grow from 19 billion dollars in 2025 to 77 billion dollars by 2034.3 Just as Leonard Horner could not monitor Lancashire's cotton mills with four inspectors in 1834, today's financial supervisory agencies face a structural impossibility: overseeing tens of millions of simultaneous decisions by hundreds of AI systems with human staff alone. The response is to fill the speed gap created by AI with AI. A paradox hides within.
2. CEO Choi's Sandbox
CEO Choi first learned about regulatory sandboxes in 2024. A thirty-two-year-old co-founder running an AI medical imaging startup in Seongsu-dong, Seoul. She had raised 500 million won (approximately 370,000 dollars) in seed funding, but the legal interpretation of whether her AI diagnostic software counted as a "medical device" or "software" varied from one law firm to the next. She commissioned legal opinions from three firms and received three different answers. The cost was 12 million won — 2.4% of her seed funding, equivalent to one month's salary for her five developers.
On CEO Choi's desk sat a thick binder. It contained printouts of nineteen AI-related bills that had been pending concurrently in the National Assembly before the AI Basic Act (formally, the Framework Act on the Development of Artificial Intelligence and the Establishment of Trust) passed. In the financial sector alone, there were forty AI-related guidelines. Some bills contradicted each other; some guidelines used different names for the same technology. The absence of law was not freedom but lawlessness. At investor meetings, she faced the same question over and over: "What about legal risk?" The law itself was unclear, so she could not answer.
South Korea had also introduced a regulatory sandbox in 2019 through the Special Act on Financial Innovation Support, benchmarking the FCA model. As of December 2024, the cumulative number of designations had reached 500, with a record 436 new applications filed that year alone. The number itself reflected the scale of regulatory uncertainty — because a sandbox is, by definition, "a system that temporarily permits services whose legality is unclear under existing regulations."
CEO Choi applied to the sandbox. Her AI imaging diagnostic software demonstrated 12% higher sensitivity than board-certified radiologists in detecting pulmonary nodules. The technology worked. The problem lay outside the technology. Writing the application alone took two months. Technical documentation, a consumer protection plan, a risk assessment report, an exit strategy. More legal fees.
She spent twelve months inside the sandbox. The technology worked. Validation was completed on imaging data from 3,000 patients. But when she applied for formal authorization after graduation, the required documentation was three times longer than what she had needed to enter the sandbox — like passing a graduation exam only to be told to retake the entrance exam. This was the phenomenon known as the "post-sandbox wall." A structure that permits experimentation but does not pave the road beyond it. South Korea's maximum sandbox designation period was four years — longer than the UK's six to twelve months — but the correspondingly slower pace of amending underlying regulations made the post-sandbox wall even higher. As large corporations also began utilizing the sandbox, criticism emerged that a system designed for startups was becoming a preemptive positioning tool for major institutions.
Investors' calls began to taper off. Half of the 500-million-won seed funding had already been spent. Developer salaries, server costs, legal advisory fees. Eight months of runway remained. To raise a Series A, she needed formal authorization; to obtain formal authorization, the enforcement decree had to be issued; and the enforcement decree was still in the legislative notice period. "We proved that our technology can save lives, but no one will tell us whether we're allowed to use it."
In December 2025, the draft enforcement decree for the AI Basic Act was announced for legislative notice. CEO Choi opened and read the notice. Medical AI was still classified as "high-risk AI." High-risk AI operators were required to implement "trustworthiness assurance measures."
But the specific criteria for "trustworthiness assurance measures" had been delegated: "to be determined by ordinance of the Ministry of Science and ICT." The enforcement decree had arrived, but the decree itself delegated to a ministerial ordinance. The ordinance did not yet exist.
On January 22, 2026, the AI Basic Act went into effect. What specific obligations applied to CEO Choi's company remained undetermined. She opened a spreadsheet and recalculated her runway. Six months. It seemed the money would run out before the ordinance was issued.
Still, the reason CEO Choi did not give up was simple. During hospital testing, a radiologist had told her: "If we'd had this software, we would have caught a nodule we missed." Once you witness the moment technology touches a human life, it is hard to walk away.
3. Behind the Green Light — Compliance Theater
A mid-sized bank in London's financial district. A compliance officer checks the RegTech dashboard. All indicators are green. Reports are auto-generated and submitted to the supervisory authority. Green lights mean reassurance — a signal that regulations are being followed.
But when an auditor from the European Banking Authority (EBA) arrives, no one can explain what lies behind those green lights. Because no one has verified whether the algorithm "correctly" implements the regulation. The EBA documented this phenomenon in an official report. Controls had been left "untested, unmapped, and not understood." As RegTech was layered on top of existing compliance silos without integration, it was producing programs that appeared robust but collapsed under close scrutiny.
The name the EBA gave it: Compliance Theater. On stage, a flawless performance unfolds. The audience (the supervisory authority) applauds. Backstage, nothing works.
How are green lights produced? The mechanism is simple. A RegTech system implements a specific provision of a specific regulation in a specific way. Once implementation is complete, a checkbox is filled. Once all checkboxes are filled, the dashboard turns green.
The problem is the gap between what the regulation intended and what the system implemented. The regulation says: "Report suspicious transactions." The system translates this as: "Report cash transactions exceeding 10,000 dollars." Structuring — repeated transactions below 10,000 dollars — stays off the system's radar. The checkbox is green. The regulation is being violated.
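The gap fits in a few lines of code. Below is a minimal sketch in Python; the transaction records, window length, and function names are invented for illustration, only the 10,000 threshold comes from the letter of the rule, and none of this is CBA's actual system.

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10_000          # the letter of the rule: single deposits above this
WINDOW = timedelta(days=1)  # illustrative aggregation window, not a statutory value

def naive_flags(transactions):
    """What the checkbox implements: flag single deposits above the threshold."""
    return [t for t in transactions if t["amount"] > THRESHOLD]

def structuring_flags(transactions):
    """What the regulation meant: flag accounts whose sub-threshold deposits
    add up past the threshold within a rolling window."""
    by_account = defaultdict(list)
    for t in sorted(transactions, key=lambda t: t["time"]):
        by_account[t["account"]].append(t)
    flagged = set()
    for account, txs in by_account.items():
        start, total = 0, 0
        for end, t in enumerate(txs):
            total += t["amount"]
            while t["time"] - txs[start]["time"] > WINDOW:
                total -= txs[start]["amount"]
                start += 1
            if total > THRESHOLD:
                flagged.add(account)
    return flagged

# Five deposits of 9,900 in one night: invisible to the naive check,
# caught by the aggregated one.
midnight = datetime(2012, 11, 1, 0, 30)
deposits = [{"account": "A-1", "amount": 9_900,
             "time": midnight + timedelta(minutes=20 * i)} for i in range(5)]
assert naive_flags(deposits) == []             # the dashboard stays green
assert structuring_flags(deposits) == {"A-1"}  # the pattern the rule was written for
```

Both functions fill the same checkbox. Only the second asks the question the regulation was written to answer.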
In CBA's case, structured transaction detection failed precisely in this gap. The algorithm met the letter of the regulation but missed its spirit. And no one asked: what is our system failing to detect?
This risk is reality in South Korea as well. In the financial sector alone, there are forty AI-related guidelines. The Electronic Times (Etnews) called it "the testing ground for banks' regulatory response capacity." The moment the goal shifts from complying with regulation to appearing to comply, RegTech ceases to be a shield and becomes a mask.
CBA's lesson echoes again here. And CBA was only the beginning. The UK's digital challenger bank Starling Bank had an automated sanctions screening system that checked only part of the UK Treasury's sanctions list, not all of it — for seven years. Its customer base had surged from 43,000 to 3.6 million, but its financial crime controls had not kept pace with growth. FCA Director of Enforcement Therese Chambers said: "Starling's financial sanctions screening controls were shockingly lax. It left the financial system wide open to criminals." The fine was 28.96 million pounds.4
Fintech company Revolut made an even more dramatic choice. In 2018, when its automated anti-money-laundering monitoring system generated too many false positives, the company simply switched it off. The watchdog barked too much, so they muzzled it. During the period from July through September when monitoring was suspended, illegal transactions may have passed through undetected. Revolut did not inform the FCA. Lithuania's central bank imposed a 3.5-million-euro fine in 2025.
A pattern emerges. A single coding error scaled an error across 53,506 automated decisions (CBA). A system diligently screening an incomplete list sent a pass signal for seven years (Starling). Monitoring was suspended entirely because of too many false positives (Revolut). An individual human's mistake affects a single judgment. Poorly designed RegTech repeats the same error across thousands, tens of thousands of automated decisions. The democratization of error. And until that error is discovered — three years in CBA's case, seven years in Starling's — no one knows. Because the green light is on.
Harvard law professor Lawrence Lessig wrote in 1999: "Code is Law."5 The insight that software design itself determines what is possible and impossible. In the age of AI, this logic cuts more sharply. If AI monitors AI, who monitors the monitoring AI? UC Berkeley's Stuart Russell confronted this limitation head-on: "Deep neural networks cannot have rules directly embedded in the code. Models cannot be engineered to demonstrate compliance with regulations."6 Here lies the fundamental limit of regulating code with code. The moment Lessig's "Code is Law" framework fractures before AI.
The more sophisticated RegTech becomes, the more it reinforces automation bias — the confidence that "the system is running, so everything is fine." Human operators blindly following algorithmic outputs even when those outputs are wrong: this is not a failure of technology but a failure of overconfidence in technology. The fact that CBA's IDMs had an anti-money-laundering detection system built in bred the belief that "technology is handling it," and that belief created a culture in which no one asked questions for three years.
The deeper problem is opacity. RegTech vendors refuse to disclose the internals of their compliance algorithms, citing intellectual property protection. Regulators cannot verify whether algorithms correctly implement regulations. When errors occur, the chain of accountability breaks. The same structural problem we saw in Chapter 8 — Northpointe's "trade secret" defense of the COMPAS algorithm — is being reproduced inside regulatory tools themselves.
Here, one of this book's core patterns surfaces again. Incumbent capture. Large financial institutions can absorb RegTech costs, but for small firms, compliance infrastructure is an existential threat. In the EU, the annual compliance cost for high-risk AI is 29,277 euros per unit, with certification costs of an additional 16,800 to 23,000 euros.7 For large corporations, this is an operating expense; for startup founders like CEO Choi, it is a significant share of seed funding. Rather than democratizing compliance, RegTech concentrates compliance capacity in large institutions. The regulatory tool itself reproduces the very power asymmetry that regulation was meant to prevent. Just as Roman senators monopolized enforcement authority over agrarian laws, technology firms convert RegTech into a tool of self-defense. Singapore's release of AI Verify as open source is one answer to this problem, but making the tool free does not bridge the gap in specialized personnel and infrastructure. The EU AI Act's requirement for third-party conformity assessments of "high-risk AI systems" is another answer.
RegTech is not the answer. It is a necessary but insufficient condition.
4. Two Paths — And South Korea's Middle Ground
Around the same time, an entirely different experiment was underway on the other side of the world.
Singapore. A city-state of 5.9 million people. It cannot leverage 500 million consumers to impose regulation like the EU, nor can it enforce directives with the uniformity of party command like China. The strategy this small nation chose was frameworks instead of legislation, testing tools instead of penalties, partnerships instead of regulation.
In 2022, it launched AI Verify, the world's first AI governance testing framework. A tool that allows companies to self-assess whether their AI systems comply with eleven international AI ethics principles. Rather than issuing certifications, it produces "test reports," providing evidence that companies have transparently evaluated their own AI. Because the tool itself is open source, the cost approaches zero. Not compelled transparency but incentivized transparency. Google, Microsoft, and Amazon joined as pilot partners — the paradox of global Big Tech adopting the AI governance tool of a city-state of 5.9 million before anyone else did.
Minister for Communications and Information Josephine Teo articulated Singapore's philosophy: "Our interest is governance, not regulation." And she put it more directly: "Before we become very clear about what the outcomes are that we want to achieve, introducing regulations could actually be counterproductive." She offered a concrete example: "It depends on use. If it's being used for elections, yeah, we'll have rules. But if it's not being used for elections, frankly, in Singapore, go ahead." A use-case-centered approach that stood in sharp contrast to the EU AI Act's comprehensive risk classification.
Without passing a single law, Singapore updated its governance to address two generations of AI technology in eighteen months — from the Generative AI Governance Framework in 2024 to the world's first Agentic AI Governance Framework in January 2026. During the same period, the EU was taking the first steps of AI Act implementation, and CEO Choi was waiting for the enforcement decree. In February 2025, at the AI Action Summit in Paris, the contrast was dramatic. While the EU delegation explained the AI Act's enforcement architecture, Singapore's delegation presented a project with Japan's AI Safety Institute to validate LLM safety guardrails in ten languages. Where the EU arrived on the world stage with legislation and China with ministerial regulations, Singapore made its presence felt through "testing infrastructure."
Singapore's approach has also drawn criticism for sacrificing depth in exchange for speed. The frameworks do not specify concrete technical requirements, and there is no independent verification mechanism for test results. In a structure where companies "test themselves and report themselves," the incentive to withhold unfavorable results is ever-present.
On the opposite end stood the EU. On August 2, 2025, the EU AI Office began full operations. It was the enforcement body for the EU AI Act, the product of a three-year legislative process. Fines for violations involving prohibited AI practices could reach 35 million euros or 7% of global revenue. Democratic legitimacy grounded in the consensus of twenty-seven nations. Yet as of March 2026, no public enforcement actions have been taken. The law has been made, but the institution to enforce it is still preparing.8
Soft law and hard law. Which is right? The answer is not simple. The optimal regulatory model varies with scale, speed, and cultural context. Singapore's agility is possible because it is a city-state of 5.9 million people. The EU's comprehensive legislation carries legitimacy because it passed through the democratic consensus of twenty-seven nations. Each model comes at a price. Singapore's price is the absence of legal enforceability — in a structure that depends on corporate "goodwill," there is no guarantee that frameworks will be honored when interests conflict. Evidence that self-regulation has actually prevented AI harms remains scarce. The EU's price is speed — during the three years it took to legislate the AI Act, the technology had already moved on to the next generation.
South Korea stands somewhere between these two models. The AI Basic Act passed the National Assembly on December 26, 2024, with 260 votes in favor, zero against, and 4 abstentions out of 264 members present. Behind the apparently unanimous numbers lay criticism from civil society: "Both ruling and opposition parties aligned in a direction that serves the interests of AI technology companies, and no member of the National Assembly fought to protect citizens' safety and human rights from AI risks." Ideological barriers were at work here, too. The same function as the mos maiorum (ancestral custom) in Rome that we saw in Chapter 1 — where it is not the content of regulation but the very attempt to regulate that is attacked as "obstructing innovation." Administrative fines were deferred by one year, taking effect from January 2027. The draft enforcement decree was only announced for legislative notice in December 2025. The law was passed, but more time was needed before the law would actually operate.9
CEO Choi, who in Chapter 6 had sensed an opportunity for overseas expansion when she observed the EU AI Act's "Brussels Effect," now stood before her own country's regulatory labyrinth. Forty guidelines. Some called it an "AI system," others called it "intelligent information technology," still others an "automated decision-making system" — three names for the same technology. Singapore's tool was one; the path was clear. South Korea's guidelines numbered forty; the path was opaque.
A story circulated in the industry about a Korean startup founder who had sent a question to a contact at Singapore's Infocomm Media Development Authority (IMDA): "We want to supply our AI medical diagnostic software to hospitals in Singapore. What do we need to do?" A reply came three days later: "Use the AI Verify framework to check your system and submit the test report. The tool is free." The same question, addressed to South Korea's Ministry of Food and Drug Safety, faced no statutory deadline for a response. Some founders waited two months; others eventually submitted a formal inquiry through a law firm. It was not the content of regulation but the manner of accessing regulation that formed the barrier.
CEO Choi could not turn off her monitor at two in the morning. She drafted a message to her co-founder, then deleted it. "I don't know what we did wrong." They had done nothing wrong. The problem was the absence of a clear path. She contacted a radiologist she had interned under when she first started the company. "If you wanted to use our software, how would you go about it?" The doctor replied: "I'm not sure either. I think we'd need to ask the IRB (Institutional Review Board), but the IRB said they don't have guidelines on how to classify AI software." The technology worked. The clinical efficacy was proven. There were people who wanted to use it. And yet it could not be used. Not because there was no path, but because there were too many.
5. The Speed Gap, and a Choice
The pattern we have traced since the beginning of this book resurfaces here — the time gap between technological change and institutional response. From Black-Scholes derivatives to the Dodd-Frank Act: thirty-seven years. From Bitcoin to national regulations: more than eighteen years. From generative AI to the EU AI Act: two years.
Thirty-seven years to eighteen, eighteen to two. Looking at the pattern alone, it seems humanity is learning. Brooksley Born's attempt to regulate derivatives, which we saw in Chapter 3, came in 1998. Ten years later, the global financial system collapsed, and only then was the Dodd-Frank Act passed. Crises accelerate regulation; whether humanity has truly learned is less clear.
But the pace of AI's evolution is accelerating, too. From GPT-4 to o3, the cycle of change is measured in months. If technology accelerates faster than regulation can, the gap does not close. Just as the Roman Senate remained silent while the latifundia spread over 130 years, today's parliaments deliberate for years while AI replaces itself every few months.
The cycle that adaptive regulation proposes looks like this: real-time monitoring, automatic alerts, conditional adjustment, post-hoc verification, learning. The feedback cycle shrinks from years to days. From reactive response to proactive detection. Not punishing violations after they occur, but detecting the possibility of violations in real time and issuing alerts. What if CBA had had such a system? An automatic alert would have fired the first week TTR reporting stopped. AUSTRAC would have issued a conditional remediation order immediately. The problem would have been resolved in three weeks, not three years. It would have stopped at a few hundred cases, not 53,506.
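The "automatic alert" step is not exotic technology. Here is a minimal sketch, with hypothetical names and an illustrative one-week tolerance: the monitor watches for the absence of expected reports, which is precisely the signal no one was watching at CBA.

```python
from datetime import datetime, timedelta

MAX_SILENCE = timedelta(days=7)  # illustrative tolerance, not a real supervisory rule

def reporting_heartbeat(last_report_time, now, average_daily_reports):
    """Alert on the absence of expected reports, not only on bad ones.
    A channel that normally files dozens of reports a day and suddenly
    goes silent is itself a red flag."""
    silence = now - last_report_time
    if average_daily_reports > 0 and silence > MAX_SILENCE:
        return (f"ALERT: no threshold reports for {silence.days} days "
                f"on a channel averaging {average_daily_reports} per day")
    return None

# The IDM channel had been filing TTRs daily; after the update, zero.
print(reporting_heartbeat(
    last_report_time=datetime(2012, 11, 5),
    now=datetime(2012, 11, 20),
    average_daily_reports=40,   # hypothetical volume
))
# The alert fires in week one, not year three.
```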
But for this structure to work in practice, what is needed is not a technology choice but a governance choice. Faced with the same AI technology, companies are walking diametrically opposite paths. Walmart is maintaining its workforce of 2.1 million while investing roughly one billion dollars in upskilling.10 Klarna cut 40% of its staff, then watched service quality decline. The pattern from Chapter 9 — the same technology can either replace workers or complement them. The technology is identical. Only the governance differs. Who decides. Whose interests are reflected. Who bears the cost.
6. The Mirror — Technology Is a Tool
CBA's Intelligent Deposit Machine was technology. A single coding error turned the tool into a conduit for crime. The FCA's regulatory sandbox was also technology. Yet Monzo, often cited as the sandbox's emblematic "graduate," was fined 21.09 million pounds by the FCA in July 2025 for anti-money-laundering failures. Its automated onboarding system checked only whether addresses matched a UK postcode format; it did not verify actual addresses — customers who entered "Buckingham Palace" and "10 Downing Street" as their addresses passed through the system. The fact that a company nurtured in an innovation-friendly environment could fail at basic regulatory compliance proved that sandboxes are not a panacea. Experiments offer the opportunity to learn, but if learning is not internalized, experimentation becomes not a license but a delusion.11
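The distance between checking a format and verifying a fact is short enough to write down. A simplified sketch follows, with a loosened postcode pattern; the addresses come from the FCA's account of the case, but the code is an illustration of the failure mode, not Monzo's system.

```python
import re

# A simplified UK postcode shape. A match proves only that the string
# looks like a postcode, not that the applicant lives anywhere near it.
UK_POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", re.IGNORECASE)

def format_only_check(claimed_address: str, postcode: str) -> bool:
    """What a shape-only onboarding check amounts to: the address text
    is never examined at all."""
    return bool(UK_POSTCODE.match(postcode.strip()))

# Famous landmarks sail through, because the check never asks whether
# this is plausibly the applicant's residence. Real verification needs
# evidence (a registry lookup, proof of address), which no regex supplies.
assert format_only_check("10 Downing Street", "SW1A 2AA")
assert format_only_check("Buckingham Palace", "SW1A 1AA")
```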
Technology itself is neither good nor evil. Who uses it, and within what governance framework, is what decides.
The two cross-cutting patterns revealed in this chapter confirm this. Incumbent capture — the structure in which large institutions convert RegTech into a tool of self-defense, and compliance costs function as barriers to entry for small firms. Ideological barriers — the structure in which technological solutionism ("technology will solve the problem") blocks the legitimacy of institutional intervention. The fact that CBA's IDMs had an anti-money-laundering detection system built in bred the belief that "the system will handle it," and that belief enabled three years of neglect. It is the technological version of the same logic by which Alan Greenspan believed "self-interest is the safeguard." As we saw in Chapter 3, he ultimately had to confess: "I have found a flaw."
The preconditions for RegTech to work lie outside technology. Human oversight, transparent algorithm audits, regular stress testing, and a culture that asks what lies behind the green lights. The FCA's publication of its "lessons learned report" is one example of this culture — the institutional courage to share failures rather than conceal them. None of this is technology; it is institutions, and institutions arise from consensus. What was needed to discover CBA's coding error was not better AI but someone who asked, "Why aren't the alarms going off?" When the EBA exposed Compliance Theater, it was not a more sophisticated algorithm but an auditor who looked behind the green lights.
In Chapter 8, Mr. Park, denied a loan by AI, never learned the reason. The AI Basic Act imposes an explanation obligation on AI developers, but not on banks that use AI. If AI monitors Mr. Park's loan denial — is there a structure in which someone like Mr. Park can participate in the design of that monitoring algorithm? CEO Choi's startup proved its technology in the sandbox but stands before the wall of formal authorization. The FCA's sandbox, Singapore's AI Verify, South Korea's Innovative Financial Services — all are top-down selection structures. The regulator decides who gets to experiment. The voices of "those who have been pushed aside" still cannot reach the institutions.
In the end, the problem is not technology. It is consensus. The FCA sandbox was a consensus between regulator and firms. Singapore's AI Verify was a consensus between government and industry. The EU AI Act was a consensus among twenty-seven national parliaments. But in all of these consensuses, voices are missing — the victims of money laundered through CBA, those harmed by the illicit funds that slipped through Starling's flawed sanctions screening, founders like CEO Choi who have lost their way in the regulatory labyrinth. People who could not take a seat at the table of consensus. How do we build that consensus?
In the next chapter, we will see the answer that Audrey Tang of Taiwan discovered. The story of how removing the reply button made the trolls disappear, and citizens began to reach consensus. The story of a farmer who participated in South Korea's deliberative poll on the Shin-Kori nuclear reactors and stayed up late reading a hundred-page briefing packet. If technology is a tool, can consensus, too, be designed?