Vol. 4 — Slow Justice, Fast Order

Chapter 7 — Three Seconds to Judge, Three Years to Regulate


1. What "Three Seconds to Judge" Means

AI loan-screening systems decide within seconds.

Upstart, an American fintech company, feeds more than 1,600 data points into its credit-assessment model (as disclosed in its 2020 IPO filing; later expanded to over 2,500).6 Education, employment history, transaction patterns — all of it. According to the company, the automated processing rate was approximately 70 percent at IPO and has since surpassed 90 percent, meaning most loan applications are processed without human intervention. Within seconds, "approved" or "denied" appears on screen. The decision is final. Another AI lending company, Zest AI, self-reported a 20 percent reduction in default rates and a 25 percent increase in approval rates. These figures, however, are the company's own claims and have never been independently verified.7

Filing an appeal takes months to years. CFPB (Consumer Financial Protection Bureau) complaints are processed within fifteen to sixty days, but structural change — regulatory guidance, class-action certification, legislation — takes five to ten years.

Seconds versus years. This is the temporal asymmetry of the AI age.

The asymmetry is not an abstraction. It governs people's daily lives.


Five a.m., Yeongtong-gu, Suwon, Gyeonggi Province, South Korea. In a one-room apartment, two portable battery packs sit charging on a power strip. Driver Lee — a composite character, like Mr. Park in Chapter 1 — pulls a padded jacket over his pajamas and opens the front door. He starts the motorcycle parked in the lot. The cold engine sputters, the sound bouncing off the apartment walls. He slots his smartphone into the handlebar mount, and the delivery app has already pushed the first call.

Thirty-four years old. Net monthly income roughly 2.8 million won — about $2,100. He spends ten to twelve hours a day on the motorcycle. The dispatch algorithm governs his workday, recalculating routes every 0.3 seconds. Where to go, how many minutes until arrival, in what order to deliver. The algorithm decides. Driver Lee follows.

A Korean delivery worker who participated in an academic study testified:

"Artificial intelligence calculates the quantity and route together. It means that the AI sets up for whom, how much volume, and which route to allocate."

— Anonymous delivery rider, Frontiers in Public Health (2025)

For whom, how much, which route — the algorithm decides everything. For Driver Lee, the employer is invisible. The app assigns work, the app evaluates, the app suspends accounts. "I don't even know who my employer is," he said in a 2024 interview.

When rain delays a delivery, a one-star rating appears. Between safety and ratings, there is no room for choice. Against an algorithm that reroutes every 0.3 seconds, no channel for appeal exists. The wait is infinite.

Here is the paradox. South Korea's MyData financial data-portability service had approximately 117.87 million enrollments as of February 2024 (including duplicates).8 Through sixty-nine licensed operators, financial data is seamlessly integrated. Driver Lee's credit card transactions, insurance policies, national pension contributions — everything is transparently linked. Yet nowhere is there a legal obligation to explain how algorithms read and judge that data. According to a 2024 report by the Financial Stability Board (FSB), more than forty jurisdictions worldwide have issued policy responses related to AI in finance, but most remain principle-level guidelines with no binding legal force.9 Transparent data, opaque judgment.

And on the other side of the labor market, another three-second judgment operates. The testimony of Noh Sang-beom, a fifteen-year veteran IT headhunter:

"Everyone who works sitting in front of a computer is in danger. From the employer's perspective, paying 300,000 won a month for AI for a year is far more cost-effective than hiring a junior developer at an annual salary of 50 million won."

— Noh Sang-beom, 15-year IT headhunter (KBS Investigative Report 60 Minutes, 2026)

Fifty million won a year versus 300,000 won a month. An employer's cost calculation takes three seconds. The junior developer on the receiving end of that calculation may need three years to find another career.

Counterarguments exist. AI loan screening and AI interviews were not introduced merely for corporate convenience. Traditional human evaluation had structural problems of its own. A bank teller's first impression, an interviewer's bias toward certain alma maters, a loan officer's mood — such subjective factors dominated conventional assessments. In practice, some fintech companies that adopted AI-based lending opened credit access to populations that traditional banks had denied: immigrants, the self-employed, recent graduates with thin credit files. Research suggests that people previously excluded from conventional finance were, for the first time, able to enter the credit market thanks to AI. AI-powered interviews were introduced with a similar rationale — to reduce recruiters' unconscious biases and evaluate candidates on substance rather than appearance or mannerisms. The possibility that AI could be a tool for fairness rather than just speed was not baseless hope.

But for possibility to become reality, one condition must be met: verification. The problem is not AI's speed per se but the absence of oversight at that speed. When fast judgment produces fast harm, the capacity to detect and correct that harm must be equally fast. The fact that Upstart uses 1,600 data points does not make its judgments fair. The problem is that no one is checking, at anything close to the same speed, whether those judgments operate unfairly against someone. AI may open new access or create new exclusion — what determines the difference is not the algorithm but the structure of oversight.

Judgment is fast. Recovery is slow.


2. The Asymmetry of Speed — Three Cases

May 7, 2016, Florida. A Tesla Model S was traveling in Autopilot mode at 119 kilometers per hour. The system failed to distinguish the white side of a tractor-trailer crossing ahead from the bright sky. Without warning or braking, the car ran under the trailer. The driver, Joshua Brown, was killed.

"This is only level two of a zero to five scale on automation... with level two by definition driver engagement is necessary."5

That was the diagnosis of NTSB Chairman Robert Sumwalt. Level 2 on a scale from 0 to 5 — a stage that, by definition, requires driver engagement. Yet Tesla had named the system "Autopilot," borrowing from aviation's automatic flight-control systems.

Marketing overstated the technology. Overstatement bred complacency. Complacency killed a person. The judgment at 119 kilometers per hour took milliseconds. The NTSB report on the crash took sixteen months.


When judgment fails on the road, people die. But when judgment fails in places no one can see, people are pushed out slowly and quietly.

If Tesla exaggerated through marketing, Amazon amplified bias through data.

Amazon began developing an AI-based resume screening system in 2014. It was trained on the previous decade's hiring data. The problem was that those ten years coincided with the technology industry's male-dominated era. "Successful applicants" were predominantly men, so the system learned that male-patterned resumes were good resumes.

Jeffrey Dastin of Reuters reported the findings. The AI penalized resumes containing the word "women's." Captain of a women's chess club — a credential demonstrating leadership and strategic thinking. To the AI, it was a minus. Captain of a women's soccer team, graduate of a women's college — all penalized. Dastin summarized the root cause in a single phrase: "Garbage in, garbage out." Biased data in, biased judgment out.

Amazon scrapped the system in 2017. How many female applicants the AI had filtered out over three years was never disclosed. The numbers were buried.


Amazon's bias at least left traces in the data. A Reuters reporter could follow those traces and publish a story. But there are cases where no trace remains at all.

And if Amazon's bias hid inside data, the problem with South Korea's AI interviews was more fundamental: the criteria for judgment were absent entirely.

As of 2021, numerous South Korean public institutions had adopted AI-powered interviews. Among them were Incheon International Airport Corporation (인천국제공항공사), Korea Airports Corporation (한국공항공사), KOICA (Korea International Cooperation Agency), and KEPCO KDN. AI analyzed applicants' facial expressions, vocal patterns, and word choices to assign scores.

At Incheon International Airport Corporation, the results were bizarre. Not a single final hire came from the top 10 percent of AI interview scores. From the bottom 10 percent, 35 percent were ultimately hired. A systematic disconnect existed between the AI's judgment and actual outcomes.

A civil society representative pointed out that public institutions had paid no attention to what questions were asked or how the system evaluated people, a fact revealed when information-disclosure requests came back marked "information does not exist." The verification methods were even more absurd: at one institution, the entire pre-deployment validation consisted of "five employees testing it."

A system that would determine the futures of thousands was deployed on the basis of a five-person test.

As noted earlier, AI was introduced against a backdrop of real human-evaluation problems. But the question remained: not "Is it better than humans?" but "Has it been sufficiently verified?" An algorithm deployed without verification risked not reducing human bias but propagating new biases at scale.

The common thread across all three cases is this: AI judgment takes milliseconds to seconds. When that judgment is wrong, the time it takes for victims to challenge it and for institutions to respond is measured in months to years.

The asymmetry of speed is the asymmetry of power.


3. The Timeline of Regulation — NTSB 16 Months, CFPB 5 Years

Technology judges in milliseconds. Institutions respond in years.

On March 18, 2018, Elaine Herzberg died. The NTSB published its final investigation report in November 2019, roughly twenty months later. Uber's autonomous vehicle tests, suspended after the crash, had already resumed before the report appeared. As of 2026, no federal autonomous vehicle safety legislation has passed Congress.

The Tesla Autopilot crash followed the same trajectory. The accident occurred in May 2016; the NTSB report landed in September 2017. Sixteen months. It criticized Tesla's Autopilot design and the absence of driver monitoring, but it produced no binding regulatory change. The criticism became a record. The record went into a drawer.

The timeline for AI credit discrimination stretched even longer. Research documenting how AI-based lending produces discriminatory outcomes by race and gender began accumulating around 2017. A UC Berkeley research team published an analysis showing that AI lending algorithms were imposing approximately $450 million per year in excess interest charges on Black and Hispanic borrowers.10

It was not until September 2023 that the CFPB issued official guidance stating that "black-box AI models cannot be used as a means to evade credit-denial explanation requirements." From problem recognition to regulatory guidance — more than five years.

Then, in May 2025, a new milestone was set. Mobley v. Workday, Inc. — the first collective legal proceeding addressing discrimination by an AI hiring system — moved forward in the U.S. District Court for the Northern District of California. Preliminary collective certification was granted for Age Discrimination in Employment Act (ADEA) claims.11 The allegation was that Workday's AI hiring-screening system systematically excluded applicants over forty, Black applicants, and applicants with disabilities. The court held that Workday's algorithm constituted "a unified policy applicable to the entire class." More significant was the court's recognition of Workday not as a mere software vendor but as "an active participant in the hiring process." The company that built the AI bore responsibility for the AI's outcomes — a fundamental shift in the theory of AI vendor liability.

In March of the same year, the ACLU filed complaints of discrimination with the Colorado Civil Rights Division and the EEOC against HireVue and Intuit.12 A Native American applicant who was also deaf alleged that HireVue's video interview platform, through its automatic speech recognition system, failed to provide a fair evaluation. Significantly lower scores for non-white applicants were also cited. The year 2025 marked the beginning of concrete legal accountability for AI vendors. Yet COMPAS had first been deployed in the early 2000s. It had taken roughly twenty years for legal responses to begin.

Lay out the timeline:

Time for AI to render a judgment: 0.001 to 3 seconds.
Time for harm to result from that judgment: immediate.
Time to recognize the harm: months to years.
Time for institutions to respond: 5 to 10 years.

As long as a temporal gap exists between technology and institutions, people fall through it. Elaine Herzberg was pushed out in 5.6 seconds. The female applicants filtered by Amazon's AI were pushed out quietly over three years. The three seconds it took to deny Mr. Park's loan and the years it will take before a system for challenging that denial is established — that gap is the subject of this book.


4. What Three Years of Regulation Means

Two of the five structural patterns introduced in Chapter 1 emerge sharply in this chapter.

First, information asymmetry.

Elaine Herzberg could not know that the approaching vehicle's AI was classifying her as an "unknown object." The women who submitted resumes to Amazon could not know that the phrase "women's chess club" was a penalty factor. Korean AI interview candidates could not know what criteria they were being evaluated on.

The inner workings of algorithms are a black box. Victims often cannot even recognize they have been harmed.

Joy Buolamwini pinpointed this structure. Algorithmic bias, like human bias, produces unfairness — but algorithms propagate bias at massive scale and speed, like a virus. Human bias stays with one person's judgment. Algorithmic bias, the moment the code is deployed, reaches millions simultaneously.

The numbers from Buolamwini's Gender Shades study proved the point. IBM's facial recognition system misidentified darker-skinned females at a rate of 34.7 percent, while the maximum error rate for lighter-skinned males was just 0.8 percent. Bias amplified to technological scale.

"The people who own the code deploy it on other people and there is no accountability."13

That is Cathy O'Neil's observation. The owners of the code deploy it on others, and there is no accountability. Just as Roman senators in Chapter 1 wrote laws and never enforced them, the designers of algorithms deploy code and bear no responsibility for the outcomes.

The structure is the same. Information asymmetry between decision-makers and those affected leads to an asymmetry of accountability.

Second, the gradualism of crisis.

Elaine Herzberg's death made the news because it was dramatic. But most AI harms are not dramatic. One denied loan. One rejected resume. One lowered rating. Each is trivial in isolation.

Millions of such incidents must accumulate before they become statistics. Statistics must be reported before they become awareness. Awareness must translate into political will before it becomes regulation.

Before the Uber crash, near-misses involving autonomous vehicles had been reported countless times. Tesla Autopilot-related incidents predated Joshua Brown's death. AI hiring discrimination was already known inside Amazon. Yet at each point, the accumulated warnings were "still tolerable."

Just as the dispossession of Roman smallholders in Chapter 1 unfolded over 130 years, AI harm accumulates gradually. Just as no one acted when Pliny the Elder wrote that "the latifundia have ruined Italy," institutions move slowly — too slowly — even when studies on AI bias are published.

Regulation comes only after harm has piled up. But while it piles up, the harm spreads at the speed of algorithms.


5. The Mirror — Three Hundred Deaths and One Pedestrian

Return to Chapter 1.

In 133 BC, on the Capitoline Hill, more than three hundred people died. Not one was killed by a blade. All were beaten to death — with clubs fashioned from the broken legs of Senate chamber chairs.

On March 18, 2018, on Mill Avenue in Tempe, one person died. An autonomous driving algorithm had failed to classify her for 5.6 seconds while barreling toward her at sixty-three kilometers per hour.

The scale is different. The structure is the same.

In Chapter 1, law existed. The Lex Licinia-Sextia capped public land holdings at 500 iugera. For 234 years, it was not enforced. In Chapter 7, technology existed. AEB — automatic emergency braking — a system designed to detect collisions and stop the vehicle on its own. Uber deliberately disabled it.

Law existed but was not enforced. Technology existed but its safety mechanisms had been switched off.

In Rome, the law was not enforced because enforcement ran counter to the interests of the powerful. The senators were the owners of the latifundia. At Uber, the safety system was disabled because safety ran counter to the interests of speed. If the AEB triggered unnecessary hard braking, the testing schedule would be delayed.

Speed took priority over safety. That was precisely the NTSB's diagnosis — "an organization that did not make safety the top priority."

Just as the Senate did not regulate itself, technology companies did not regulate themselves. The Senate's violence was improvisational: the senators did not bring weapons but broke apart chairs. Uber's disabling of the safety system was, if anything, more deliberate. The decision to switch off the AEB was written into the code, and it was maintained without review.

Between the death of three hundred and the death of one, 2,150 years elapsed. In that time, civilization built roads, wrote legal codes, established legislatures, and developed artificial intelligence. But the structure in which institutions fail to keep pace with danger, in which those harmed stand outside the circle of decision-making, in which the powerful deprioritize safety — that structure has not changed.

Only the tools have changed.


In this chapter, we have seen the gap between the speed at which AI judges and the speed at which institutions respond. 5.6 seconds and 16 months. 3 seconds and 5 years. 0.3 seconds and infinity — when no channel for appeal exists, the wait is forever.

But in the next chapter, we will see a more covert way that algorithms cause harm: not by killing, but by classifying, ranking, and discriminating. Elaine Herzberg's death made the news. But it took years before the news caught up with COMPAS classifying Brisha Borden, an eighteen-year-old Black teenager, as "high risk."

Invisible judgments, invisible discrimination, invisible exclusion — spreading at the speed of algorithms.


Notes