1. An Eighty-Dollar Bicycle and Eighty-Six Dollars in Tools
Spring 2014, Broward County, Florida. An afternoon of short palm-tree shadows on the sidewalk.
Eighteen-year-old Brisha Borden was hurrying to pick up her god-sister from school. On the street she found an unlocked blue Huffy children's bicycle and a silver Razor scooter. She and a friend picked them up and tried to ride them, then realized they were too small. A woman followed them. "Those belong to my kid." Borden and her friend set the items down and walked away.
Value of the property: eighty dollars. Borden was arrested and charged with burglary and petty theft. She was handcuffed. She had never been arrested before.
The same year, the same county. Vernon Prater, forty-one, was caught stealing tools worth $86.35 from a Home Depot. Prater already had a prior conviction for armed robbery. He had previously served time for theft and breaking and entering.
The Broward County court assigned both of them a COMPAS score. COMPAS — Correctional Offender Management Profiling for Alternative Sanctions. The algorithm, developed by a company called Northpointe, analyzed responses to 137 survey items and produced a recidivism risk score on a scale of one to ten.
Who goes home and who stays in jail awaiting trial. The score informed bail decisions, sentencing, and parole hearings. An algorithm was shaping those determinations.
The results.
Borden: 8. High Risk. Prater: 3. Low Risk.
Borden was Black. She had no prior record. Prater was white. He had one.
Two years later, ProPublica reporters checked the actual outcomes. Borden had not been arrested for any crime since. Prater had broken into a warehouse in 2015, stolen $7,700 worth of electronics and appliances, been charged with thirty felony counts, and was serving an eight-year sentence.
The algorithm was wrong. And the direction of its error was systematic.
In May 2016, ProPublica published an investigative report titled "Machine Bias." The team was led by Julia Angwin and Jeff Larson. They obtained COMPAS scores for 7,214 defendants in Broward County — including 3,175 Black defendants and 2,103 white defendants — and compared them against actual recidivism records two years later.2
"Our analysis showed that black defendants were still 45 percent more likely to get higher scores than white defendants."
How many people were there like Brisha Borden? When Angwin's team tracked the 7,214 defendants, they found that nearly half of the Black defendants who did not go on to reoffend had been wrongly branded "dangerous." The false positive rate: 44.9 percent. For white defendants: 23.5 percent. Nearly double.
Conversely, among white defendants who actually reoffended, 47.7 percent had been labeled "low risk." For Black defendants: 28.0 percent. People like Prater had received the algorithm's lenient judgment and walked back out onto the street.
COMPAS's overall accuracy was 65.4 percent. In a 2018 study published in Science Advances, Dressel and Farid of Dartmouth College gave untrained members of the public only brief descriptions of defendants and asked them to predict recidivism. The laypeople's accuracy: 62 percent on average. A difference of 3.4 percentage points: a multimillion-dollar algorithm was statistically indistinguishable from amateur prediction.4
In 2024, Williams College researchers Utsav Bahl and Chad M. Topaz published a paper in the UCLA Law Review that revealed another dimension. Analyzing 10,000 court records from Broward County, they found that overall incarceration rates had decreased after COMPAS was introduced. The algorithm had shrunk the overall pie. But the sentencing gap between races had actually worsened. The algorithm had made the system more lenient overall while simultaneously making it more unequal.5
Skepticism about COMPAS's usefulness spread. On February 7, 2020, the Pretrial Justice Institute reversed its position, declaring that such risk assessment tools "cannot be a part of a fair criminal justice system."6 And yet similar risk assessment algorithms continued to expand into ever more jurisdictions.
2. Mathematical Impossibility — The Fairness Dilemma
Northpointe (now Equivant), the company that developed COMPAS, pushed back against ProPublica's analysis. In an official statement, the company wrote:
"Northpointe does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model."3
The company's logic was technically sophisticated. "Our model is fair by the standard of calibration parity." A Black defendant with a score of 7 and a white defendant with a score of 7 had the same actual probability of reoffending. This was true. The same score meant the same probability.
But what ProPublica had identified was a different kind of fairness: error rate parity. The question was not whether the same score meant the same probability, but whether the rate of misclassification differed by race. By this standard, COMPAS failed. Among defendants who posed no actual risk, Black defendants were branded "dangerous" at nearly twice the rate of white defendants.
In 2017, Alexandra Chouldechova of Carnegie Mellon University proved it mathematically. When two groups have different base rates, it is mathematically impossible to achieve both predictive accuracy (calibration parity) and balanced misclassification (error rate parity) simultaneously.7 Around the same time, Kleinberg, Mullainathan, and Raghavan at Cornell and Harvard independently proved a similar impossibility — a result that came to be known as the "impossibility theorem" of algorithmic fairness.8
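The core of the result can be written as a single identity linking a group's base rate to a classifier's error rates. The sketch below is a minimal illustration with made-up numbers, not an analysis of the COMPAS data; it uses that identity to show why holding calibration and the false negative rate equal across two groups with different base rates forces their false positive rates apart.

```python
# A minimal sketch of the identity behind the impossibility result.
# For any binary classifier and any group with base rate p:
#     FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
# where PPV is the positive predictive value (calibration) and FNR the false
# negative rate. If two groups share the same PPV and the same FNR but have
# different base rates, their false positive rates cannot be equal.
# All numbers below are illustrative, not drawn from the Broward County data.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False positive rate forced by a given base rate, PPV, and FNR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Two hypothetical groups, scored with the same calibration (PPV = 0.60)
# and the same false negative rate (FNR = 0.35):
for group, p in [("group A", 0.50), ("group B", 0.35)]:
    print(group, round(implied_fpr(p, ppv=0.60, fnr=0.35), 3))
# group A 0.433
# group B 0.233
# The group with the higher base rate necessarily ends up with the higher
# false positive rate.
```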
As long as Black arrest rates in the United States remain higher than white arrest rates — even if those arrest rates are themselves the product of over-policing and structural inequality — no algorithm can satisfy both definitions of fairness at once. The very definition of fairness is a political choice. This was the most uncomfortable truth the COMPAS debate left behind.
Northpointe called the racial disparity "a natural consequence of applying an unbiased scoring rule to groups with different score distributions."
"A natural consequence." A Black girl picks up an eighty-dollar bicycle, puts it down, and receives a "High Risk 8" — a natural consequence.
The issue went to court.
In 2013, in La Crosse, Wisconsin, Eric Loomis was charged in connection with a drive-by shooting. He pleaded guilty to the lesser charges of attempting to flee a traffic officer and operating a vehicle without the owner's consent. The pre-sentence investigation report (PSI) submitted before sentencing included a COMPAS score.
The defendant saw his score for the first time in the sentencing report. Defense counsel objected. But there was no way to mount a substantive challenge: the algorithm was a trade secret. "We don't know where this number came from."
The judge imposed a six-year sentence, stating that "the COMPAS score was one of several factors considered."
Loomis appealed. His argument rested on two grounds. First, because the COMPAS algorithm was a trade secret, the accuracy of the score could not be verified — a violation of due process under the Fourteenth Amendment. Second, COMPAS used gender as a variable — an unconstitutional consideration of sex in sentencing.
The Wisconsin Supreme Court ruled in July 2016: "Conditionally constitutional." COMPAS scores must not be used as the sole basis for determining whether to incarcerate or how severe a sentence to impose. When COMPAS is used in sentencing, the pre-sentence investigation report must include a five-part warning about the algorithm's limitations. One of those warnings read: "The proprietary nature of COMPAS prevents disclosure of how risk scores are determined."9 The court acknowledged the algorithm's opacity in its own words, and then permitted its continued use. It could be used as a reference, the court said.
The U.S. Supreme Court denied certiorari in 2017. The constitutional status of algorithmic sentencing remained unresolved. The door closed.
The legal proceedings in the Loomis case took four years — from the 2013 indictment to the 2017 denial of certiorari by the U.S. Supreme Court. During those four years, COMPAS continued to be used in hundreds of counties across the United States. While the courts deliberated its constitutionality, the algorithm was influencing the bail and sentencing of thousands of people every day.
The gap between the speed of justice and the speed of technology was laid bare once more.
Of the 137 survey items, none directly asks about race. But consider: "Do any of your friends have criminal records?" "Did your parents have substance abuse problems?" "How old were you when you were first arrested?" In the context of America's structural inequalities, these questions learn race without ever asking about it.
Living in an over-policed neighborhood means more friends with arrest records. Families shaped by generations of mass incarceration are more likely to report substance abuse. Even the question "Do you sometimes feel discouraged?" becomes a proxy for structural inequality — it is hard not to feel discouraged when you have lived through poverty and discrimination.
The algorithm's questions were designed to be race-neutral, but the world in which the answers were formed was not. Proxy variables — factors that do not ask about race directly but correlate with it — became stand-ins for race itself.
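A toy simulation makes the proxy mechanism concrete. The sketch below uses entirely synthetic numbers (it is not based on COMPAS or any real dataset): the scoring rule never sees the protected attribute, only a proxy feature whose distribution differs between groups because of the structural conditions described above, and the resulting scores still diverge by group.

```python
# A synthetic sketch of proxy-variable discrimination; all numbers are
# illustrative. The protected attribute "group" is never shown to the score.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)  # protected attribute, hidden from the score

# Proxy feature, e.g. "number of friends with arrest records": its average
# differs by group because of over-policing, not individual behavior.
proxy = rng.poisson(lam=np.where(group == 1, 3.0, 1.0))

# A "race-neutral" risk score that looks only at the proxy.
score = proxy / proxy.max()

print("mean score, group 0:", round(score[group == 0].mean(), 3))
print("mean score, group 1:", round(score[group == 1].mean(), 3))
# The two groups receive sharply different scores even though group
# membership was never an input to the scoring rule.
```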
3. Mr. Park — When AI Scores Your Credit
Mr. Park, whom we first met in Chapter 1. A forty-eight-year-old restaurant owner who went to the bank seeking thirty million won (roughly $22,000) in working capital. An AI credit scoring system denied his loan. The bank employee did not know why. The system did not provide a reason.
Mr. Park asked the AI company, the financial regulator (the Financial Services Commission, 금융위원회), and the Anti-Corruption and Civil Rights Commission (국민권익위원회). Trade secret, regulations under revision, not our jurisdiction: none of the three gave him an answer. His financial life had been fully digitized through MyData, South Korea's open-banking data-sharing framework, but the algorithm that interpreted his data was hermetically sealed. He ended up securing working capital through a savings bank loan at 19.9 percent annual interest, just below the legal maximum rate of 20 percent.
This structure was no different from the Broward County courtroom. The same logic operates between COMPAS assigning Brisha Borden a score of 8 and an AI credit system denying Mr. Park's loan. The algorithm judges; the basis for its judgment is undisclosed; there is no mechanism to challenge it.
In the United States, at least, a legal response had begun. In September 2023, the Consumer Financial Protection Bureau (CFPB) issued guidance: "Black-box AI models cannot be used as a means of evading credit denial explanations." The response Mr. Park received ("the system produced this result, so we don't know the reason") would be unlawful in the United States under this framework.
South Korea's AI Basic Act, however, was structured differently. Article 34 imposes obligations on "high-impact AI operators": risk management, explainability, and user protection. But the duty to explain falls on the AI developer, not on the institutions that deploy AI, such as banks, insurers, and hiring firms. No binding provision currently governs those deploying institutions. The company that built the algorithm bears responsibility, but the company that uses it does not. This was not a mere oversight; it was a gap built into the structure of the regulatory architecture. The Financial Services Commission (금융위원회) had issued seven guiding principles for AI in finance (governance, legality, supplementarity, reliability, financial stability, good faith, and security), but these were guidelines, not law. The cumulative number of financial regulatory sandbox designations had reached 500 as of December 2024, with a record 436 new applications filed in 2024 alone: evidence enough that companies were uncertain whether their own businesses were legal.10
A UC Berkeley study (Bartlett, Morse, Stanton, & Wallace, 2022) found that in the U.S. mortgage market, Black and Latino applicants with identical financial profiles were charged an average of 7.9 basis points more in interest on purchase mortgages. 0.079 percentage points. It looks like a small difference. But aggregated across millions of loans, it amounts to roughly $765 million in additional annual interest borne by borrowers of color.
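To give that figure a concrete scale, the back-of-the-envelope sketch below converts the surcharge into dollars on a single loan. The loan balance is a hypothetical figure chosen for illustration and does not come from the study.

```python
# What 7.9 basis points means on one loan. A basis point is 0.01 percentage
# points, so 7.9 bps adds 0.079 percent of the balance to the annual interest.
# The balance below is hypothetical, not a figure from the Berkeley study.
loan_balance = 300_000                       # hypothetical mortgage, in dollars
surcharge_bps = 7.9
extra_annual_interest = loan_balance * surcharge_bps / 10_000
print(extra_annual_interest)                 # 237.0 dollars per year, per loan
```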
Algorithmic discrimination was not limited to denying loans. Approving loans at higher rates was also discrimination. No monthly interest statement contains a line item for "racial surcharge."
Mr. Park has never heard of Wells Fargo, of the CFPB's guidance, or of the UC Berkeley study. What he knows is this: an AI decided he was a "no," and no one could tell him why.
4. South Korea's Invisible Judges
In December 2020, the startup Scatter Lab (스캐터랩) launched the chatbot Iruda 1.0 (이루다).
The AI had been trained on ten billion KakaoTalk messages — conversations from South Korea's dominant messaging platform. It was designed as a chatbot with the persona of a woman in her twenties. Within two weeks, 400,000 users had engaged with it. At first, Iruda was popular as a friendly conversational partner. Then it began saying things like "gross" and "hate it" about sexual minorities. It generated hate speech about Black people. It denigrated people with disabilities.
A personal data breach compounded the crisis — the training data contained real users' names and addresses.
Criminal psychologist Park Ji-sun (박지선) of Sookmyung Women's University analyzed Iruda's hate speech on a broadcast program and identified discriminatory language against minorities as the core issue, citing expressions like "gross" and "hate it" directed at sexual minorities as representative examples. The service was shut down three weeks after launch. On April 28, 2021, South Korea's Personal Information Protection Commission (개인정보보호위원회) imposed on Scatter Lab a penalty surcharge of 55.5 million won and an administrative fine of 47.8 million won, a total sanction of 103.3 million won (approximately $88,000).11 The training data contained hate, and the algorithm learned hate as a "natural" conversational pattern. Society's prejudices were embedded in data; data was embedded in the algorithm; and the algorithm returned those prejudices to 400,000 people.
A machine that perpetuates the biases of the past. Physicist Kim Sang-wook (김상욱) warned: "An AI judge would learn from our case law of the past several decades, and if that case law lacks awareness of hate speech or gender equality, there is a real probability it will repeat the prejudices of the past." It was the same structure as COMPAS reproducing the racial inequalities of American criminal justice. When training data contains a history of inequality, the algorithm learns that history as an "objective pattern" and projects it into the future.
The past sits in judgment of the future.
AI-powered job interviews extended this problem into the hiring market.
As of 2021, a number of public institutions in South Korea had adopted AI interview systems; civic groups filed information disclosure requests with thirteen of them, including Incheon International Airport Corporation, KOICA (Korea International Cooperation Agency), and KEPCO KDN.12 Adoption was spreading rapidly in the private sector as well. Cameras analyzed applicants' facial expressions, vocal tone, vocabulary, and eye movements, then assigned a score. What expression earned how many points, what vocabulary triggered deductions: applicants could not know.
The responses to those disclosure requests revealed that the public institutions "had paid absolutely no attention" to what questions the AI asked or what criteria it used to evaluate people. Information nonexistent. The institutions that had adopted the system did not themselves know what the algorithm was looking at. At some agencies, the extent of the validation process was "five employees tried it out."
A system affecting the lives of hundreds or thousands of people, adopted because five people had tested it.
At some public institutions, even more peculiar results emerged. According to reports related to the National Assembly audit, the correlation between AI interview scores and final hiring outcomes was remarkably low — very few applicants the AI rated as excellent were ultimately hired, while a significant number of those who received low scores were. The algorithm deemed one group outstanding and the other unsuitable; reality reversed the verdict. No one could explain why.
AI interviews were eventually discontinued at some institutions, but no remedial measures were offered to the applicants who had been rejected in the meantime. They never learned how the AI had evaluated them.
In the interim, a new industry was born: "AI interview coaching." A form of private tutoring that trained applicants to produce the facial expressions, vocal tones, and vocabulary the AI preferred — the bodily deformation of the Industrial Revolution, replayed in digital form.
Two patterns run through every case examined here.
First, information asymmetry — the black box. How COMPAS's 137 items are weighted and converted into a score is a trade secret. Why the AI credit system denied Mr. Park is a trade secret. Why the AI interview rejected an applicant is "information nonexistent." Just as Rome's small farmers in Chapter 1 could not make their voices heard in the Senate, the people classified by algorithms cannot access the criteria that classified them.
A structure in which the reality of harm never reaches the institutions.
Second, ideological barriers — the myth of technological neutrality. "Algorithms are objective." "Data doesn't lie." "AI is less biased than a human judge." These beliefs become shields that delay regulation. Just as ancestral custom (mos maiorum) served as the ideological barrier that blocked reform in Chapter 1, the myth of "technological neutrality" blocks the regulation of algorithms.
In Chapter 6, we saw the U.S. Congress fail to pass AI regulation under the banner of "don't stifle innovation" — the same structure. Northpointe calling the racial disparity a "natural consequence" is the apex of this ideology. The moment discrimination becomes "natural," it becomes invisible.
Research published in 2025 revealed even more troubling patterns. A study by An, Huang, Lin, and Tai, published in PNAS Nexus and featured on VoxDev (May 2025), found that leading LLM-based AI hiring tools, after evaluating approximately 360,000 resumes, systematically favored female applicants over equally qualified Black male applicants.13 A study by Guilbeault, Delecourt, and Desikan, published in Nature (October 2025), found that AI resume screening assigned higher ratings to older men and lower ratings to older women — amplifying existing patterns of age and gender discrimination.14 Algorithmic bias was not merely reproducing past discrimination. It was transforming and amplifying discrimination into new forms.
COMPAS did not ask about race. Iruda did not intend to express hate. AI interviews were not designed to discriminate. AI credit scoring did not target Mr. Park. But the outcome was discrimination.
Discrimination without intent. Discrimination that neither designer nor user was aware of. This is the most dangerous characteristic of algorithmic discrimination — because no one set out to discriminate, no one is held accountable.
5. Who Writes the Code
After the Gender Shades study, Buolamwini launched the "Incoding" movement.
"Who codes matters, how we code matters, and why we code matters."15
Who writes the code matters. How the code is written matters. Why it is written matters. With these three statements, she identified the root cause of algorithmic bias. The underrepresentation of women of color in training data is not merely a data problem. It is that the people collecting the data never saw the need to collect the faces of women of color.
Bias does not reside in the code. It resides in the world of the people who write it.
According to a Georgetown Law report, 117 million American adults are enrolled in law enforcement facial recognition networks. One in two. Driver's license photos, passport photos, arrest record photos — all entered into databases. Without the subjects' consent.
In this system, women of color are processed with a 34.7 percent misidentification rate. Enrollment is equal, but accuracy is not. Surveillance is equal; error is not.
Amazon confirmed this lesson in the hiring context. An AI recruiting tool trained on ten years of resume data systematically penalized resumes containing the word "women's." As reported by Reuters journalist Jeffrey Dastin, the algorithm treated "women's chess club captain," "women's soccer team captain," and "women's college graduate" as negative signals. Having learned from a decade of data in which men were predominantly hired, the system equated "not male" with "unqualified." Amazon scrapped the tool. But tools operating on the same principle remain in use worldwide.
Buolamwini drew an essential distinction between algorithmic bias and human bias. Human bias stays within the sphere of influence of one person: one judge, one interviewer. Algorithmic bias spreads like a virus. A single line of code shapes the loans, the hiring decisions, the bail, and the sentences of millions.
The algorithms of Chapter 7 were wrong quickly. The algorithms of Chapter 8 are wrong systematically. Errors of speed cause accidents; errors of structure cause discrimination. Accidents are visible. Discrimination is not.
What is invisible is more dangerous.
If Chapter 2's 682 pages were evidence of bent knees, where is the evidence of AI's harms? Denied loans, rejected resumes, miscalculated recidivism scores, unrecognized faces — none of these are visible. It takes an investigative reporting team months of analysis to bring even one case to light. The 682 pages of the AI era have not yet been bound into a single volume. The harm is accumulating, but the institutions that could turn that accumulation into a public record have not kept pace.
And this question remains.
In a world where algorithms classify and discriminate, what safety net exists for those who are pushed aside?