Algorithms in Judges’ Hands

Chad M. Topaz
Aug 17, 2023


Incarceration and Inequity in Broward County, Florida

Note: This post is based on a collaboration led by the incomparable Utsav Bahl.

Ah, Florida! Famous for its alligators, its theme parks, and for being an endless treasure trove of bizarre “Florida Man” news headlines. But beyond its quirks and sunny beaches, Florida is a key player in a less-publicized debate that has profound implications for all of us. Today, we’ll dive into the intersection of technology and criminal justice in the Sunshine State. What if I told you that in Florida, algorithms could play a pivotal role in determining a person’s future within the criminal justice system?

The Rise of Algorithms

Every day, individuals across the criminal justice system, from bail magistrates to judges and parole boards, make crucial decisions impacting personal freedoms. Many of these decisions are based on assessments of criminal risk, such as the likelihood of a defendant appearing for their court date or the likelihood of a sentenced individual committing another crime. Historically, these decisions relied on personal judgment informed by factors like criminal history. In recent decades, however, risk algorithms, which use data to generate criminal risk scores, have gained traction, with at least 46 states using them in pretrial stages. One prominent example is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, which attempts to predict criminal risk from a defendant’s responses to over 100 questions probing family structure, work situation, emotional state, and more. Though many assume these algorithms offer superior accuracy, evidence suggests that COMPAS predictions are about as accurate as laypeople’s assessments.

Unpacking the Debate

The use of such algorithms raises significant concerns about fairness. A 2016 ProPublica analysis of COMPAS usage in Broward County, Florida found that the system was more likely to inaccurately label Black defendants as future criminals than white defendants. While the algorithm’s creator, Northpointe (now Equivant), maintained that the tool was unbiased, debates have arisen about potential sources of unfairness. These include the data driving the algorithms, errors in data entry, and the algorithms’ proprietary nature, which conceals their internal mechanics from the public. Further complicating the issue, different definitions of fairness can lead to contrasting evaluations.

Despite the extensive discussions regarding the fairness of risk algorithms, their concrete influence on the American justice system is largely unexamined. Let’s remedy this! The question we’ll address is: irrespective of the fairness of COMPAS scores, how do judges adjust their sentencing decisions when presented with these scores for defendants?

A Quest for Data

To delve deeper into the issue, we embarked on a quest for more data. The ProPublica study provides the only known public dataset of COMPAS scores, focusing on Broward County, Florida cases from 2013–2014. Each COMPAS score we study has two parts: a score for risk of recidivism (low/medium/high) and a score for risk of violence (also low/medium/high). To gauge COMPAS’s influence, we analyzed data from both before its 2008 introduction (2006–2007) and during ProPublica’s study period (2013–2014). Our goal was to compare case outcomes from these two periods, shedding light on the role of COMPAS scores in decision-making.

Obtaining these court records proved tricky. We made a public records request, but it went unanswered, so we turned to the county court clerk’s website, which allows the public to search court records one at a time by a unique identifier called a docket number. To find all of the cases we needed, we employed data scraping: we designed an automated bot to methodically navigate the website, searching a range of docket numbers to pinpoint the pertinent records.
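In code, the sweep might look like the sketch below. This is purely illustrative: the clerk site’s actual URL, docket-number format, and page parsing are assumptions, and the network request is abstracted behind a `fetch` callable so the overall logic stands on its own.

```python
def docket_numbers(year, start, end):
    """Generate docket identifiers in an assumed 'YY-NNNNNN' format."""
    for n in range(start, end + 1):
        yield f"{year % 100:02d}-{n:06d}"

def sweep(year, start, end, fetch):
    """Try every docket number in the range; fetch(docket) should return
    the parsed record for a hit, or None when no case exists."""
    records = {}
    for docket in docket_numbers(year, start, end):
        page = fetch(docket)
        if page is not None:
            records[docket] = page
    return records
```

In practice, `fetch` would issue an HTTP request to the clerk’s search page and parse the result, with polite rate limiting between requests.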

We also needed information about defendants’ criminal histories, a crucial factor in sentencing decisions. Unfortunately, Broward County court records don’t contain criminal history information. To bridge the gap, we used the Inmate Release Information search from the Florida Department of Corrections. By matching names and birthdates, we identified any previous incarcerations for an individual, noting the number and duration of such episodes. Finally, we removed all identifying information from our dataset to preserve privacy. In the end, we had over 10,000 court records we could analyze.
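The matching step amounts to a simple record linkage on name and date of birth. Here’s a minimal sketch; the field names (`name`, `dob`, `days_served`) are assumptions for illustration, not the actual schema of either data source.

```python
from collections import defaultdict

def link_history(cases, releases):
    """Attach prior-incarceration counts and total days served to each
    court case by exact (name, date-of-birth) match."""
    history = defaultdict(list)
    for rel in releases:
        history[(rel["name"].lower(), rel["dob"])].append(rel["days_served"])
    linked = []
    for case in cases:
        episodes = history.get((case["name"].lower(), case["dob"]), [])
        linked.append({**case,
                       "prior_episodes": len(episodes),
                       "prior_days": sum(episodes)})
    return linked
```

Exact matching like this is conservative: misspelled names or transposed birthdates would cause missed links, which is one reason cleaning the data carefully matters.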

Evaluating Confinement Trends

There are many ways we could choose to assess a judge’s sentencing behavior. We chose to focus on the aspect that could matter most to a defendant: the judge’s decision of whether or not to mandate confinement (incarceration) as part of the sentence. Answering such a question demands a quantitative model. While I’ll sidestep the technical intricacies, you can envision our model as follows.

It takes into account defendant-specific variables like the defendant’s race, gender, age, criminal history, the charges they face, their plea, and their COMPAS score. It also accounts for the type of court in which the case was heard, whether a public defender represented the defendant, and the specific judge presiding over the sentencing.
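One common way to formalize a model like this is a logistic regression on the probability of confinement. The sketch below is a simplified stand-in, not our actual model; the feature encoding and coefficients are assumptions for illustration only.

```python
import math

def confinement_probability(features, coefficients, intercept=0.0):
    """Logistic-model sketch: P(confinement) = sigmoid(intercept + beta . x).
    In a real analysis, 'features' would numerically encode race, gender,
    age, criminal history, charges, plea, COMPAS scores, court type,
    public-defender status, and the presiding judge; 'coefficients' would
    be estimated from the court records."""
    z = intercept + sum(b * x for b, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-z))
```

The estimated coefficient on the COMPAS-score feature is the quantity of interest: it captures how much the score shifts a judge’s confinement decision once the other factors are held fixed.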

Playing the Odds

To understand our results, let’s do a thought experiment. Keep in mind that this thought experiment is merely an analogy to aid understanding and in no way is meant to diminish the profound effects of the criminal justice system on individuals.

Imagine a hypothetical scenario — again, intended only as an illustrative tool — akin to a game show (a dystopian and racially reductive one, as you will see). On this show, there are two groups of contestants: one group with 100 white contestants and another with 100 Black contestants. They’re all trying to win a prize, where, in this thought experiment, the prize is “not being confined.” We’re going to watch two episodes of this show.

In the first episode, the game show judges decide who wins based on information they have about the contestants. Among the 100 white contestants, 39 of them win, and among the Black contestants, 29 of them win.

For the second episode, the game gets a twist. Now, contestants roll dice, but these dice have a quirk: they don’t have numbers. Instead, they show COMPAS scores like low/low, medium/medium, and so on. A low/low roll raises your chances with the judges, while a high/high roll makes it tougher to win. But here’s the catch: the white and Black contestants don’t get the same pair of dice. The dice are weighted differently. In fact, the dice rolled by white contestants turn up low/low 57 out of 100 times, while those rolled by Black contestants do so only 31 out of 100 times. At the other extreme, only 5 white contestants roll the unfavorable high/high score, while 15 Black contestants encounter this outcome. As a result, in this dice-rolling episode, once the judges take dice rolls into account, 55 white contestants win, while 38 Black contestants do.
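To make the weighted dice concrete, here’s a toy sampler. The episode only specifies the low/low and high/high frequencies for each group, so the remaining probability mass is lumped into a single “other” face; everything here is illustrative.

```python
import random

# Face probabilities per group; only low/low and high/high are given in
# the thought experiment, so the rest is lumped into "other".
DICE = {
    "white": {"low/low": 0.57, "other": 0.38, "high/high": 0.05},
    "Black": {"low/low": 0.31, "other": 0.54, "high/high": 0.15},
}

def roll(group, rng=None):
    """Sample one COMPAS-score face for the given group's weighted dice."""
    rng = rng or random.Random()
    faces = list(DICE[group])
    weights = [DICE[group][f] for f in faces]
    return rng.choices(faces, weights=weights, k=1)[0]
```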

Comparing both episodes reveals that each group wins more in the second. However, the win-gap between white and Black contestants increases: it’s 10 in the first episode and 17 in the second. Moving out of our game show analogy, we see that while COMPAS reduces confinement overall, white defendants benefit more than Black defendants.
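The arithmetic behind that comparison is tiny but worth making explicit:

```python
# Wins per 100 contestants in each episode (numbers from the thought experiment).
episode1 = {"white": 39, "Black": 29}   # before the COMPAS dice
episode2 = {"white": 55, "Black": 38}   # with the COMPAS dice

def gap(episode):
    """Racial win-gap: white wins minus Black wins."""
    return episode["white"] - episode["Black"]

# Both groups win more in episode 2, yet the gap grows from 10 to 17.
```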

The Big Picture

Drawing from our research and analyses, here’s the big picture. Criminal risk assessment algorithms, such as COMPAS, can pass on racial disparities from the information on which they are based. While the overall rates of confinement decreased with the implementation of COMPAS, the racial disparities in sentencing outcomes were magnified.

It’s interesting to think about this situation from a judge’s point of view. A typical Broward County judge could rightfully claim that in the COMPAS era, they are mandating less confinement and thus supporting decarceration. Unless shown the aggregate effects of COMPAS, they would be unlikely to know that they are also widening the race gap by passing on upstream racial bias.

The repercussions echo broader societal issues. Just as policies aiming to uplift education, healthcare, or housing in racially marginalized communities can end up benefiting all community members, with more-advantaged groups often gaining the most and racial disparities widening as a result, COMPAS seems to follow a similar pattern. This is a manifestation of white privilege.

One Last Thing

Our investigation underscores a pressing need for accessible and comprehensive data. In today’s world, companies can harness public data, build proprietary algorithms like COMPAS, and influence justice systems, impacting individual freedoms. Yet, barriers to the data persist for those of us who are ordinary citizens merely wanting transparency in policing, prosecution, and sentencing. These barriers are a formidable hurdle in the pursuit of justice.

I encourage you to probe further, ask more questions, and become a part of grassroots movements or advocacy groups championing a just and equitable criminal justice system. Together, we can push for change, aiming to reduce, and ultimately eradicate, racial disparities.

Your neighbor,

Chad
