The Impact of Algorithmic Bias in Criminal Risk Assessment on Sentencing Outcomes
1. INTRODUCTION AND CONCEPTUAL BACKGROUND
Algorithmic risk assessment tools are increasingly used in criminal justice systems worldwide to
guide sentencing, parole, and probation decisions. These tools are designed to predict the likelihood
of recidivism, aiming to make judicial processes more efficient and objective. However, growing
evidence suggests that these algorithms may reinforce racial and socioeconomic biases, leading to
unfair sentencing outcomes. While proponents argue that risk assessment tools reduce human bias,
critics contend that they may instead perpetuate systemic discrimination, particularly against
marginalized communities. Understanding how algorithmic bias operates in criminal risk assessment
is crucial for ensuring fairness in judicial decision-making.
Research shows that risk assessment algorithms are not neutral. Instead, they often reflect and
amplify existing societal inequalities (Angwin et al., 2016; Eubanks, 2018). These tools rely on
historical crime data, which may be skewed by over-policing in Black, Latino, and low-income
neighborhoods. As a result, individuals from these communities are more likely to be flagged as
"high-risk," regardless of their actual likelihood of reoffending (Larson et al., 2016). For example, if
an algorithm uses arrest records as a proxy for criminal behavior, it may unfairly penalize people
from over-policed areas, where arrests do not necessarily indicate higher criminality but rather
biased law enforcement practices.
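To make the proxy problem concrete, the following minimal sketch uses synthetic data and the scikit-learn library (both illustrative assumptions, not material from the studies cited above). Two groups offend at exactly the same rate, but one is policed more heavily; a model trained to predict arrest therefore scores the heavily policed group as roughly three times riskier.

# Illustrative sketch only: synthetic data, not any deployed tool's actual model.
# It shows how using arrests as a proxy label transfers policing intensity into risk scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with the SAME underlying offending rate.
group = rng.integers(0, 2, n)          # 1 = heavily policed area, 0 = lightly policed area
offends = rng.random(n) < 0.20         # identical 20% true offending rate in both groups

# Arrests depend on offending AND on policing intensity (the biased proxy).
detection_rate = np.where(group == 1, 0.60, 0.20)
arrested = offends & (rng.random(n) < detection_rate)

# A "risk tool" trained to predict arrest from group membership (or any correlate of it,
# such as neighborhood) learns the policing disparity, not the offending rate.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
scores = model.predict_proba(np.array([[0], [1]]))[:, 1]
print(f"predicted risk | lightly policed: {scores[0]:.2f}, heavily policed: {scores[1]:.2f}")
# Both groups offend at 20%, yet the heavily policed group's score is roughly three times higher.

The disparity in this sketch is produced entirely by the choice of label: nothing about the groups' behavior differs, only how much of that behavior is observed and recorded.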
A useful framework for understanding algorithmic bias is the concept of structural inequality in data
science. O’Neil (2016) explains that algorithms trained on biased data reproduce and even
exacerbate existing disparities. This means that risk assessment tools do not simply predict
crime—they encode historical prejudices into their scoring mechanisms. In the U.S., for instance,
Black defendants are often assigned higher risk scores than white defendants with similar criminal
histories (ProPublica, 2016). Similar patterns have been observed in other countries where predictive
policing tools are used, raising concerns about their global impact on sentencing fairness.
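One simple way such a disparity can be surfaced, sketched below on made-up records rather than the actual COMPAS data, is to compare average scores among defendants with the same number of prior offenses, broken out by group.

# Made-up records for illustration; the real analysis used thousands of COMPAS cases.
import pandas as pd

records = pd.DataFrame({
    "group":      ["Black", "white", "Black", "white", "Black", "white"],
    "priors":     [0,       0,       2,       2,       2,       2],
    "risk_score": [6,       3,       8,       5,       7,       4],
})

# Average score within each prior-offense count, split by group.
by_history = records.groupby(["priors", "group"])["risk_score"].mean().unstack()
print(by_history)
# With identical prior counts, the synthetic Black defendants still average higher scores;
# this is the shape of the pattern reported on real data (ProPublica, 2016).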
Recent studies highlight how algorithmic bias affects judicial discretion. Judges, often facing
heavy caseloads, may rely on risk scores as an objective measure even when those scores are
flawed (Starr, 2014). Research by Stevenson (2018) found that when risk assessment tools label a
defendant as "high-risk," judges are more likely to impose longer sentences or deny parole,
regardless of mitigating circumstances. This creates a feedback loop: biased predictions lead to
harsher sentences, which then feed back into the system as "evidence" supporting the algorithm’s
accuracy.
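The feedback loop can be sketched with a toy simulation; the parameters below are illustrative assumptions, not estimates from the cited studies. A "high-risk" label brings closer supervision, closer supervision records more of whatever reoffending occurs, and retraining on those records reads the inflated numbers back in as confirmation of the original label.

# Hedged simulation sketch with made-up parameters (not an empirical model from the cited work).
true_reoffense_rate = 0.30            # identical for both groups
detection_if_flagged = 0.90           # closely supervised: most reoffenses are recorded
detection_if_unflagged = 0.50         # lightly supervised: many reoffenses go unrecorded
threshold = 0.20                      # score above this => labelled "high-risk"

scores = {"group_A": 0.25, "group_B": 0.15}   # group A starts above the line (historical bias)

for step in range(4):
    recorded = {}
    for g, s in scores.items():
        flagged = s >= threshold
        detection = detection_if_flagged if flagged else detection_if_unflagged
        # The recorded rate mixes true behavior with how closely the group is watched.
        recorded[g] = true_reoffense_rate * detection
    scores = recorded                 # "retraining" on recorded outcomes
    print(step, {g: round(s, 2) for g, s in scores.items()})

# Both groups truly reoffend at 0.30, but the loop settles with group_A at 0.27 and
# group_B at 0.15: the recorded gap is created by supervision intensity alone, and each
# retraining pass treats it as evidence that the original label was correct.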
The intersection of race, class, and algorithmic bias is particularly concerning. Eubanks (2018) found
that low-income individuals are disproportionately affected because risk assessments often include
socioeconomic factors such as employment history, education level, and family background. Since
poverty and race are closely intertwined in many societies, these tools effectively penalize
people for being poor. Similarly, Richardson et al. (2019) argue that algorithms may misinterpret
cultural differences as risk factors—for example, associating certain neighborhoods or family
structures with criminality, even when no direct causation exists.
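A final sketch, again on synthetic data with assumed feature names, illustrates why simply excluding race from a model does not remove this problem: a correlated socioeconomic feature such as employment status can reintroduce group membership, so scores still differ by group even though the model never sees race.

# Hedged sketch with synthetic data and assumed feature names, not drawn from any real tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, n)                                    # 1 = marginalized group; never shown to the model
unemployed = rng.random(n) < np.where(group == 1, 0.50, 0.10)    # socioeconomic feature correlated with group
offends = rng.random(n) < 0.20                                   # identical true offending rate in both groups
arrested = offends & (rng.random(n) < np.where(group == 1, 0.60, 0.20))  # over-policing, as in the earlier sketch

X = unemployed.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, arrested)
risk = model.predict_proba(X)[:, 1]
print(f"mean score | marginalized group: {risk[group == 1].mean():.2f}, "
      f"other group: {risk[group == 0].mean():.2f}")
# The model is trained only on employment status, yet the marginalized group still receives
# higher average scores, because unemployment stands in for both group membership and
# policing intensity.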