
LINKS WE LIKE #15

Algorithmic Justice, Risk Assessments, and Fairness

Algorithms are increasingly taking part in human decision-making, promising efficiency and objectivity. In the justice system, risk-assessment and predictive algorithms are being used to shape and support policing, pre-trial decisions, and sentencing. In a world of entrenched inequalities, can artificial intelligence make justice fairer?

A Wisconsin man was recently sentenced to six years in prison, a sentence the Wisconsin Supreme Court upheld, based in part on a report produced by COMPAS, a risk-assessment algorithm sold by Northpointe Inc., which pointed to “a high risk of violence, high risk of recidivism, high pretrial risk.” In a recent interview, Sharad Goel, Assistant Professor of Management Science and Engineering at Stanford University, discussed the benefits and limitations of using algorithms for decision-making in criminal justice systems, arguing that “to gain wider support and adoption, (...) these algorithms need to be developed with more transparency. The leading risk assessment tools are often built under a veil of secrecy, which understandably sows misunderstanding and distrust.”

Laurel Eckhouse, a doctoral candidate in Political Science at UC Berkeley and a researcher with the Prison University Project at San Quentin State Prison and the Human Rights Data Analysis Group, has argued that Big Data may be reinforcing racial bias in the criminal justice system, because “the data [data-driven tools] rely on are collected by a criminal justice system in which race makes a big difference in the probability of arrest — even for people who behave identically. Inputs derived from biased policing will inevitably make Black and Latino defendants look riskier than white defendants to a computer.” As a result, she argues, data-driven decision-making risks exacerbating, rather than reducing, racial bias in criminal justice. Focusing on algorithms for pretrial release decisions, the working paper “Algorithmic decision making and the cost of fairness”, made available in February 2017, analyzes several techniques recently proposed to achieve algorithmic fairness; a toy illustration of the trade-off it studies appears after the links below.

Sent to Prison by a Software Program’s Secret Algorithms
By Adam Liptak. The New York Times. May 1, 2017.

Exploring the use of algorithms in the criminal justice system
Vignesh Ramachandran. Interview with Sharad Goel, Assistant Professor of Management Science and Engineering at Stanford University. Stanford Engineering. May 3, 2017.

Big Data may be reinforcing racial bias in the criminal justice system
Laurel Eckhouse, researcher with the Human Rights Data Analysis Group’s Policing Project and a doctoral candidate in political science at the University of California at Berkeley. The Washington Post. February 10, 2017.

Algorithmic decision making and the cost of fairness
Corbett-Davies et al. Working paper, February 17, 2017, Stanford University.
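
To make the paper’s central tension concrete, here is a minimal sketch in Python on synthetic data. It is not the authors’ code, and the score distributions, thresholds, and target rate are all invented for illustration; it simply shows that enforcing equal detention rates across groups (statistical parity) forces group-specific score thresholds whenever score distributions differ between groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic risk scores for two groups (distributions invented for illustration).
# Group 1's scores skew higher *in the data*, as they would under biased inputs.
n = 10_000
group = rng.integers(0, 2, n)
score = rng.beta(2 + group, 5 - group)

threshold = 0.5  # a single detention threshold applied to everyone

for g in (0, 1):
    rate = (score[group == g] >= threshold).mean()
    print(f"group {g}: detention rate under a single threshold = {rate:.2f}")

# Statistical parity instead demands equal detention rates, which here means
# different thresholds per group -- detaining some lower-risk defendants in one
# group while releasing higher-risk ones in the other: the "cost of fairness".
target_rate = 0.30
for g in (0, 1):
    t_g = np.quantile(score[group == g], 1 - target_rate)
    print(f"group {g}: threshold needed for a {target_rate:.0%} rate = {t_g:.2f}")
```

Under a single threshold the two groups are detained at different rates; equalizing those rates requires treating identical scores differently depending on group, which is exactly the trade-off the paper quantifies.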

The Los Angeles Police Department has been using data-driven approaches to predict likely crime hot-spots and to deploy officers more precisely. This pilot project, developed in partnership with the University of California and PredPol, a predictive-policing technology company, uses a “mathematical model of Mohler to predict the areas where crime is likely to occur.” The model is said to have contributed to decreases in burglaries, violent crimes, and property crimes. However, Cathy O’Neil, author of “Weapons of Math Destruction” and one of Data-Pop Alliance’s Research Affiliates, has argued that such models also create a pernicious feedback loop, as “the policing itself spawns new data, which justifies more policing”. This is what O’Neil calls a “weapon of math destruction” (WMD): “math-powered applications that encode human prejudice, misunderstanding and bias into their systems”, reinforcing inequality and punishing the poor. All the more so since these data are then fed into recidivism models such as the ones mentioned above. The toy simulation after the links below makes the feedback loop concrete.

The Los Angeles Police Department Is Predicting and Fighting Crime With Big Data
Mark van Rijmenam, Founder of Datafloq. Datafloq. April 17, 2017.

Justice in the age of big data
IDEAS.TED.COM. April 6, 2017
Excerpted from the new book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O’Neil, data scientist and author of the blog mathbabe.org
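
O’Neil’s feedback loop is easy to reproduce in a toy simulation. The Python sketch below uses invented numbers and is not drawn from any real deployment: it gives two neighborhoods an identical underlying crime rate but an unequal initial patrol allocation, and because recorded crime scales with how closely one looks, while future patrols follow recorded crime, the initial disparity amplifies itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two neighborhoods with the SAME underlying crime rate (values are invented).
true_rate = np.array([0.10, 0.10])
patrols = np.array([10.0, 20.0])  # unequal starting allocation

for year in range(5):
    # Recorded incidents depend on both actual crime and patrol presence:
    # more eyes on a neighborhood means more of its crime gets recorded.
    recorded = rng.poisson(true_rate * patrols * 10)
    # Next year's patrols are reallocated in proportion to recorded crime.
    total = patrols.sum()
    patrols = np.maximum(1.0, total * recorded / max(recorded.sum(), 1))
    print(f"year {year}: recorded={recorded}, next patrols={np.round(patrols, 1)}")
```

Run it and the patrol counts diverge even though the two neighborhoods are, by construction, equally crime-prone: the extra recorded crime in the heavily patrolled neighborhood is an artifact of attention, not of behavior.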

As part of a case study to develop methodologies for tracking the implementation of recommendations issued under the United Nations Universal Periodic Review (UPR) process, text mining and machine learning were assessed as a viable strategy for monitoring gender discrimination within Fiji’s judiciary. An extensive analysis of case-law archives was conducted to determine whether patriarchal beliefs and gender discrimination have a systemic impact on the outcomes of gender-based violence (GBV) sentences in Fiji. Read more in the working paper cited below; a schematic sketch of this kind of text-mining pipeline follows the citation.

Mining Case Law to Improve Countries’ Accountability To Universal Periodic Review
Soline Aubry, Hansdeep Singh, Ivan Vlahinic, Abhimanyu Ramachandran, Sara Fischer, Robert O'Callaghan, Natalie Shoup, Jaspreet Singh, David Sangokoya, Gabriel Pestre and Carson Martinez. Working paper, February 2017. CKM Advisors, ICAAD, Data-Pop Alliance, Global Insight
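
For readers curious what such case-law mining can look like, here is a schematic Python sketch using scikit-learn. It is not the working paper’s actual pipeline, and the miniature corpus and labels are entirely invented; it only illustrates a standard baseline (TF-IDF features plus logistic regression) for flagging sentencing decisions whose reasoning reflects gender bias.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented miniature stand-in for a case-law corpus; the real study mined
# Fiji's case-law archives, which are not reproduced here.
documents = [
    "sentence reduced citing the victim's provocative behaviour",
    "full custodial sentence imposed for a domestic violence offence",
    "suspended sentence after reconciliation between the parties was noted",
    "aggravating factors led to an increased term of imprisonment",
]
# 1 = outcome flagged as influenced by gender-biased reasoning (hypothetical labels)
labels = [1, 0, 1, 0]

# Baseline classifier: bag-of-words TF-IDF features feeding logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(documents, labels)

print(model.predict(["sentence suspended after customary reconciliation"]))
```

At realistic scale the same pipeline would be trained on thousands of annotated judgments and evaluated with held-out data; the point here is only the shape of the approach, not its performance.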
