LWL #38 Algorithmic Justice: The Next Civil Rights Frontier?

LINKS WE LIKE #38

"The first duty of society is justice" - Wendell Phillips

When you think of “civil rights”, many things probably come to mind. Generations-long struggles for racial or gender equality and equal protection for other disenfranchised communities are most commonly associated with the phrase. What you most likely do not think of are algorithms. In fact, unless they are of special interest or part of your job, you probably don’t think much about algorithms at all. In our current digital age, however, this may be a mistake. Algorithms have a significant impact on our lives, in spheres ranging from work to recreation, including what we are shown on our favorite websites (and, in turn, how we view the world). Biased algorithms are also creating new forms of discrimination, impacting the very groups that have fought so tirelessly to secure equal rights. To understand how this is happening, we explore what algorithms are, how they can perpetuate biases, and the fight being waged to ensure that this technology does not become a new source of inequality.

What are Algorithms?

Before delving into the impact algorithms have on social justice, it is important to understand what they actually are. The most basic definition is that an algorithm is “a plan, or step-by-step instructions on how to solve a problem”. Algorithms are extremely important to computer science and act as one of the four “cornerstones” of computational thinking. Essentially, they allow computers to transform vast amounts of data into useful information through a process that involves input, computation, and finally, output. Advancements in Machine Learning (ML) have allowed algorithms to learn from data and previous iterations, becoming more complex (and more useful). Taken together, algorithms are instrumental to most of the technology we use today, at both the personal and societal levels.
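To make the input-computation-output idea concrete, here is a minimal, hypothetical sketch in Python; the data and function name are invented for illustration, and simply show raw numbers (input) being turned into a small summary (output).

```python
# A minimal illustration of an algorithm: input -> computation -> output.
# The data and the function name are hypothetical examples.

def summarize_spending(transactions):
    """Turn a list of transaction amounts (input) into a summary (output)."""
    total = sum(transactions)             # computation: aggregate the data
    average = total / len(transactions)   # computation: derive a statistic
    largest = max(transactions)           # computation: find an extreme value
    return {"total": total, "average": average, "largest": largest}

# Input: raw data
transactions = [12.50, 7.99, 105.00, 23.45]

# Output: useful information
print(summarize_spending(transactions))
```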

Algorithmic Bias

It may seem that algorithms are nothing more than a simple (yet vital) part of the digital ecosystem, helping make our computers work better for us. So how could something so benign be perpetuating biases or even actively harming marginalized groups? The answer actually lies in their ubiquity, as institutions across society increasingly turn to algorithms to streamline operations and advance capabilities. Recent years have brought to light multiple cases of “algorithmic bias”, leading to mostly unintentional (though still harmful) discrimination, often caused by the data on which algorithms are “trained”.

Real-world examples of the impacts of this type of bias abound. Amazon, one of the world’s largest companies, came under fire for an AI-based tool developed to sort through resumes after it was shown to be biased against women. Upon further examination, it was determined that the bias was caused by the AI “learning” from the resume data of mostly male applicants (reflecting the gender skew in the tech industry) and subsequently giving male candidates preference to advance to the interview stage (a simplified illustration of this dynamic follows below). Racial bias has also been shown to be perpetuated by algorithms, manifesting in the form of lower credit scores for Black and other minority borrowers, which can result in higher interest rates or denial of loans. In the medical field, AI-based healthcare scheduling software concluded that Black patients were more likely to “no-show” appointments and therefore relegated them to less desirable time slots, leading to 30% longer wait times compared to non-Black patients. In more serious instances, it has been suggested that ML algorithms may be less likely to recommend life-saving treatment options to Black patients and have more difficulty identifying dermatological conditions on those with darker skin tones. As ML advances, these issues will likely continue to worsen. In fact, Data-Pop Alliance’s own recently published research found that ML algorithms applied to earth observation data had a high likelihood of perpetuating gender bias. With equity in employment, financial inclusion, and healthcare at stake, the need to address these biases is clear, leaving only the question of how best to do so.
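As a rough, hypothetical illustration of how “learning” from skewed historical data reproduces that skew (the numbers are invented and not drawn from the Amazon case itself), the short Python sketch below “trains” a trivial model on made-up hiring records and shows the historical disparity reappearing in its predictions.

```python
from collections import defaultdict

# Hypothetical historical hiring decisions (group, hired) - deliberately skewed,
# echoing how past data can encode past discrimination.
history = [("men", 1)] * 80 + [("men", 0)] * 20 + \
          [("women", 1)] * 20 + [("women", 0)] * 30

# "Training": this naive model simply memorizes each group's past hire rate.
counts = defaultdict(lambda: [0, 0])            # group -> [hired, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1
learned_score = {g: hired / total for g, (hired, total) in counts.items()}

# "Prediction": new applicants are scored by their group's historical rate,
# so the old skew becomes the new recommendation.
for group, score in learned_score.items():
    print(f"{group}: predicted suitability {score:.2f}")
# men score 0.80, women 0.40 - the bias in the data becomes bias in the output.
```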

Algorithmic Justice 

The first steps in fighting algorithmic bias involve recognizing that, unlike other forms of discrimination, it is often unintentional, and that just because data is involved (rather than humans) does not mean neutrality can be assumed. From there, concrete steps can be taken to ensure more equitable design. Recognizing that the datasets on which AI/ML models are “trained” are often biased is vital to finding a solution. Sometimes called “dirty data”, these biased samples increase the likelihood of biased algorithms and should be avoided or compensated for. Luckily, researchers are working both to design tests that determine whether bias exists and, when it is found, to develop methods to compensate for it (a simple sketch of one such test follows below). Algorithmic transparency is another major tool for achieving fairness. Currently, the “black box” nature of algorithms makes investigation into their inner workings nearly impossible, leading to calls for open access from affected individuals and even government regulators. Government action, such as the UK’s new algorithmic transparency framework, will likely play a major role going forward, but like all struggles for equality, a diverse coalition of stakeholders is needed to meet the challenge. Organizations including the Algorithmic Justice League, The Center for Applied Artificial Intelligence, Data & Society, and many others are taking up the mantle and working to make algorithms more transparent, fair, and inclusive. As awareness grows, there is still hope to address these issues now, in the early stages of the Fourth Industrial Revolution, before they have the chance to become ingrained and systemic.
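As a minimal sketch of what such a bias test can look like in practice, the Python example below computes a demographic parity gap (the difference in selection rates between two groups) on invented decisions; the 0.2 threshold is an illustrative rule of thumb, not a universal standard.

```python
# A minimal sketch of one common bias test: comparing selection rates across
# groups ("demographic parity"). The decisions below are invented for illustration.

def selection_rate(decisions):
    """Share of positive decisions (e.g., loan approved = 1)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs for two groups of applicants.
group_a_decisions = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b_decisions = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

gap = selection_rate(group_a_decisions) - selection_rate(group_b_decisions)
print(f"Demographic parity gap: {gap:.2f}")

# A common (but context-dependent) rule of thumb flags large gaps for review.
if abs(gap) > 0.2:
    print("Potential bias detected: audit the training data and model.")
```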

Join us in this edition of Links We Like as we explore the topic of algorithmic justice and what the future holds in the fight against digital inequality.

In this insightful Ted Talk, Dr. Seth Dobrin, Chief Data Officer for IBM Cloud and Cognitive Services, explores the topic of bias in AI. He begins with a brief overview of the role AI already plays in our societies, impacting us on both an individual and societal scale. He moves on to discuss the enormous effort put into ensuring that his current team of data scientists is as diverse as possible, which resulted in a team with twice the diversity of the industry average. Next, he provides an in-depth analysis of how AI bias occurs, using the example of racial bias in mortgage lending that was briefly touched upon in the introduction. One striking aspect of these biases, he emphasizes, is how they are able to evade even strenuous efforts by organizations to account for them, due to a combination of unintentional oversight and the fact that so much of the data the world has collected has biases hidden within it. Finally, he outlines recommendations to mitigate algorithmic bias in AI, including the implementation of end-to-end bias detection and increased diversity among the human teams overseeing AI projects.

In 2016, MIT Media Lab graduate student Joy Buolamwini experienced an upsetting incident with an algorithm first-hand, when facial detection software did not recognize her face until she wore a white mask. After this incident, Dr. Buolamwini was inspired to draw attention to the prevalence of bias in Artificial Intelligence (AI) and the threat it poses to equal rights. To address these concerns, she founded the Algorithmic Justice League (AJL) to raise public awareness of the impact and harm of biased AI. As part of its work, the AJL carefully examines how AI systems are developed in order to prevent harm, relying on four guiding principles: affirmative consent, meaningful transparency, continuous oversight and accountability, and actionable critique. Through a combination of modalities, including art, research, policy guidance, and media advocacy, the Algorithmic Justice League is also raising awareness among the general public and building a cultural movement that pushes towards the development of equitable and accountable AI.

One of the main transformations of the French judicial system has been the introduction of the DataJust software in the courts. This artificial intelligence system was created by decree no. 2020-356 of March 27, 2020, and allows access to personal data (which, in principle, has been anonymized) in order to support better judicial decisions. On this basis, the Council of State endorsed the introduction of DataJust to facilitate decision-making and, specifically, to help define the amount of compensation to which victims of personal injury are entitled. Since January 2022, however, multiple human rights and civil society organizations have denounced violations of data protection rules and the European GDPR framework. Although most of the data is anonymized, some significant details remain, such as dates of birth or family ties, whose visibility could undermine victims’ rights. For this reason, researchers from associations such as Quadrature du Net have argued that, by allowing the use of this software, the French state has abandoned the protection of personal data and privacy.

Written by Ruha Benjamin, a Professor in the Department of African American Studies at Princeton University, this timely book explores engineered inequality, focusing on racism replicated by the digital tools we use in daily life. The author argues that technology and automation entrench racial hierarchies and discrimination while appearing “neutral”. Even technologies built specifically to tackle racial bias can end up deepening discrimination. Drawing on the phrase “Jim Crow”, she defines the “New Jim Code” as: “the employment of new technologies that reflect and reproduce existing inequities, but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era”. The book explores multiple forms of technology and coding, ranging from Polaroid cameras to computer software, and invites readers to question the impact these tools have on their daily lives. It is recommended reading for everyone who strives for a more equal world free of racism.

The phrase “google it” has become so natural and unquestioned that we do not take the time to think about how and why search engines display certain results depending on the prompts we give them. In her book Algorithms of Oppression, Safiya Umoja Noble questions the assumed “neutrality” of these results. Noble argues that rather than being neutral, the algorithms used to determine what we see as “search results” are actually part of the systemic, structural oppression around race and gender. This oppression is constantly reproduced in the way algorithms are created, often through a lack of awareness on the part of engineers and their refusal to accept that their views are biased. Another major issue lies in the “for-profit” business models under which these search engines operate, which turn users into a product to be “sold” to advertisers. The result is an uneven playing field for different ideas and identities, one in which those with money to pay for online advertising dictate what the public sees. In response, Noble proposes more rigorous regulation of search engines and a move away from “the neoliberal capitalist project of commercial search”. Most importantly, she argues that we need to let go of the idea that algorithmic and artificial intelligence decision-making is inherently neutral.
