Data Feminism in the AI Era: A Conversation with Catherine D’Ignazio

Last Tuesday, November 21, Data-Pop Alliance (DPA), through its Data Feminism Network (DFN) initiative, had the pleasure of hosting the webinar “Data Feminism in the AI Era: A Conversation with Catherine D’Ignazio”. The event explored the intersections between the principles of Data Feminism and Artificial Intelligence (AI), particularly Generative AI. Catherine D’Ignazio, co-author of “Data Feminism” and Director of the Data + Feminism Lab at MIT, shared her insights on the subject in a thought-provoking conversation with Julie Ricard, Director of DPA’s Technology and Democracy Program. Almost 100 participants joined the event. Read on to learn more about what went on during the webinar!

DFN Acquisition

The webinar kicked off with an introduction of the group behind DFN, namely the new board members and DPA’s Data Feminism Team, each of whom shared an interesting fact related to data and feminism that resonated with them. For example, Jade Greer (DFN Board Member) mentioned that over 90% of venture capital investors in the USA are men. Ali Dunn, DFN Founder and Board Member, then gave a brief overview of the reasons behind the founding of DFN. Anna Spinardi, Director of DPA’s Data Feminism Program, then discussed some of the plans for the incorporation of DFN, which include a book-movie club, more webinars, a charter, and an ambassadors’ network (more news soon!).

The Conversation with Catherine D’Ignazio

The main event of the webinar, a conversation with Catherine D’Ignazio, began with the following question from Julie Ricard: “What do you think the benefits and hindrances of Generative AI will be?” Catherine responded by expressing her overall hesitancy toward the rush to embrace these technologies, stressing that because they “learn” from existing data, they will likely exacerbate existing inequalities (and are already doing so). As she put it, while the new applications coming out of the field could bring benefits, we must weigh those benefits against their costs.

Julie then moved the discussion to Generative AI (GAI) biases by showing images produced with different Generative AI tools, such as ChatGPT and MidJourney, in response to prompts such as “The President of a Powerful Country” and “A House Worker Cleaning a House”. The images showed former US President Donald Trump in a very “imperialistic” setting and a woman cleaning a house (often in a sexualized manner), respectively. For Catherine, this precisely illustrates the problem she had highlighted: these tools draw on the massive amount of data that already exists, which tends to reflect gender, racial, and other biases. She also expressed concern about the current “hype” around these tools, now that people with little prior AI knowledge or experience are using them much more widely.

Images generated with Generative AI tools with the prompt: "The president of a powerful country"
Images generated with Generative AI tools with the prompt: "A house worker cleaning a house"

The conversation then turned to the current power structures of the global AI ecosystem, in which white men from the Global North are heavily overrepresented. Catherine shared her distrust that these actors will do the work needed to “correct” or account for these biases, and stressed the role of journalists and civil society movements in holding them accountable. Finally, Julie asked about the seven Principles of Data Feminism and which ones Catherine feels are most applicable to AI and GAI. For her, “Examine Power” and “Challenge Power” were the most applicable, as well as Principle 7, “Make Labor Visible”, which highlights the immense societal and environmental costs of these systems. Catherine further brought up the huge amount of natural resources (notably water) that goes into running these systems, an important consideration that is not adequately known or covered in most conversations around the “cost” of AI.

The floor was then opened for a Q&A. Participants’ questions included whether these algorithms can be “trained” to detect bias and how the technology sector can be brought more in line with the goals of feminism. Catherine provided thoughtful answers, highlighting specific steps that can be taken to make Generative AI more inclusive, while again noting the trade-offs (particularly in natural resources) that must be taken into account.

We would like to sincerely thank Catherine D’Ignazio for joining this webinar and lending her expert perspective to these timely and important discussions.

Next Steps

  • To learn more about DFN and stay posted about upcoming projects, click here.
  • To join DPA-DFN’s Book-Movie Club “Among Cyborgs & Feminists” on Eureka (in English, Spanish or Portuguese), click here.
  • To view the entire webinar, click on the video below.