RightsCon 2025: Data-Pop Alliance Participation in the Leading Digital Rights Summit in Taipei

From February 24 to 27, 2025, Data-Pop Alliance (DPA) participated in RightsCon 2025, the world’s leading summit on human rights and technology, held this year in Taipei, Taiwan. Representing DPA at the event were Julie Ricard (Director of our Technology and Democracy Program), Ivette Yáñez (Director of Strategic Communications and Senior Researcher), Letícia Hora (Research and Project Officer), and Laila Lorenzon (Partnerships Manager).

The team led two key sessions, contributing to critical discussions on technology-facilitated gender-based violence (TF-GBV), democracy, and the future of AI, while also engaging with experts and organizations working at the forefront of digital rights worldwide.

This year’s RightsCon brought together policymakers, activists, technologists, and researchers to address the most urgent challenges in digital rights, AI governance, and online safety. The event was inaugurated by Vice President Hsiao Bi-khim and Vice Premier Cheng Li-chiun, alongside the organizing team from Access Now.

With over 350 sessions, the summit featured leading voices such as Audrey Tang, former Minister of Digital Affairs in Taiwan, renowned for her work in digital democracy. It also brought together key organizations, including the Center for Democracy and Technology, Freedom House, Article 19, Afrobarometer, Inter-American Commission on Human Rights (IACHR), UNICEF, Human Rights Center – UC Berkeley School of Law, Global Cyber Security Capacity Centre – University of Oxford, GIZ, Meta, Google, Microsoft, SocialTIC, Digital Security Lab Ukraine, R3D: Red en Defensa de los Derechos Digitales, Surveillance, Bertelsmann Stiftung, Heinrich Böll Foundation, Marie Stopes International, Amnesty International, Africa Knowledge Initiative, Wikimedia Foundation, Global Voices, PEN America, Internet Archive, Internet Freedom Foundation, Social Web Foundation, Internet Lab, UltraViolet, TEDIC, Hiperderecho, Fundación Karisma, HURIDOCS, among many others, fostering collaboration across sectors to shape the future of digital governance.

Ivette Yáñez, Nina da Hora, Julie Ricard (left to right) discussing TF-GBV

Acknowledging Our Supporters

Our participation at RightsCon 2025 was made possible through the generous support of Open Society Foundations, which provided critical financial assistance for our lead members to attend. We are deeply grateful for their commitment to advancing discussions and action on pressing digital rights issues—many of which are explored in this blog post.

DPA’s Sessions at RightsCon 2025

DPA hosted two dynamic sessions focusing on TF-GBV, democracy, and AI. These discussions brought together experts, advocates, and participants to explore challenges and develop strategies for a more just digital future. Below are the session details:

From Paint to Deepfakes: Intimate-Image Abuse Harming Women & Gender-Diverse People in Latin America  

📅 February 25 | 9:00–10:00 AM (Taipei Time) 

👥 Panelists: Julie Ricard (FGV CEAPG, DPA), Ivette Yáñez Soria (DPA), Nina da Hora (Instituto da Hora)

This session examined the alarming rise of digital defamation campaigns in Brazil and Mexico, particularly targeting women and gender-diverse individuals in politics. Panelists discussed how deepfake technology has increasingly been weaponized to undermine public figures, emphasizing the urgent need for policy interventions and survivor-centered solutions.

The Evolution of Intimate-Image Abuse

To frame the discussion, Julie Ricard (moderator and panelist) traced the history of intimate-image abuse, from early cases like IsAnyoneUp?, a website dedicated to sharing stolen or leaked intimate images of women in the early 2010s, to today’s AI-generated non-consensual intimate photos and videos, known as “deepfakes”. She emphasized how technology has amplified deep-rooted misogynistic and patriarchal patterns of violence, increasingly weaponized in coordinated political defamation campaigns against women and gender-diverse individuals.

While intimate-image abuse is a global issue, its manifestations vary by context. Julie highlighted examples from Brazil, where manipulated images were used to undermine the political careers of politicians such as Manuela D’Ávila, ultimately pressuring her to leave politics. This case underscores how TF-GBV not only threatens individual safety but also silences voices in public and political spaces.

The Social and Technical Problem of Deepfakes

Nina da Hora (they/them), a computer scientist and director of Instituto da Hora, explained the technical workings of deepfake technology and its connection to facial recognition tools initially developed for benign purposes, such as secure and convenient identification for security and access control. They emphasized intentionality—highlighting that deepfake abuse is not just a technological issue but a deeply social one, shaped by misogyny and power dynamics.

According to Nina, the problem is not the technology itself, but how it is deliberately repurposed for harm, particularly against women and gender-diverse individuals.

Mexico’s Context: Political Violence and Legal Challenges

Ivette Yáñez provided a Mexican perspective, discussing landmark political events such as the election of the country’s first female president, Claudia Sheinbaum, in 2024. She examined the prevalence of online political violence, including image-based abuse and deepfake campaigns designed to delegitimize Sheinbaum’s leadership and increase polarization through misogynistic attacks focused on her appearance, religious background and supposed lack of capacity to govern.

Ivette also addressed the impact of advocacy efforts and legal advancements in Latin America, particularly Mexico’s Olimpia Law—a reform criminalizing digital violence against women. While a step forward, she noted that existing laws remain vague, making enforcement difficult, and often fail to protect gender-diverse individuals. She stressed that legal measures alone are not enough—a multi-sectoral approach involving civil society, academia, and the private sector is essential to combat intimate-image abuse.

Audience Engagement: Strategies for Resistance

Following the panel, participants enriched the session with discussions emphasizing proactive approaches over reactive responses. For instance, participants raised concerns about:

  • How to report deepfake incidents without re-traumatizing victims
  • The limitations of current content moderation policies
  • Holding tech platforms accountable for their role in amplifying gender-based violence

The session concluded with a participatory activity, where attendees proposed cross-sector strategies to tackle intimate-image abuse. Key recommendations included:

🔹 Civil Society: Strengthen alliances between feminist and digital rights organizations, advocate for preventive measures over punitive ones, establish dedicated helplines for online gender-based violence survivors, and expand digital literacy programs, including self-defense training for female candidates.

🔹 Academia: Integrate digital safety and sex education into curricula for primary, middle, and high school students, fund research positions for women and LGBTQ+ scholars, and study correlations between deepfakes and electoral manipulation.

🔹 Government: Increase transparency in AI governance, enforce stronger legal protections, and criminalize the intentional creation and distribution of non-consensual deepfakes.

🔹 Private Sector & Tech Platforms: Block abusive deepfake prompts, develop more effective content removal mechanisms, reform monetization models that profit from harmful content, and prioritize gender-inclusive AI development.

🔹 Technical Communities: Develop AI-powered tools for detecting and flagging abusive content, and create independent, community-led digital spaces that amplify marginalized voices.

EUREKA Moment: Feminism, Culture & AI to Reimagine Our Futures  

📅 February 26 | 3:15–4:15 PM (Taipei Time)  

🤖 Led by: DPA & Eureka  

This interactive workshop, co-organized with Eureka, explored how intersectional feminism can reshape AI and technology. Participants engaged in hands-on exercises, moving beyond critique to co-creating alternative AI futures. Discussions covered bias in algorithms, feminist data practices, and the role of culture in AI ethics.

Guided by our unique edutainment and popular education methodology, and inspired by our 2024 trilingual Book and Movie Club, “Technology Through Feminist Lenses,” the session blended literature, cinema, and creative AI interactions for an immersive experience.

The selected excerpts watched by participants included:

  • Coded Bias (Shalini Kantayya, documentary): Featuring researcher Joy Buolamwini, this documentary explores racial and gender biases embedded in facial recognition algorithms. It highlights the risks of biased AI and surveillance, urging ethical accountability in tech development.
  • Another Body (Sophie Compton & Reuben Hamlyn, documentary): This powerful documentary examines the deep personal and societal impacts of pornographic deepfake technology, following one individual’s fight against non-consensual imagery and her journey toward activism.
  • The Artifice Girl (Franklin Ritch, film): A fictional narrative about an AI system designed to identify online predators, sparking critical discussions on AI ethics, personhood, and autonomy. Participants debated whether AI entities should have rights and the broader implications of humanizing AI systems.

After watching, participants engaged in a critical conversation about how gender, race, and socioeconomic inequalities are both perpetuated and challenged by emerging technologies.

As a culminating exercise, participants created AI-generated images capturing their “Eureka Moments”—key insights and reflections on feminist AI futures. This creative activity encouraged attendees to imagine inclusive, feminist visions for the future, reinforcing the idea that technology can be shaped ethically, equitably, and collectively.

Key Takeaways from RightsCon 2025: Challenges, Resistance, and the Road Ahead

After attending multiple sessions across RightsCon 2025, we walked away with a deeper understanding of the most salient debates in the digital rights landscape. With hundreds of sessions covering topics such as AI governance, technology-facilitated gender-based violence (TF-GBV), and mis/disinformation, one thing was clear: while the challenges are mounting, so too are the strategies for resistance, adaptation, and change.

The Digital Rights Landscape

The state of digital rights is becoming increasingly precarious, particularly in light of the political climate in the U.S., the reductions in aid across multiple countries, and the conservative turn of major platforms. These shifts are straining the ability of organizations to defend democracy, free speech, and journalism, as well as to combat dis/misinformation, TF-GBV, and other digital threats. Despite these obstacles, many speakers at RightsCon emphasized that there are still pathways for resistance. While frustration is warranted, participants underscored the importance of strategic action, community-driven solutions, and innovative policy responses to counter these threats.

Technology-Facilitated Gender-Based Violence (TF-GBV): TF-GBV continues to escalate, disproportionately targeting women and gender-diverse individuals, particularly those in politics and public life. At RightsCon, multiple panels tackled this issue from different perspectives. We heard from a young Taiwanese female politician who has endured intimate-image abuse online, as well as researchers and advocacy organizations working to push for stronger platform regulation.

A key takeaway was that TF-GBV is both a mirror and an amplifier of real-world violence. Speakers agreed that efforts to address it must be multi-faceted, involving:

    • Policy change
    • Shifting social norms
    • Research into perpetrators
    • Stronger platform regulations

From prevention to punitive measures, every sector has a role to play in dismantling online gender-based violence.

Mis/Disinformation 

The spread of mis/disinformation remains one of today’s most urgent challenges. Discussions at RightsCon explored:

  • How shifting news consumption habits—along with digital divides in regions like Africa—affect how mis/disinformation spreads.
  • Strategies to monitor false information during elections.
  • Legal advocacy efforts to promote information integrity and policy reform.

A recurring theme was that mis/disinformation is highly contextual, meaning that localized solutions are critical to effectively countering its impact.

Content Moderation

RightsCon sessions on content moderation reinforced the complexity of this issue, emphasizing the need for:

  • Stronger technical solutions to protect users in all languages from harmful content.
  • Greater transparency in content moderation outsourcing, ensuring that moderators receive fair wages and mental health support.

Some speakers described current moderation practices as a form of “modern slavery”, highlighting how big tech outsources content moderation to firms in the Global South under exploitative conditions.

While some argue that content moderation risks creating ideological silos, experts countered that harmful and violent content spreads due to platform design itself. The key call to action? “Decentralize interventions on content moderation” to create more equitable, transparent, and effective solutions.

Research and Data Access

Access to social media data is becoming increasingly restricted, with rising costs and limited availability creating barriers for research. These challenges hinder the ability of academics, civil society organizations, and policymakers to:

  • Analyze digital harms
  • Develop evidence-based interventions
  • Monitor online trends

This issue, which has been worsening for over a decade, underscores the need for alternative research strategies and advocacy for data transparency.

Journalism Under Threat

Discussions at RightsCon also highlighted the mounting threats to journalism. While some governments claim to be combating disinformation, new laws are instead being used to restrict freedom of expression, particularly in:

  • Conflict zones like Ukraine
  • Authoritarian regimes

However, the erosion of press freedom is not limited to these contexts—even in democratic nations, journalists are facing increased attacks and declining public trust.

Despite these challenges, the resilience of journalists and independent media outlets was a recurring theme. Many are finding creative and legal avenues to continue their work and hold power to account.

Rethinking Platforms: Stay or Leave

A key debate at RightsCon centered on whether we should:

  • Continue fighting for reform within existing platforms, or
  • Move to alternative spaces that align with digital rights and ethical tech values.

Some speakers argued that real change requires abandoning mainstream platforms and investing in new, decentralized alternatives like Bluesky. Others countered that new platforms may eventually face the same governance issues, and that big tech remains too powerful to ignore.

The emerging consensus? A multi-pronged approach—pushing for stronger regulations while simultaneously developing and investing in alternative platforms—may be the most effective path forward.

Final Thoughts

RightsCon 2025 reinforced a critical message: as digital rights challenges intensify, so do the efforts to address them. From platform accountability to combatting TF-GBV, disinformation, and attacks on journalism, the summit highlighted the importance of cross-sector collaboration, policy advocacy, and grassroots mobilization.

At the same time, while there must be space for grief and emotional processing, urgency demands bold action. The conversations we engaged in left us grounded in the weight of these challenges yet inspired by the resilience and creativity of those fighting for a more just digital future.
