Trust, Regulation, and Human-in-the-Loop AI within the European Region

Artificial Intelligence (AI) systems employ learning algorithms that adapt to their users and environment, with learning either pre-trained or allowed to adapt during deployment. Because AI can optimize its behavior, a unit’s factory model behavior can diverge after release, often at the perceived expense of safety, reliability, and human controllability. Since the Industrial Revolution, trust has ultimately resided in regulatory systems set up by governments and standards bodies. Research into human interactions with autonomous machines demonstrates a shift in the locus of trust: we must trust non-deterministic systems such as AI to self-regulate, albeit within boundaries. This radical shift is one of the biggest issues facing the deployment of AI in the European region.

Author(s): Stuart E. Middleton, Emmanuel Letouzé, Ali Hossaini, Adriane Chapman
