Trust, Regulation, and Human-in-the-Loop AI within the European Region

Artificial Intelligence (AI) systems employ learning algorithms that adapt to their users and environment; this learning may be completed before deployment or allowed to continue afterwards. Because AI can optimize its behaviour, a unit's behaviour can diverge from its factory model after release, often at the perceived expense of safety, reliability, and human controllability. Since the Industrial Revolution, trust has ultimately resided in regulatory systems set up by governments and standards bodies. Research into human interactions with autonomous machines reveals a shift in the locus of trust: we must trust non-deterministic systems such as AI to self-regulate, albeit within boundaries. This radical shift is one of the biggest issues facing the deployment of AI in the European region.

Authors:
Stuart E. Middleton, Emmanuel Letouzé, Ali Hossaini, Adriane Chapman

March, 2022