Governing AI through interaction: situated actions as an informal mechanism for AI regulation

Gleb PAPYSHEV*

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

This article presents the perspective that ethical AI practices are shaped by the interplay between high-level ethical principles, ethical praxis, plans, situated actions, and procedural norms. The argument is grounded in six case studies drawn from fifty interviews with stakeholders involved in AI governance in Russia. Each case study focuses on a different ethical principle—privacy, fairness, transparency, human oversight, social impact, and accuracy. The paper proposes a feedback loop that emerges from human-AI interactions. This loop begins with the operationalization of high-level ethical principles at the company level into ethical praxis and the plans derived from it. Real-world implementation, however, introduces situated actions—unforeseen events that challenge the original plans. Through routinization, these actions turn into procedural norms and feed back into the understanding of the operationalized ethical principles. This feedback loop serves as an informal regulatory mechanism, refining ethical praxis on the basis of contextual experience. The study underscores the importance of bottom-up experiences in shaping AI's ethical boundaries and calls for policies that acknowledge both high-level principles and emerging micro-level norms. This approach can foster responsive AI governance, rooted in both ethical principles and real-world experience.
Original language: English
Pages (from-to): 1109–1120
Number of pages: 12
Journal: AI & Ethics
Volume: 5
Early online date: 27 Mar 2024
DOIs
Publication status: Published - Apr 2025
Externally published: Yes

Funding

Open access funding provided by Hong Kong University of Science and Technology.

Keywords

  • AI governance
  • AI regulation
  • AI ethics
  • Situated actions
  • Plans
