Democratizing Value Alignment: From Authoritarian to Democratic AI Ethics

Linus HUANG, Gleb PAPYSHEV, James WONG

Research output: Other Conference Contributions › Conference Paper (other) › Research › peer-review

Abstract

Value alignment is essential for ensuring that AI systems act in ways that are consistent with human values. Existing approaches, however, such as reinforcement learning from human feedback and constitutional AI, exhibit power asymmetries and lack transparency. These “authoritarian” approaches fail to adequately accommodate a broad array of human opinions, raising concerns about whose values are being prioritized. In response, we introduce the Dynamic Value Alignment approach, theoretically grounded in the principles of parallel constraint satisfaction, which models moral reasoning as a dynamic process that balances multiple value principles. Our approach also enhances users’ moral and epistemic agency by granting them greater control over the values that influence AI behavior. As a more user-centric, transparent, and participatory framework for AI ethics, our approach not only addresses the democratic deficits inherent in current practices but also ensures that AI systems are flexibly aligned with a diverse array of human values.
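The core idea of balancing multiple value principles under user-adjustable weights can be illustrated with a minimal sketch. This is not the paper's implementation; the principle names, candidate actions, and weighted-sum scoring rule are all illustrative assumptions standing in for a fuller parallel constraint satisfaction model.

```python
# Hypothetical sketch: each value principle scores a candidate action in
# [0, 1], and the user's weight profile decides how the principles trade
# off against one another. All names and numbers below are illustrative.

def dynamic_value_alignment(actions, principles, weights):
    """Return the action that best jointly satisfies the weighted principles.

    actions:    list of candidate actions (dicts describing each option)
    principles: dict mapping principle name -> function(action) -> [0, 1]
    weights:    dict mapping principle name -> user-chosen weight
    """
    def score(action):
        # Satisfaction of all principles is aggregated in parallel,
        # rather than applying one fixed rule lexically.
        return sum(weights[name] * principles[name](action)
                   for name in principles)
    return max(actions, key=score)

# Two toy principles scoring a reply to a sensitive question.
principles = {
    "honesty":  lambda a: a["truthful"],
    "kindness": lambda a: a["gentle"],
}
actions = [
    {"name": "blunt truth",  "truthful": 1.0, "gentle": 0.2},
    {"name": "white lie",    "truthful": 0.1, "gentle": 0.9},
    {"name": "gentle truth", "truthful": 0.6, "gentle": 0.6},
]

# Two users with different value profiles get different behavior from
# the same system, without retraining it.
user_a = dynamic_value_alignment(actions, principles,
                                 {"honesty": 0.8, "kindness": 0.2})
user_b = dynamic_value_alignment(actions, principles,
                                 {"honesty": 0.2, "kindness": 0.8})
```

Here `user_a`'s honesty-weighted profile selects the blunt truth, while `user_b`'s kindness-weighted profile selects the white lie, which is the sense in which users' control over the weights steers the system's behavior.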
Original language: English
Publication status: Published - 7 Jun 2024
Externally published: Yes
Event: The Paris Conference on AI and Digital Ethics 2024 - Sciences Po, Paris, France
Duration: 6 Jun 2024 – 7 Jun 2024

Conference

Conference: The Paris Conference on AI and Digital Ethics 2024
Abbreviated title: PCAIDE 2024
Country/Territory: France
City: Paris
Period: 6/06/24 – 7/06/24
