Abstract
Fairness plays a crucial role in human-human interaction, so it is expected to play a significant role in human-AI interaction as well. Integrating the principles of fairness into AI design and investigating people’s perceptions of AI fairness can help improve user experience and ensure AI systems are responsible, trustworthy, ethical, and human-centered. In the current study, we simulated different human behaviors in economic games through a human fairness model and a reinforcement learning approach, and then conducted an experiment to investigate how people perceive AI agents with varying levels of fairness. The study was a within-subject experiment with two treatments (fair vs. unfair AI), in which participants played the Alternated Repeated Ultimatum Game (ARUG) for 12 rounds with each AI agent. The results suggest that participants evaluated the fair AI as higher in warmth, intelligence, animacy, likability, and safety than the unfair AI. These findings indicate that AI that aligns with social norms is more favored by people. We discuss the theoretical implications for understanding people’s behavior and attitudes toward AI fairness, as well as the practical implications for designing AI with the potential to increase fairness in society.
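The abstract does not specify which human fairness model was used, but a common choice in this literature is a Fehr-Schmidt-style inequity-aversion utility. The sketch below is a minimal illustration, not the authors' implementation: it shows how a simulated responder in one ultimatum round might accept or reject a split based on such a utility. The function names, pie size, round structure, and the alpha/beta parameter values are all illustrative assumptions.

```python
# Illustrative sketch only (not the paper's implementation): one ultimatum-game
# round in which a simulated responder applies a Fehr-Schmidt-style
# inequity-aversion utility to decide whether to accept a proposed split.
# All parameter values below are assumptions for illustration.

PIE = 10      # assumed total amount to split each round
ALPHA = 0.9   # assumed aversion to disadvantageous inequity
BETA = 0.25   # assumed aversion to advantageous inequity


def inequity_averse_utility(own: float, other: float) -> float:
    """Fehr-Schmidt utility: own payoff minus penalties for unequal outcomes."""
    return own - ALPHA * max(other - own, 0) - BETA * max(own - other, 0)


def responder_accepts(offer: float) -> bool:
    """Accept if the offered share yields non-negative inequity-averse utility."""
    return inequity_averse_utility(offer, PIE - offer) >= 0


def play_round(proposer_offer: float) -> tuple[float, float]:
    """Return (proposer payoff, responder payoff) for one round."""
    if responder_accepts(proposer_offer):
        return PIE - proposer_offer, proposer_offer
    return 0.0, 0.0  # rejection: both sides earn nothing


if __name__ == "__main__":
    # In the ARUG, roles alternate between rounds; for brevity this only shows
    # the responder side of 12 rounds against stylized "fair" vs. "unfair" offers.
    for label, offers in {"fair": [5] * 12, "unfair": [2] * 12}.items():
        payoffs = [play_round(o) for o in offers]
        print(label, sum(responder for _, responder in payoffs))
```

Under these assumed parameters, an even 5/5 split is accepted while a 2/8 split is rejected, which is one simple way to operationalize a "fair" versus "unfair" agent; the actual agents in the study were trained with reinforcement learning.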
Original language | English |
---|---|
Title of host publication | Artificial Intelligence in HCI: 4th International Conference, AI-HCI 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings |
Editors | Helmut Degen, Stavroula Ntoa |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 421-430 |
Number of pages | 10 |
ISBN (Electronic) | 9783031358913 |
ISBN (Print) | 9783031358906 |
DOIs | |
Publication status | Published - 2023 |
Externally published | Yes |
Publication series
Name | Lecture Notes in Computer Science |
---|---|
Publisher | Springer |
Volume | 14050 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Name | International Conference on Human-Computer Interaction |
---|---|
Publisher | Springer |
Volume | HCII 2023 |
Bibliographical note
Publisher Copyright: © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
Funding
This work was supported by the National Natural Science Foundation of China (Grant No. T2192931).
Keywords
- Alternated Repeated Ultimatum Game
- Fairness
- Human-centered AI
- Perception of AI fairness