How is the AI Perceived When It Behaves (Un)Fairly?

Yang CHU, Jiahao LI, Jie XU*

*Corresponding author for this work

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review


Abstract

Fairness plays a crucial role in human-human interaction, so it is expected to play a significant role in human-AI interaction as well. Integrating the principles of fairness into AI design and investigating people’s perceptions of it can help improve user experience and ensure AI systems are responsible, trustworthy, ethical, and human-centered. In the current study, we simulated different human behaviors in economic games through a human fairness model and a reinforcement learning approach, and then conducted an experiment to investigate how people perceive AI agents with varying levels of fairness. The study was a within-subject experiment with 2 treatments (fair vs. unfair AI), in which the participants played the Alternated Repeated Ultimatum Game (ARUG) for 12 rounds with each AI agent. The results suggest that the participants evaluated the fair AI as having higher levels of warmth, intelligence, animacy, likability, and safety compared to the unfair AI. These findings indicate that AI that aligns with social norms is more favored by people. We discuss the theoretical implications for comprehending people’s behavior and attitudes towards AI fairness and the practical implications for designing AI that has the potential to increase fairness in society.
Original language: English
Title of host publication: Artificial Intelligence in HCI: 4th International Conference, AI-HCI 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Proceedings
Editors: Helmut DEGEN, Stavroula NTOA
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 421-430
Number of pages: 10
ISBN (Electronic): 9783031358913
ISBN (Print): 9783031358906
DOIs
Publication status: Published - 2023
Externally published: Yes

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 14050
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Name: International Conference on Human-Computer Interaction
Publisher: Springer
Volume: HCII 2023

Bibliographical note

Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. T2192931).

Keywords

  • Alternated Repeated Ultimatum Game
  • Fairness
  • Human-centered AI
  • Perception of AI fairness
