A pluralist hybrid model for moral AIs

Fei SONG*, Shing Hay Felix YEUNG

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

With the increasing degrees of autonomy of AIs and machines, the need to implement ethics in AIs is pressing. In this paper, we first survey current approaches to moral AIs and their inherent limitations. We then propose the pluralist hybrid approach and show how it can partly alleviate these limitations. The core ethical decision-making capacity of an AI based on the pluralist hybrid approach consists of two systems. The first is a deterministic algorithmic system that embraces different moral rules for making explicit moral decisions. The second is a machine learning system that calculates the values of the variables required to apply those moral principles. The pluralist hybrid system improves on existing proposals: by including distinct moral principles, it better addresses the moral disagreement problem faced by the top-down approach. Moreover, it reduces the opacity of ethical decision-making by making moral decisions through explicit moral principles.
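The paper publishes no code, but the two-system architecture described in the abstract can be illustrated with a minimal, purely hypothetical sketch: a learned component estimates the morally relevant variables, and a deterministic layer applies several explicit principles to them. All names below (MoralPrinciple, estimate_variables, decide) are assumptions introduced for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a pluralist hybrid decision pipeline; not the authors' code.
from dataclasses import dataclass
from typing import Callable, Dict, List

Situation = Dict[str, float]  # variables a principle needs, e.g. expected harm


@dataclass
class MoralPrinciple:
    """A deterministic rule: given estimated variables, it permits or forbids an action."""
    name: str
    permits: Callable[[Situation], bool]


def estimate_variables(raw_observation: Dict[str, float]) -> Situation:
    """Stand-in for the machine learning system: in the proposal, a learned model
    estimates the values (e.g. likelihood and severity of harm) that the explicit
    principles need; here we simply pass through toy numbers."""
    return {
        "expected_harm": raw_observation.get("expected_harm", 0.0),
        "expected_benefit": raw_observation.get("expected_benefit", 0.0),
        "violates_consent": raw_observation.get("violates_consent", 0.0),
    }


def decide(variables: Situation, principles: List[MoralPrinciple]) -> Dict[str, bool]:
    """Deterministic layer: apply each explicit principle and report its verdict,
    keeping the reasoning inspectable (the point about reduced opacity)."""
    return {p.name: p.permits(variables) for p in principles}


if __name__ == "__main__":
    principles = [
        MoralPrinciple("utilitarian", lambda s: s["expected_benefit"] > s["expected_harm"]),
        MoralPrinciple("deontological", lambda s: s["violates_consent"] < 0.5),
    ]
    variables = estimate_variables(
        {"expected_harm": 0.2, "expected_benefit": 0.7, "violates_consent": 0.1}
    )
    print(decide(variables, principles))  # {'utilitarian': True, 'deontological': True}
```

Keeping several principles side by side, rather than hard-coding a single theory, is what the abstract presents as the pluralist response to the moral disagreement problem.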
Original language: English
Pages (from-to): 891-900
Number of pages: 10
Journal: AI and Society
Volume: 39
Issue number: 3
Early online date: 27 Nov 2022
DOIs
Publication status: Published - Jun 2024

Bibliographical note

We are grateful to all the audience members who gave us inspiring feedback at the CEPE/IACAP Joint Conference 2021: Philosophy and Ethics of Artificial Intelligence.

Publisher Copyright:
© 2022, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.

Keywords

  • Moral AIs
  • Hybrid system
  • Moral disagreement problem
  • Opacity problem

