Robot lies in health care: when is deception morally permissible?

Research output: Journal Publications › Journal Article (refereed)

9 Citations (Scopus)

Abstract

Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot’s workings, capabilities, and internal structure. The robot’s real capabilities may diverge from this mental model to the extent that one might accuse the robot’s manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g., conversational systems in elder-care contexts, or toy robots in child care). This raises the question of whether misleading or even actively deceiving the user of an autonomous artifact about the capabilities of the machine is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled for robot deception to be morally permissible, and in some cases even morally indicated.
Original language: English
Pages (from-to): 169-192
Number of pages: 24
Journal: Kennedy Institute of Ethics Journal
Volume: 25
Issue number: 2
DOIs: 10.1353/ken.2015.0007
Publication status: Published - 1 Jun 2015

Fingerprint

Deception, Robot, Robotics, Health care, Delivery of Health Care, Child care, Play and Playthings, Toy, Artifact, Erosion, Autonomy

Cite this

@article{5976d9ef8ae1415c9b59b6e1fca5291d,
title = "Robot lies in health care: when is deception morally permissible?",
abstract = "Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot’s workings, capabilities, and internal structure. The robot’s real capabilities may diverge from this mental model to the extent that one might accuse the robot’s manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g., conversational systems in elder-care contexts, or toy robots in child care). This raises the question of whether misleading or even actively deceiving the user of an autonomous artifact about the capabilities of the machine is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled for robot deception to be morally permissible, and in some cases even morally indicated.",
author = "Andreas MATTHIAS",
year = "2015",
month = "6",
day = "1",
doi = "10.1353/ken.2015.0007",
language = "English",
volume = "25",
pages = "169--192",
journal = "Kennedy Institute of Ethics Journal",
issn = "1054-6863",
publisher = "Johns Hopkins University Press",
number = "2",

}

Robot lies in health care: when is deception morally permissible? / MATTHIAS, Andreas.

In: Kennedy Institute of Ethics Journal, Vol. 25, No. 2, 01.06.2015, p. 169-192.

Research output: Journal Publications › Journal Article (refereed)

TY - JOUR

T1 - Robot lies in health care: when is deception morally permissible?

AU - MATTHIAS, Andreas

PY - 2015/6/1

Y1 - 2015/6/1

AB - Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot’s workings, capabilities, and internal structure. The robot’s real capabilities may diverge from this mental model to the extent that one might accuse the robot’s manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g., conversational systems in elder-care contexts, or toy robots in child care). This raises the question of whether misleading or even actively deceiving the user of an autonomous artifact about the capabilities of the machine is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled for robot deception to be morally permissible, and in some cases even morally indicated.

UR - http://commons.ln.edu.hk/sw_master/4398

U2 - 10.1353/ken.2015.0007

DO - 10.1353/ken.2015.0007

M3 - Journal Article (refereed)

C2 - 26144538

VL - 25

SP - 169

EP - 192

JO - Kennedy Institute of Ethics Journal

JF - Kennedy Institute of Ethics Journal

SN - 1054-6863

IS - 2

ER -