Abstract
With rapid technological development, humans are increasingly likely to work cooperatively with intelligent systems in everyday life and work. As in interpersonal teamwork, the effectiveness of human-machine teams is affected by conflicts. Some human-machine conflicts arise when neither the human nor the system is at fault, for example, when the human and the system formulate different but equally effective plans to achieve the same goal. In this study, we conducted two experiments to explore the effects of human-machine plan conflict and of different conflict resolution approaches (the human adapting to the system, the system adapting to the human, and transparency design) in a computer-aided visual search task. The results of the first experiment showed that when conflicts occurred, participants reported higher mental load during the task, performed worse, and gave lower subjective evaluations of the aid. The second experiment showed that all three conflict resolution approaches were effective in maintaining task performance; however, only the transparency design and the human-adapting-to-the-system approaches were effective in reducing mental load and improving subjective evaluations. The results highlight the need to design appropriate human-machine conflict resolution strategies to optimize system performance and user experience.
Original language | English |
---|---|
Article number | 103377 |
Journal | International Journal of Human Computer Studies |
Volume | 193 |
Early online date | 24 Sept 2024 |
DOIs | |
Publication status | E-pub ahead of print - 24 Sept 2024 |
Bibliographical note
Publisher Copyright: © 2024 Elsevier Ltd
Keywords
- Adaptive automation
- Conflict resolution
- Human-machine conflict
- Transparency design
- Visual search