Abstract
Treating depression is challenging due to the shortage of healthcare professionals and the stigma surrounding depression. Artificial intelligence (AI) can help overcome these obstacles, particularly by reducing the perceived judgment associated with depression stigma. Nonetheless, standalone AI systems may not assume accountability for potential adverse outcomes. To resolve this tension, we propose the AI-human hybrid for depression treatment, an integration of AI and human intelligence. Employing a trust theory framework, we assess patient evaluations of three service agents: online human physicians, standalone AI systems, and AI-human hybrids. We investigate their effects on trusting beliefs and the intention to use these agents, focusing on perceived judgment and perceived accountability. Our scenario-based experiment reveals that AI-human hybrids enhance perceived accountability and diminish perceived judgment. Perceived judgment hampers trust, whereas perceived accountability builds trust, which in turn shapes the intention to use healthcare service agents. The study underscores the importance of integrating AI into mental healthcare services, offering both theoretical insights and practical implications.
Original language | English
---|---
Title of host publication | PACIS 2024 Proceedings
Publisher | Association for Information Systems
Publication status | Published - Jul 2024
Event | The annual Pacific Asia Conference on Information Systems (PACIS) 2024, Ho Chi Minh City, Viet Nam, 1 Jul 2024 → 5 Jul 2024, https://aisel.aisnet.org/pacis2024/
Conference
Conference | The annual Pacific Asia Conference on Information Systems (PACIS) 2024
---|---
Abbreviated title | PACIS2024
Country/Territory | Viet Nam
City | Ho Chi Minh City
Period | 1/07/24 → 5/07/24
Internet address | https://aisel.aisnet.org/pacis2024/
Keywords
- Artificial intelligence
- depression treatment
- trust
- judgement
- accountability