Towards Transparent Deep Learning in Medicine: Feature Contribution and Attention Mechanism-Based Explainability

Thanveer SHAIK*, Xiaohui TAO, Haoran XIE, Lin LI, Niall HIGGINS, Juan D. VELÁSQUEZ

*Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

1 Citation (Scopus)

Abstract

Artificial intelligence (AI) techniques are increasingly employed in mental health for remote patient monitoring, enabling the prediction of vital signs and classification of physical activities, which are essential for proactive patient care. However, the black-box nature of deep learning models limits their explainability, a critical factor in clinical applications where clinicians require transparent, reliable decision-making tools to support clinical interventions. In non-invasive monitoring, sensor data and clinical attributes serve as input features for predicting patient health outcomes. Understanding how these features contribute to model predictions is crucial for informed clinical decisions in a mental health context. This study proposes a novel quantitative explainability framework (QEF) that provides both post-hoc and intrinsic explainability for regression and classification tasks in deep learning models. The framework combines Shapley values, which quantify each feature's contribution, with attention mechanisms, which make the models intrinsically interpretable. Two deep learning models, an artificial neural network (ANN) and an attention-based bidirectional long short-term memory (BiLSTM) network, were applied to predict heart rate and classify physical activities from sensor data, achieving state-of-the-art performance. Attention weights and Shapley values were computed for each input feature to provide global and local explanations, offering insights into the models' behavior and feature importance. The QEF framework was evaluated on the PPG-DaLiA dataset for heart rate prediction and the MHEALTH dataset for physical activity classification. To address the computational complexity of exact Shapley value calculation, a Monte Carlo approximation method was implemented, reducing time and resource demands. This study introduces QEF as a practical solution for balancing model performance with explainability, providing clinicians with interpretable insights from deep learning models in psychiatry and mental health.
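
The Monte Carlo approximation mentioned in the abstract replaces the exact Shapley computation, whose cost grows exponentially in the number of input features, with an average of marginal contributions over random feature permutations. The paper's own implementation is not reproduced on this record page; the following is a minimal illustrative sketch in Python/NumPy of that general technique, where the names monte_carlo_shapley, model_fn, background, and n_samples are assumptions for illustration, not the authors' API:

    import numpy as np

    def monte_carlo_shapley(model_fn, x, background, n_samples=200, seed=None):
        # Approximate Shapley values for one instance by averaging marginal
        # contributions of each feature over random permutations.
        #   model_fn   : callable mapping an (n, d) array to (n,) model outputs
        #   x          : (d,) instance to explain
        #   background : (m, d) reference rows used to represent "absent" features
        rng = np.random.default_rng(seed)
        d = x.shape[0]
        phi = np.zeros(d)
        for _ in range(n_samples):
            perm = rng.permutation(d)
            # Start from a random background row: all features "absent".
            z = background[rng.integers(len(background))].astype(float)
            prev = model_fn(z[None, :])[0]
            for j in perm:
                z[j] = x[j]                        # reveal feature j
                curr = model_fn(z[None, :])[0]
                phi[j] += curr - prev              # marginal contribution of j
                prev = curr
        return phi / n_samples                     # estimated Shapley values

Within each permutation the marginal contributions telescope, so they sum exactly to the gap between the model's output on the full instance and on the sampled background row; the averaged estimates therefore approximately retain the Shapley efficiency property, while cost grows linearly in n_samples rather than exponentially in the number of features, which is the time and resource reduction the abstract refers to.
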
Original language: English
Pages (from-to): 209-229
Number of pages: 21
Journal: Human-Centric Intelligent Systems
Volume: 5
Issue number: 2
Early online date: 21 Jun 2025
DOIs
Publication status: Published - Jun 2025

Bibliographical note

Xiaohui Tao, Haoran Xie, Lin Li, Niall Higgins and Juan D. Velásquez contributed equally to this work.

Publisher Copyright:
© The Author(s) 2025.

Keywords

  • Attention
  • Explainability
  • Monte Carlo
  • Physical activities
  • Shapley
  • Vital signs
