Abstract
In multi-site studies of Alzheimer’s disease (AD), data heterogeneity across sites degrades model performance at the target sites. Traditional domain adaptation methods require sharing data from both the source and target domains, which raises data privacy issues. To address this, federated learning is adopted, as it allows models to be trained on multi-site data in a privacy-preserving manner. In this paper, we propose a multi-site federated domain adaptation framework via Transformer (FedDAvT), which not only protects data privacy but also eliminates data heterogeneity. A Transformer network is used as the backbone to extract correlations among multi-template region-of-interest (ROI) features, capturing rich brain information. The self-attention maps of the source and target domains are aligned with a mean squared error loss for subdomain adaptation. Finally, we evaluate our method on multi-site databases built from three AD datasets. The experimental results show that the proposed FedDAvT is quite effective, achieving accuracies of 88.75%, 69.51%, and 69.88% on the AD vs. NC, MCI vs. NC, and AD vs. MCI binary classification tasks, respectively.
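The core adaptation step described in the abstract, aligning self-attention maps across domains with a mean squared error loss, can be sketched as follows. This is an illustrative sketch rather than the authors' implementation: the single-head attention block, the feature dimension, and the number of ROIs (90) are assumptions made purely for demonstration.

```python
# Minimal sketch (not the authors' code) of aligning source- and target-domain
# self-attention maps over ROI features with a mean squared error loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RoiSelfAttention(nn.Module):
    """Single-head self-attention over region-of-interest (ROI) features (assumed setup)."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor):
        # x: (batch, num_rois, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / x.size(-1) ** 0.5, dim=-1)
        # Return both the attended features and the attention map itself,
        # since the map is what gets aligned across domains.
        return attn @ v, attn


def attention_alignment_loss(attn_src: torch.Tensor, attn_tgt: torch.Tensor) -> torch.Tensor:
    """MSE between source-domain and target-domain self-attention maps."""
    return F.mse_loss(attn_src, attn_tgt)


if __name__ == "__main__":
    torch.manual_seed(0)
    block = RoiSelfAttention(dim=32)          # feature dimension is an assumption
    x_src = torch.randn(8, 90, 32)            # e.g. 90 ROIs per subject (assumed)
    x_tgt = torch.randn(8, 90, 32)
    _, a_src = block(x_src)
    _, a_tgt = block(x_tgt)
    print(attention_alignment_loss(a_src, a_tgt).item())
```

In a federated setting, a loss of this form would be computed locally at each site and combined with the classification objective, so that only model updates, never raw imaging data, leave the site.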
| Original language | English |
| --- | --- |
| Pages (from-to) | 3651-3664 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Medical Imaging |
| Volume | 42 |
| Issue number | 12 |
| Early online date | 1 Aug 2023 |
| DOIs | |
| Publication status | Published - 1 Dec 2023 |
Bibliographical note
Publisher Copyright: © 1982-2012 IEEE.
Keywords
- Adaptation models
- Alzheimer's disease
- Data privacy
- Domain adaptation
- Feature extraction
- Federated learning
- Hospitals
- Magnetic resonance imaging
- Transformer