Abstract
Federated Learning (FL) faces challenges due to straggler clients that impede timely parameter uploads, potentially leading to suboptimal global model performance. Existing approaches based on synchronous and asynchronous communication suffer from long waiting times or convergence issues, respectively. We propose Fed-OGD, a novel asynchronous FL method that addresses the straggler problem through gradient orthogonalization. Our approach innovatively frames the straggler issue in terms of catastrophic forgetting theory, viewing stragglers as instances of the global model "forgetting" to aggregate their parameters. Fed-OGD introduces an Orthogonal Gradient Descent (OGD) technique that caches straggler gradients and orthogonalizes the current active client gradients against them: by projecting the active gradients onto an orthogonal basis of the straggler gradients and subtracting the resulting components, we obtain orthogonalized gradients that guide the model towards optimality without overwriting the stragglers' contributions. We provide theoretical convergence guarantees and demonstrate Fed-OGD's effectiveness through extensive experiments. Our method achieves state-of-the-art performance across multiple datasets, with notable improvements in non-IID (non-independent and identically distributed) scenarios, in which each client holds many samples from a few dominant categories and only a few samples from the remaining categories. Fed-OGD achieves a 16.66% accuracy improvement on CIFAR-10, along with significant gains on CIFAR-100 (5.37%), Tiny-ImageNet (38.51%), and AG_NEWS (16.30%).
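The projection-and-subtraction step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of gradient orthogonalization under assumed names (`orthogonalize_gradient`, the flattened-gradient representation), not the authors' implementation: an orthonormal basis of the cached straggler gradients is built via QR decomposition, the active gradient's component inside that subspace is computed, and the remainder is the orthogonalized update.

```python
import numpy as np

def orthogonalize_gradient(active_grad, straggler_grads):
    """Remove from active_grad its component inside the subspace
    spanned by the cached straggler gradients.

    active_grad: flattened gradient of an active client, shape (d,)
    straggler_grads: list of cached flattened straggler gradients, each shape (d,)
    """
    # Stack cached straggler gradients as columns: shape (d, k).
    S = np.stack(straggler_grads, axis=1)
    # Orthonormal basis of the straggler subspace (reduced QR).
    Q, _ = np.linalg.qr(S)
    # Component of the active gradient lying inside the straggler subspace.
    projection = Q @ (Q.T @ active_grad)
    # Orthogonalized gradient: orthogonal to every cached straggler gradient.
    return active_grad - projection

# Toy check: after orthogonalization, the update no longer has any
# component along the cached straggler direction.
g = orthogonalize_gradient(np.array([1.0, 2.0, 3.0]),
                           [np.array([1.0, 0.0, 0.0])])
# g is [0.0, 2.0, 3.0]: the first coordinate (the straggler direction) is removed.
```

The orthogonalized update leaves the loss decrease along the cached straggler directions untouched, which is how the catastrophic-forgetting view motivates the construction.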
| Original language | English |
|---|---|
| Pages (from-to) | 3018-3031 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Computers |
| Volume | 74 |
| Issue number | 9 |
| Early online date | 30 Jun 2025 |
| DOIs | |
| Publication status | Published - 2025 |
Bibliographical note
Publisher Copyright: © 1968-2012 IEEE.
Funding
This work was supported in part by the National Key Research and Development Program of China under Grant 2022YFF1101100, in part by the National Natural Science Foundation of China under Grant 62202039, and in part by the financial support of Lingnan University (LU) under Grant DR25F4 and Lam Woo Research Fund at LU under Grant LWP20021.
Keywords
- Federated learning
- catastrophic forgetting
- gradient orthogonalization
- straggler issue
Title: Fed-OGD: Mitigating Straggler Effects in Federated Learning via Orthogonal Gradient Descent
Related projects
- Unveiling a New Era of Mobile Crowdsensing: Balancing Privacy and Utility via Context-Aware Techniques. SHEN, J. (PI), 15/01/25 → 14/07/26. Project: Grant Research
- Balancing User Privacy and Data Utility in Mobile Crowdsensing. SHEN, J. (PI), 27/03/23 → 26/03/26. Project: Grant Research