Abstract
Increasingly deep neural networks hinder the democratization of privacy-enhancing distributed learning, such as federated learning (FL), to resource-constrained devices. To overcome this challenge, in this paper, we advocate the integration of the edge computing paradigm and parallel split learning (PSL), allowing multiple edge devices to offload substantial training workloads to an edge server via layer-wise model split. Observing that existing PSL schemes incur excessive training latency and a large volume of data transmissions, we propose an innovative PSL framework, namely, efficient parallel split learning (EPSL), to accelerate model training. To be specific, EPSL parallelizes client-side model training and reduces the dimension of activations' gradients for backpropagation (BP) via last-layer gradient aggregation, leading to a significant reduction in server-side training and communication latency. Moreover, by considering the heterogeneous channel conditions and computing capabilities at edge devices, we jointly optimize subchannel allocation, power control, and cut layer selection to minimize the per-round latency. Simulation results show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy compared with state-of-the-art benchmarks, and that the tailored resource management and layer split strategy considerably reduces latency compared with the counterpart without optimization.
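The last-layer gradient aggregation idea described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only shows, under assumed toy dimensions (4 clients, batch size 8, cut-layer width 16), how averaging the per-sample gradients of the last layer's activations shrinks the gradient payload that must flow backward through the server-side model and over the network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: 4 edge devices, each sending a mini-batch of
# cut-layer activations to the edge server (values are placeholders).
num_clients, batch, dim = 4, 8, 16
activations = [rng.normal(size=(batch, dim)) for _ in range(num_clients)]

# Server side: stack all client activations into one batch.
stacked = np.concatenate(activations, axis=0)            # shape (32, 16)

# Per-sample gradients of the loss w.r.t. the last-layer activations;
# in practice these come from the loss backward pass.
last_layer_grads = rng.normal(size=stacked.shape)        # shape (32, 16)

# Last-layer gradient aggregation: average gradients over all samples,
# so a single gradient vector, rather than one per sample, propagates
# backward through the server-side model and back to the clients.
aggregated = last_layer_grads.mean(axis=0, keepdims=True)  # shape (1, 16)

# The aggregated gradient is num_clients * batch times smaller.
reduction_factor = last_layer_grads.size // aggregated.size
print(aggregated.shape, reduction_factor)
```

Under these assumptions the backward payload shrinks by a factor equal to the total number of samples per round, which is the source of the server-side training and communication latency reduction the abstract claims.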
| Original language | English |
|---|---|
| Pages (from-to) | 9224-9239 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Mobile Computing |
| Volume | 23 |
| Issue number | 10 |
| Early online date | 26 Jan 2024 |
| DOIs | |
| Publication status | Published - Oct 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2002-2012 IEEE.
Funding
The work of Xianhao Chen was supported in part by the HKU IDS Research Seed Fund under Grant IDS-RSF2023-0012. The work of Kaibin Huang was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region, China, under Grant HKU RFS2122-7S04, and in part by the Areas of Excellence scheme under Grants AoE/E-601/22-R and 17208319. The work of Yiqin Deng was supported in part by the National Natural Science Foundation of China under Grant 62301300.
Keywords
- Distributed learning
- edge computing
- edge intelligence
- resource management
- split learning