Abstract
Federated learning (FL) allows multiple parties (distributed devices) to train a machine learning model without sharing raw data. How to effectively and efficiently utilize the resources on the devices and the central server is a highly interesting yet challenging problem. In this article, we propose an efficient split FL (ESFL) algorithm that takes full advantage of the powerful computing capabilities of a central server under a split FL framework with heterogeneous end devices (EDs). By splitting the model into different submodels between the server and the EDs, our approach jointly optimizes the user-side workload and the server-side computing resource allocation while accounting for user heterogeneity. We formulate the whole optimization problem as a mixed-integer nonlinear program, which is NP-hard, and develop an iterative approach to obtain an approximate solution efficiently. Extensive simulations have been conducted to validate the significantly improved efficiency of our ESFL approach compared with standard FL, split learning, and splitfed learning.
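To make the model-splitting idea concrete, the minimal sketch below shows one training step of a generic split-learning setup in PyTorch: the network is cut into a device-side submodel and a server-side submodel, the device forwards its cut-layer activations to the server, and the server returns the cut-layer gradients so the device can finish backpropagation. The layer sizes, cut point, optimizers, and variable names are illustrative assumptions only; this is not the ESFL implementation and omits its joint workload and resource-allocation optimization.

```python
# Hypothetical sketch of one split-training step (assumed sizes and cut point).
import torch
import torch.nn as nn

# Device-side submodel (runs on an end device) and server-side submodel.
client_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
server_model = nn.Sequential(nn.Linear(64, 10))

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)          # a local mini-batch held by the end device
y = torch.randint(0, 10, (8,))  # its labels

# 1) Device forward pass up to the cut layer; the activations ("smashed data")
#    would be transmitted to the server.
smashed = client_model(x)
smashed_srv = smashed.detach().requires_grad_()  # stand-in for transmission

# 2) Server completes the forward pass, computes the loss, and backpropagates
#    through its submodel, yielding gradients at the cut layer.
server_opt.zero_grad()
loss = loss_fn(server_model(smashed_srv), y)
loss.backward()
server_opt.step()

# 3) The cut-layer gradient is sent back to the device, which completes
#    backpropagation through its own submodel and updates it.
client_opt.zero_grad()
smashed.backward(smashed_srv.grad)
client_opt.step()
```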
| Original language | English |
|---|---|
| Pages (from-to) | 27153-27166 |
| Number of pages | 14 |
| Journal | IEEE Internet of Things Journal |
| Volume | 11 |
| Issue number | 16 |
| Early online date | 7 May 2024 |
| DOIs | |
| Publication status | Published - 15 Aug 2024 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- Distributed machine learning (ML)
- federated learning (FL)
- split learning
- wireless networking