Abstract
Federated learning (FL) offers a privacy-preserving machine learning paradigm in which multiple clients collaboratively train a global model while accessing only their local data. Recent research in FL has increasingly focused on improving the uniformity of model performance across clients, a fairness principle known as egalitarian fairness. However, achieving egalitarian fairness in FL may sacrifice the model performance of data-rich clients to benefit those with less data. This tradeoff raises concerns about the stability of FL, as data-rich clients may opt to leave the current coalition and join another that is more closely aligned with their expected high performance. In this context, our work rigorously addresses the critical question: Does egalitarian fairness lead to instability? Drawing from game theory and social choice theory, we first characterize fair FL systems as altruism coalition formation games (ACFGs) and reveal that the instability issues arising from the pursuit of egalitarian fairness are closely related to the clients' altruism within the coalition and the configuration of the friends-relationship networks among the clients. We then theoretically derive the optimal egalitarian fairness bounds that an FL coalition can achieve while maintaining core stability under various types of altruistic behaviors. These theoretical contributions clarify the quantitative relationships between achievable egalitarian fairness and the disparities in the sizes of local datasets, disproving the misconception that egalitarian fairness inevitably leads to instability. Finally, we conduct experiments to evaluate the consistency of our theoretically derived egalitarian fairness bounds with the egalitarian fairness empirically achieved in fair FL settings.
Original language | English |
---|---|
Title of host publication | Advances in Neural Information Processing Systems 37 (NeurIPS 2024) |
Editors | A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, C. Zhang |
Publisher | Neural Information Processing Systems Foundation |
Number of pages | 27 |
Volume | 37 |
ISBN (Electronic) | 9798331314385 |
Publication status | Published - 2024 |
Event | 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver, Canada. Duration: 9 Dec 2024 → 15 Dec 2024 |
Publication series
Name | Advances in Neural Information Processing Systems |
---|---|
Publisher | Neural Information Processing Systems Foundation |
Volume | 37 |
ISSN (Print) | 1049-5258 |
Conference
Conference | 38th Conference on Neural Information Processing Systems, NeurIPS 2024 |
---|---|
Country/Territory | Canada |
City | Vancouver |
Period | 9/12/24 → 15/12/24 |
Bibliographical note
Publisher Copyright: © 2024 Neural Information Processing Systems Foundation. All rights reserved.
Funding
This work was supported by Key Programs of Guangdong Province under Grant 2021QN02X166. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding parties.