Secure Federated Learning with Model Compression

Yahao DING, Mohammad SHIKH-BAHAEI, Chongwen HUANG, Weijie YUAN

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review

Abstract

Although federated learning (FL) has become very popular recently, it is vulnerable to gradient leakage attacks: recent studies have shown that attackers can reconstruct clients' private data from shared models or gradients. Many existing works add privacy protection mechanisms, such as differential privacy (DP) and homomorphic encryption, to prevent user privacy leakage. However, these defenses may increase computation and communication costs or degrade FL performance, and they do not consider the impact of wireless network resources on the FL training process. Herein, we propose a defense method, weight compression, to prevent gradient leakage attacks for FL over wireless networks. The gradient compression matrix is determined by each user's location and channel conditions. Moreover, we add Gaussian noise to the compressed gradients to strengthen the defense. The joint design of learning, wireless resource allocation, and the weight compression matrix is formulated as an optimization problem whose objective is to minimize the FL loss function. To find the solution, we first analyze the convergence rate of FL and quantify the effect of the weight matrix on FL convergence. We then seek the optimal resource block (RB) allocation by exhaustive search or ant colony optimization (ACO), and use the CVX toolbox to obtain the weight matrix that minimizes the objective. Our simulation results show that the optimized RB allocation accelerates the convergence of FL.
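
As an illustrative aid (not part of the published record), the Python sketch below shows the core defense described in the abstract: a client's flattened gradient is compressed with a weight matrix and perturbed with Gaussian noise before it is shared. The compression matrix W, the noise scale sigma, and the dimensions are hypothetical placeholders; in the paper the matrix is determined by the user's location and channel conditions and optimized with CVX, which is not reproduced here.

import numpy as np

def compress_and_perturb(gradient: np.ndarray,
                         W: np.ndarray,
                         sigma: float,
                         rng: np.random.Generator) -> np.ndarray:
    """Compress a flattened gradient with weight matrix W and add Gaussian noise.

    W is assumed to be a (k x d) compression matrix with k < d; sigma is an
    illustrative noise scale, not a value from the paper.
    """
    compressed = W @ gradient                         # dimensionality reduction: d -> k
    noise = rng.normal(0.0, sigma, size=compressed.shape)
    return compressed + noise                         # perturbed update shared with the server

# Minimal usage example with made-up sizes.
rng = np.random.default_rng(0)
d, k = 1000, 100                                      # original and compressed gradient sizes (hypothetical)
grad = rng.standard_normal(d)                         # stand-in for a client's local gradient
W = rng.standard_normal((k, d)) / np.sqrt(d)          # placeholder compression matrix
shared_update = compress_and_perturb(grad, W, sigma=0.01, rng=rng)
print(shared_update.shape)                            # (100,)

The point of the sketch is only that the server sees the compressed, noised update rather than the raw gradient, which is what limits deep leakage from gradients (DLG) style reconstruction.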

Original language: English
Title of host publication: 2023 IEEE International Conference on Communications Workshops: Sustainable Communications for Renaissance, ICC Workshops 2023
Publisher: IEEE
Pages: 843-848
Number of pages: 6
ISBN (Electronic): 9798350333077
ISBN (Print): 9798350333084
DOIs
Publication status: Published - 2023
Externally published: Yes
Event: 2023 IEEE International Conference on Communications Workshops, ICC Workshops 2023 - Rome, Italy
Duration: 28 May 2023 – 1 Jun 2023

Conference

Conference: 2023 IEEE International Conference on Communications Workshops, ICC Workshops 2023
Country/Territory: Italy
City: Rome
Period: 28/05/23 – 1/06/23

Bibliographical note

Publisher Copyright:
© 2023 IEEE.

Keywords

  • deep leakage from gradients (DLG)
  • Federated learning (FL)
  • resource block (RB) allocation
