PDSA-FL: A Poisoning-Defense Secure Aggregation in Federated Learning

  • Zixuan Huang
  • Yuanguo Bi
  • Kuan Zhang
  • Bing Hu
  • Zhou Su
  • Chong Tai
  • Xukun Luan

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Federated learning (FL) has become a promising technology for providing edge Artificial Intelligence (AI) due to its advantages in privacy protection and reduced communication costs. However, FL is still confronted with privacy leakage issues because the shared local models may expose information about the training data. Existing works typically utilize secure aggregation techniques to eliminate privacy leakage, where local model parameters in FL are obfuscated before they are sent to the aggregator. Nevertheless, secure aggregation makes poisoning attacks more convenient, given that existing anomaly detection methods mostly require access to plaintext local models. A Poisoning-Defense Secure Aggregation in FL (PDSA-FL) is proposed to enhance the privacy protection of honest clients and defend against poisoning attacks from malicious clients. First, a Secure Aggregation scheme based on Random Parameters Decomposition (SARPD) is designed to protect client privacy during the FL aggregation process and to eliminate the impact of dropped clients on the aggregation results. Second, a Poisoning Detection method based on Similarity Grouping (PDSG) is proposed to mitigate the impact of poisoning attacks on the global model of FL without leaking client model parameters. The security analysis discusses the effectiveness of the proposed PDSA-FL in terms of privacy protection. Extensive simulation results show that PDSA-FL can effectively defend against poisoning attacks, significantly improve the convergence performance of global models, and reduce the computation time of clients.
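To make the two ingredients named in the abstract concrete, below is a minimal illustrative sketch of the *general* techniques involved: pairwise-cancelling random masks for secure aggregation, and cosine-similarity grouping for filtering poisoned updates before aggregation. All function names, parameters, and thresholds here are assumptions for illustration; the paper's actual SARPD and PDSG constructions differ (notably, SARPD also tolerates client dropout, which this toy example does not handle).

```python
import math
import random

def pairwise_masks(n_clients, dim, seed=0):
    """Pairwise cancelling masks: for each pair (i, j) with i < j, a shared
    random vector is added by client i and subtracted by client j, so all
    masks vanish in the server-side sum (illustrative, not SARPD itself)."""
    rng = random.Random(seed)
    masks = [[0.0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            r = [rng.gauss(0, 1) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] += r[k]
                masks[j][k] -= r[k]
    return masks

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-12)

def detect_by_similarity(updates, threshold=0.0):
    """Keep clients whose update is, on average, directionally consistent
    with the other clients' updates (a toy stand-in for similarity grouping)."""
    n = len(updates)
    keep = []
    for i in range(n):
        sims = [cosine(updates[i], u) for j, u in enumerate(updates) if j != i]
        if sum(sims) / (n - 1) >= threshold:
            keep.append(i)
    return keep

rng = random.Random(1)
base = [rng.gauss(0, 1) for _ in range(8)]
honest = [[b + 0.1 * rng.gauss(0, 1) for b in base] for _ in range(4)]
poisoned = [-5.0 * b for b in base]           # sign-flipped, scaled update
updates = honest + [poisoned]

keep = detect_by_similarity(updates)          # the poisoner is filtered out
masks = pairwise_masks(len(keep), 8)
masked = [[u + m for u, m in zip(updates[i], mk)] for i, mk in zip(keep, masks)]
aggregate = [sum(col) for col in zip(*masked)]  # masks cancel in the sum
```

Note the tension the paper addresses: in the sketch above, detection runs on plaintext updates, whereas masking hides them. PDSA-FL's contribution is precisely performing the detection step without the server ever seeing the plaintext parameters.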

Original language: English
Pages (from-to): 7617-7632
Number of pages: 16
Journal: IEEE Transactions on Information Forensics and Security
Volume: 20
State: Published - 2025

Keywords

  • Federated learning
  • poisoning attack
  • privacy-preserving
  • secure aggregation

