TY - GEN
T1 - PARSIFAL
T2 - 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2025
AU - Lei, Runze
AU - Wang, Pinghui
AU - Zeng, Juxiang
AU - Wang, Chenxu
AU - Pei, Hongbin
AU - Zhao, Junzhou
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/8/3
Y1 - 2025/8/3
N2 - Federated learning (FL) is a popular collaborative training paradigm in which data owners offer gradients instead of private data to model owners for model training to protect data privacy. However, it faces security threats from two sides: dishonest model owners may extract sensitive information about private data from gradients; meanwhile, adversaries may pretend to be data owners and poison the model by sending malicious gradients. We propose a novel FL protocol, PARSIFAL, to address privacy leakage and model poisoning threats. A poisoning detection module is designed based on a novel sketch structure. This module efficiently detects potential malicious gradients that are dissimilar to the majority of benign gradients. PARSIFAL also contains a robust aggregation module based on sign gradients to mitigate the influence of poisoning gradients on aggregation results. Meanwhile, all processes of our PARSIFAL are protected by privacy protocols, mainly based on secret sharing, to guarantee that malicious detection and aggregation processes will not leak sensitive information. Experimental results show that PARSIFAL improves poisoning defense performance by up to 28% compared with recent baselines.
AB - Federated learning (FL) is a popular collaborative training paradigm in which data owners offer gradients instead of private data to model owners for model training to protect data privacy. However, it faces security threats from two sides: dishonest model owners may extract sensitive information about private data from gradients; meanwhile, adversaries may pretend to be data owners and poison the model by sending malicious gradients. We propose a novel FL protocol, PARSIFAL, to address privacy leakage and model poisoning threats. A poisoning detection module is designed based on a novel sketch structure. This module efficiently detects potential malicious gradients that are dissimilar to the majority of benign gradients. PARSIFAL also contains a robust aggregation module based on sign gradients to mitigate the influence of poisoning gradients on aggregation results. Meanwhile, all processes of our PARSIFAL are protected by privacy protocols, mainly based on secret sharing, to guarantee that malicious detection and aggregation processes will not leak sensitive information. Experimental results show that PARSIFAL improves poisoning defense performance by up to 28% compared with recent baselines.
KW - federated learning
KW - poisoning robustness
KW - privacy preservation
KW - secure multi-party computation
UR - https://www.scopus.com/pages/publications/105014587659
U2 - 10.1145/3711896.3737074
DO - 10.1145/3711896.3737074
M3 - Conference contribution
AN - SCOPUS:105014587659
T3 - Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
SP - 1296
EP - 1307
BT - KDD 2025 - Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining
PB - Association for Computing Machinery
Y2 - 3 August 2025 through 7 August 2025
ER -