TY - JOUR
T1 - AMA: Adaptive Model Poisoning Attacks Towards Federated Learning
T2 - IEEE Transactions on Dependable and Secure Computing
AU - Wu, Di
AU - Guo, Qi
AU - Qi, Yong
AU - Qi, Saiyu
AU - Li, Qian
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - Federated Learning (FL) is vulnerable to model poisoning attacks, in which malicious updates (e.g., gradients) adversely interfere with the global model. Existing attacks typically rely heavily on the updates of benign clients and on the aggregation algorithm to craft malicious updates. However, benign updates and aggregation algorithms are usually hard for attackers to access, which makes these attacks weak and volatile. In this work, we therefore design an adaptive model poisoning attack for an agnostic adversary. Specifically, we propose a new concept from the perspective of adversarial learning, called adversarial model perturbation. This perturbation targets the parameters of the local model and aims to maximally mislead its predictions. We then develop a novel adaptive model poisoning attack named Adversarial Model Attack (AMA), which uses the adversarial model perturbation as the malicious update to attack the global model. Instead of relying on benign updates or the aggregation algorithm, AMA leverages only the original data of the malicious client to adaptively craft malicious updates, resolving the conflict between the adversary's knowledge requirements and the impact of model poisoning attacks. Empirical results against multiple robust FL methods show that AMA surpasses state-of-the-art attack methods and sets a new benchmark for attack impact on FedAvg, Trimean, Multi-Krum, FoundationFL, RFA, and Median.
KW - Federated learning
KW - Model poisoning attack
KW - Robust aggregation
UR - https://www.scopus.com/pages/publications/105013060790
U2 - 10.1109/TDSC.2025.3594175
DO - 10.1109/TDSC.2025.3594175
M3 - Article
AN - SCOPUS:105013060790
SN - 1545-5971
VL - 22
SP - 7125
EP - 7138
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
IS - 6
ER -