
AMA: Adaptive Model Poisoning Attacks Towards Federated Learning

Di Wu, Qi Guo, Yong Qi, Saiyu Qi, Qian Li
Xi'an Jiaotong University

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Federated Learning (FL) is vulnerable to model poisoning attacks, in which malicious updates (e.g., gradients) can adversely interfere with the global model. Existing attacks typically rely heavily on the updates of benign clients and on the aggregation algorithm to craft malicious updates. However, benign updates and aggregation algorithms are usually hard for attackers to access, which makes such attacks weak and volatile. In this work, we therefore aim to design an adaptive model poisoning attack for an agnostic adversary. Specifically, we propose a new concept from the perspective of adversarial learning, called adversarial model perturbation. This perturbation targets the parameters of the local model and aims to maximally mislead its predictions. We then develop a novel adaptive model poisoning attack named Adversarial Model Attack (AMA), which uses the adversarial model perturbation as the malicious update to attack the global model. Instead of relying on benign updates or the aggregation algorithm, we leverage only the original data of the malicious client to adaptively craft the malicious updates. AMA resolves the conflict between the adversary's knowledge requirements and the impact of model poisoning attacks. Empirical results against multiple robust FL methods show that AMA surpasses state-of-the-art attack methods and sets a new benchmark for attack impact on FedAvg, Trimean, Multi-Krum, FoundationFL, RFA, and Median.
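
To make the abstract's core mechanism concrete, below is a minimal PyTorch sketch of an adversarial model perturbation: gradient ascent on the attacker's own local data over the model parameters, so the perturbed model maximally misleads predictions, with the resulting parameter delta reported as the malicious update. This is an illustrative reconstruction, not the paper's published algorithm; the function name and the `epsilon` bound, step count, and learning rate are all assumptions.

```python
# Illustrative sketch only (not the authors' code): craft a malicious FL
# update by perturbing local model parameters to maximize loss on the
# malicious client's own data, as the abstract describes.
import copy
import torch
import torch.nn as nn

def adversarial_model_perturbation(model, data_loader, epsilon=0.5,
                                   steps=5, lr=0.1):
    """Return a gradient-style malicious update (new_params - old_params)
    crafted via gradient ascent on the attacker's local data."""
    device = next(model.parameters()).device
    perturbed = copy.deepcopy(model)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(perturbed.parameters(), lr=lr)

    for _ in range(steps):
        for x, y in data_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(perturbed(x), y)
            # Negate the loss so SGD's descent step *ascends* the true loss,
            # pushing the parameters toward maximally wrong predictions.
            (-loss).backward()
            optimizer.step()

    # Assumed detail: clip the parameter-space perturbation to an
    # epsilon-ball around the original model so the update stays plausible.
    with torch.no_grad():
        for p_new, p_old in zip(perturbed.parameters(), model.parameters()):
            delta = torch.clamp(p_new - p_old, -epsilon, epsilon)
            p_new.copy_(p_old + delta)

    # The update a malicious client would submit to the server.
    return [(p_new - p_old).detach() for p_new, p_old
            in zip(perturbed.parameters(), model.parameters())]
```

Note how the sketch matches the abstract's knowledge assumptions: it touches only the current local model and the malicious client's own data, never benign clients' updates or the server's aggregation rule.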

Original language: English
Pages (from-to): 7125-7138
Number of pages: 14
Journal: IEEE Transactions on Dependable and Secure Computing
Volume: 22
Issue number: 6
DOI
Publication status: Published - 2025
