TY - GEN
T1 - NetGuard
T2 - 32nd ACM World Wide Web Conference, WWW 2023
AU - Gong, Xueluan
AU - Wang, Ziyao
AU - Chen, Yanjiao
AU - Wang, Qian
AU - Wang, Cong
AU - Shen, Chao
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/4/30
Y1 - 2023/4/30
N2 - Recently, more and more cloud service providers (e.g., Microsoft, Google, and Amazon) have commercialized their well-trained deep learning models by providing limited access via web API interfaces. However, it has been shown that these APIs are susceptible to model inversion attacks, where attackers can recover the training data with high fidelity, which may cause serious privacy leakage. Existing defenses against model inversion attacks, however, hinder the model performance and are ineffective against more advanced attacks, e.g., Mirror [4]. In this paper, we propose NetGuard, a novel utility-aware defense methodology against model inversion attacks (MIAs). Unlike previous works that perturb the prediction outputs of the victim model, we propose to mislead the MIA effort by inserting engineered fake samples during the training process. A generative adversarial network (GAN) is carefully built to construct fake training samples that mislead the attack model without degrading the performance of the victim model. In addition, we adopt continual learning to further improve the utility of the victim model. Extensive experiments on the CelebA, VGG-Face, and VGG-Face2 datasets show that NetGuard is superior to existing defenses, including DP [37] and Ad-mi [32], against state-of-the-art model inversion attacks, i.e., DMI [8], Mirror [4], Privacy [12], and Alignment [34].
AB - Recently, more and more cloud service providers (e.g., Microsoft, Google, and Amazon) have commercialized their well-trained deep learning models by providing limited access via web API interfaces. However, it has been shown that these APIs are susceptible to model inversion attacks, where attackers can recover the training data with high fidelity, which may cause serious privacy leakage. Existing defenses against model inversion attacks, however, hinder the model performance and are ineffective against more advanced attacks, e.g., Mirror [4]. In this paper, we propose NetGuard, a novel utility-aware defense methodology against model inversion attacks (MIAs). Unlike previous works that perturb the prediction outputs of the victim model, we propose to mislead the MIA effort by inserting engineered fake samples during the training process. A generative adversarial network (GAN) is carefully built to construct fake training samples that mislead the attack model without degrading the performance of the victim model. In addition, we adopt continual learning to further improve the utility of the victim model. Extensive experiments on the CelebA, VGG-Face, and VGG-Face2 datasets show that NetGuard is superior to existing defenses, including DP [37] and Ad-mi [32], against state-of-the-art model inversion attacks, i.e., DMI [8], Mirror [4], Privacy [12], and Alignment [34].
KW - Model inversion attacks
KW - Privacy-utility defense framework
KW - Secure web service
UR - https://www.scopus.com/pages/publications/85159267297
U2 - 10.1145/3543507.3583224
DO - 10.1145/3543507.3583224
M3 - Conference contribution
AN - SCOPUS:85159267297
T3 - ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023
SP - 2045
EP - 2053
BT - ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023
PB - Association for Computing Machinery, Inc
Y2 - 30 April 2023 through 4 May 2023
ER -