Rethinking Adversarial Examples Exploiting Frequency-Based Analysis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Deep neural networks (DNNs) have recently been found vulnerable to adversarial examples. Several previous works attempt to relate the low-frequency or high-frequency parts of adversarial inputs to the robustness of models. However, these studies lack comprehensive experiments and thorough analyses, and even yield contradictory results. This work comprehensively explores the connection between the robustness of models and the properties of adversarial perturbations in the frequency domain, using six classic attack methods and three representative datasets. We visualize the distribution of successful adversarial perturbations using the Discrete Fourier Transform and, through a proposed quantitative analysis, test how effectively different frequency bands of perturbations reduce the accuracy of classifiers. Experimental results show that the characteristics of successful adversarial perturbations in the frequency domain can vary from dataset to dataset, while their intensities are greater in the effective frequency bands. We analyze the observed phenomena by combining the principles of the attacks with the properties of the datasets, and offer a complete view of adversarial examples from the frequency-domain perspective, which helps explain the contradictory parts of previous works and provides insights for future research.
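The frequency-band analysis described in the abstract can be illustrated with a short sketch. This is not the authors' code: the function names, the centred-DFT circular mask, and the band radius are all assumptions chosen for illustration. It splits a 2-D perturbation into low- and high-frequency components via the Discrete Fourier Transform and measures the spectral intensity inside a band, mirroring the kind of per-band evaluation the paper proposes.

```python
import numpy as np

def band_split(perturbation, radius):
    """Split a 2-D perturbation into low- and high-frequency parts using
    a circular mask of the given radius in the centred DFT spectrum."""
    h, w = perturbation.shape
    # Centred 2-D DFT: low frequencies end up in the middle of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(perturbation))
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius
    # Inverse-transform each masked spectrum back to the spatial domain.
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(spectrum * ~low_mask)).real
    return low, high

def band_intensity(perturbation, radius):
    """Mean spectral magnitude inside the low-frequency band (a simple
    stand-in for the paper's intensity measurements)."""
    h, w = perturbation.shape
    spectrum = np.fft.fftshift(np.fft.fft2(perturbation))
    yy, xx = np.ogrid[:h, :w]
    mask = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= radius
    return np.abs(spectrum[mask]).mean()
```

Because the two masks partition the spectrum, the low- and high-frequency components sum back to the original perturbation; feeding an image plus only one component to a classifier is then one way to test which band actually drives misclassification.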

Original language: English
Title of host publication: Information and Communications Security - 23rd International Conference, ICICS 2021, Proceedings
Editors: Debin Gao, Qi Li, Xiaohong Guan, Xiaofeng Liao
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 73-89
Number of pages: 17
ISBN (Print): 9783030880514
DOIs
State: Published - 2021
Event: 23rd International Conference on Information and Communications Security, ICICS 2021 - Chongqing, China
Duration: 19 Nov 2021 – 21 Nov 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12919 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 23rd International Conference on Information and Communications Security, ICICS 2021
Country/Territory: China
City: Chongqing
Period: 19/11/21 – 21/11/21

Keywords

  • Adversarial examples
  • Frequency analysis
  • Model robustness
