Target attack on biomedical image segmentation model based on multi-scale gradients

  • Mingwen Shao
  • Gaozhi Zhang
  • Wangmeng Zuo
  • Deyu Meng

Research output: Contribution to journal › Article › peer-review

25 Scopus citations

Abstract

Research shows that deep neural networks (DNNs) are vulnerable to adversarial examples, owing in part to their highly linear nature. Adversarial examples therefore bear directly on the security of deep learning. However, little work has examined their impact on biomedical segmentation models, even though a large share of medical imaging problems are segmentation problems. This paper analyzes the impact of adversarial examples on deep-learning-based image segmentation models. We propose to fool a biomedical segmentation model into producing a target segmentation mask using feature-space perturbations and a cross-entropy loss function. Unlike traditional gradient-based attacks, which typically use only the gradient of the final loss function, this paper adopts a Multi-scale Attack (MSA) based on multi-scale gradients. Extensive experiments attacking U-Net on the ISIC skin lesion segmentation challenge dataset and a glaucoma optic disc segmentation dataset show that the predicted mask generated by this method achieves high intersection over union (IoU) and pixel accuracy with respect to the target mask. Moreover, the L2 and L∞ distances between adversarial and clean examples are reduced compared with the state-of-the-art method.
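The attack described in the abstract can be illustrated with a hedged sketch (not the authors' code): a targeted attack that sums cross-entropy losses over segmentation logits produced at several scales, then takes signed-gradient steps bounded in the L∞ norm. The toy `TinySegNet` model, the hyperparameters, and the single summed multi-scale loss are all assumptions standing in for the paper's U-Net setup and MSA details.

```python
# Hedged sketch of a multi-scale targeted attack on a segmentation net.
# TinySegNet is a hypothetical stand-in for U-Net that exposes logits at
# two scales; the real MSA method and its loss weighting may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy segmentation model returning logits at two resolutions."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = nn.Conv2d(1, 8, 3, padding=1)
        self.head_full = nn.Conv2d(8, n_classes, 1)  # full-resolution head
        self.head_half = nn.Conv2d(8, n_classes, 1)  # half-resolution head

    def forward(self, x):
        h = F.relu(self.enc(x))
        return [self.head_full(h), self.head_half(F.avg_pool2d(h, 2))]

def multi_scale_attack(model, x, target_mask, eps=0.05, alpha=0.01, steps=20):
    """Perturb x so the predicted mask approaches target_mask, using
    cross-entropy summed over all output scales (multi-scale gradients).
    The perturbation is projected onto an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        losses = []
        for logits in model(x_adv):
            # Resize the target mask to match this scale's logits.
            tgt = F.interpolate(target_mask.float().unsqueeze(1),
                                size=logits.shape[-2:], mode="nearest")
            losses.append(F.cross_entropy(logits, tgt.squeeze(1).long()))
        grad, = torch.autograd.grad(sum(losses), x_adv)
        # Descend the loss so the prediction moves toward the target mask.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
    return x_adv.detach()

# Usage: push a random image toward an all-ones (all-foreground) target mask.
torch.manual_seed(0)
model = TinySegNet().eval()
x = torch.rand(1, 1, 16, 16)
target = torch.ones(1, 16, 16, dtype=torch.long)
x_adv = multi_scale_attack(model, x, target)
```

Summing the per-scale losses before the gradient step is what distinguishes this from an attack driven only by the final-layer loss: gradients from coarse scales shape large regions of the mask while the full-resolution term refines boundaries.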

Original language: English
Pages (from-to): 33-46
Number of pages: 14
Journal: Information Sciences
Volume: 554
DOIs
State: Published - Apr 2021
Externally published: Yes

Keywords

  • Adversarial example
  • Biomedical image segmentation
  • Deep learning security
  • Multi-scale gradients
  • Target attack
