TY - JOUR
T1 - Target attack on biomedical image segmentation model based on multi-scale gradients
AU - Shao, Mingwen
AU - Zhang, Gaozhi
AU - Zuo, Wangmeng
AU - Meng, Deyu
N1 - Publisher Copyright:
© 2020 Elsevier Inc.
PY - 2021/4
Y1 - 2021/4
N2 - Research shows that deep neural networks (DNNs) are vulnerable to adversarial examples due to their highly linear nature. Adversarial examples therefore pose a security concern for deep learning. However, there has been little research on the impact of adversarial examples on biomedical segmentation models. Since a large proportion of medical image problems are segmentation problems, this paper analyzes the impact of adversarial examples on deep-learning-based image segmentation models. We propose to fool a biomedical segmentation model into generating target segmentation masks using feature-space perturbations and a cross-entropy loss function. Unlike traditional gradient-based attack methods, which usually use only the gradient of the final loss function, this paper adopts a Multi-scale Attack (MSA) method based on multi-scale gradients. Extensive experiments attacking U-Net on the ISIC skin lesion segmentation challenge dataset and a glaucoma optic disc segmentation dataset show that the predicted mask generated by this method achieves high intersection over union (IoU) and pixel accuracy with respect to the target mask. Moreover, the L2 and L∞ distances between the adversarial and clean examples are reduced compared with the state-of-the-art method.
AB - Research shows that deep neural networks (DNNs) are vulnerable to adversarial examples due to their highly linear nature. Adversarial examples therefore pose a security concern for deep learning. However, there has been little research on the impact of adversarial examples on biomedical segmentation models. Since a large proportion of medical image problems are segmentation problems, this paper analyzes the impact of adversarial examples on deep-learning-based image segmentation models. We propose to fool a biomedical segmentation model into generating target segmentation masks using feature-space perturbations and a cross-entropy loss function. Unlike traditional gradient-based attack methods, which usually use only the gradient of the final loss function, this paper adopts a Multi-scale Attack (MSA) method based on multi-scale gradients. Extensive experiments attacking U-Net on the ISIC skin lesion segmentation challenge dataset and a glaucoma optic disc segmentation dataset show that the predicted mask generated by this method achieves high intersection over union (IoU) and pixel accuracy with respect to the target mask. Moreover, the L2 and L∞ distances between the adversarial and clean examples are reduced compared with the state-of-the-art method.
KW - Adversarial example
KW - Biomedical image segmentation
KW - Deep learning security
KW - Multi-scale gradients
KW - Target attack
UR - https://www.scopus.com/pages/publications/85098734517
U2 - 10.1016/j.ins.2020.12.013
DO - 10.1016/j.ins.2020.12.013
M3 - Article
AN - SCOPUS:85098734517
SN - 0020-0255
VL - 554
SP - 33
EP - 46
JO - Information Sciences
JF - Information Sciences
ER -