TY - JOUR
T1 - DEAttack
T2 - A differential evolution based attack method for the robustness evaluation of medical image segmentation
AU - Cui, Xiangxiang
AU - Chang, Shi
AU - Li, Chen
AU - Kong, Bin
AU - Tian, Lihua
AU - Wang, Hongqiang
AU - Huang, Peng
AU - Yang, Meng
AU - Wu, Yenan
AU - Li, Zhongyu
N1 - Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/11/20
Y1 - 2021/11/20
N2 - Deep learning is an effective tool to assist doctors with many time-consuming and error-prone medical image analysis tasks. However, deep models have been shown to be vulnerable to adversarial attacks, posing significant challenges to clinical applications. Existing works on the robustness of deep learning models are scarce, and most of them focus on attacking medical image classification models. In this paper, a differential evolution attack (DEAttack) method is proposed to generate adversarial examples for medical image segmentation models. Compared with the most widely investigated gradient-based attack methods, our method does not require extra information such as the network's structure and weights. Additionally, benefiting from the embedded differential evolution algorithm, which preserves the diversity of the optimization space, the proposed method achieves better results than gradient-based methods: it can successfully attack the segmentation model by perturbing only a small fraction of the image pixels, demonstrating that medical image segmentation models are susceptible to adversarial examples. In addition to evaluating model robustness on public datasets, our DEAttack method was also tested on a clinical diagnostic dataset, demonstrating its superior performance and streamlined workflow for the robustness evaluation of deep models in medical image segmentation.
AB - Deep learning is an effective tool to assist doctors with many time-consuming and error-prone medical image analysis tasks. However, deep models have been shown to be vulnerable to adversarial attacks, posing significant challenges to clinical applications. Existing works on the robustness of deep learning models are scarce, and most of them focus on attacking medical image classification models. In this paper, a differential evolution attack (DEAttack) method is proposed to generate adversarial examples for medical image segmentation models. Compared with the most widely investigated gradient-based attack methods, our method does not require extra information such as the network's structure and weights. Additionally, benefiting from the embedded differential evolution algorithm, which preserves the diversity of the optimization space, the proposed method achieves better results than gradient-based methods: it can successfully attack the segmentation model by perturbing only a small fraction of the image pixels, demonstrating that medical image segmentation models are susceptible to adversarial examples. In addition to evaluating model robustness on public datasets, our DEAttack method was also tested on a clinical diagnostic dataset, demonstrating its superior performance and streamlined workflow for the robustness evaluation of deep models in medical image segmentation.
KW - Adversarial attack
KW - Differential evolution algorithm
KW - Medical image segmentation
KW - Robustness evaluation
UR - https://www.scopus.com/pages/publications/85114784992
U2 - 10.1016/j.neucom.2021.08.118
DO - 10.1016/j.neucom.2021.08.118
M3 - Article
AN - SCOPUS:85114784992
SN - 0925-2312
VL - 465
SP - 38
EP - 52
JO - Neurocomputing
JF - Neurocomputing
ER -