
DEAttack: A differential evolution based attack method for the robustness evaluation of medical image segmentation

  • Xiangxiang Cui
  • Shi Chang
  • Chen Li
  • Bin Kong
  • Lihua Tian
  • Hongqiang Wang
  • Peng Huang
  • Meng Yang
  • Yenan Wu
  • Zhongyu Li

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

Deep learning is an effective tool for assisting doctors with many time-consuming and error-prone medical image analysis tasks. However, deep models have been shown to be vulnerable to adversarial attacks, posing significant challenges to clinical applications. Existing works on the robustness of deep learning models are scarce, and most of them focus on attacking medical image classification models. In this paper, a differential evolution attack (DEAttack) method is proposed to generate adversarial examples for medical image segmentation models. Compared with the most widely investigated gradient-based attack methods, our method requires no extra information such as the network's structure and weights. Additionally, it benefits from the embedded differential evolution algorithm, which preserves the diversity of the optimization space. The proposed method achieves better results than gradient-based methods: it successfully attacks the segmentation model while perturbing only a small fraction of the image pixels, demonstrating that medical image segmentation models are more susceptible to adversarial examples. In addition to evaluating model robustness on public datasets, our DEAttack method was also tested on a clinical diagnostic dataset, demonstrating its superior performance and streamlined process for the robustness evaluation of deep models in medical image segmentation.
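The abstract describes a black-box, differential-evolution-based attack that perturbs only a few pixels and queries the segmentation model solely through its outputs. The paper's exact encoding, fitness function, and DE hyperparameters are not given here, so the following is only a minimal illustrative sketch of the general idea: candidates encode a handful of (row, col, value) pixel edits, fitness counts how many mask pixels the edit flips, and a standard DE/rand/1 loop with binomial crossover evolves the population. The `segment` callable stands in for any black-box segmentation model; all names and constants are assumptions, not the authors' implementation.

```python
import numpy as np

def de_attack(image, segment, n_pixels=3, pop_size=20, iters=50, seed=0):
    """Black-box few-pixel attack via differential evolution (illustrative).

    `segment` maps a 2-D float image to a boolean mask and is queried as a
    black box (no gradients or weights needed). The attack searches for
    `n_pixels` pixel edits that flip as many mask pixels as possible.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    clean_mask = segment(image)

    def apply(cand):
        # Each candidate is n_pixels triples: (row, col, new_value).
        adv = image.copy()
        for x, y, v in cand.reshape(n_pixels, 3):
            adv[int(x) % h, int(y) % w] = np.clip(v, 0.0, 1.0)
        return adv

    def fitness(cand):
        # Higher is better: number of mask pixels that disagree with clean.
        return np.sum(segment(apply(cand)) != clean_mask)

    # Initialize population: positions uniform over the image, values in [0, 1].
    pop = rng.uniform(0.0, 1.0, (pop_size, n_pixels * 3))
    pop[:, 0::3] *= h
    pop[:, 1::3] *= w
    scores = np.array([fitness(c) for c in pop])

    for _ in range(iters):
        for i in range(pop_size):
            # DE/rand/1 mutation from three distinct population members.
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = a + 0.5 * (b - c)
            # Binomial crossover with the current candidate.
            cross = rng.random(trial.shape) < 0.7
            trial = np.where(cross, trial, pop[i])
            s = fitness(trial)
            if s >= scores[i]:  # greedy selection keeps the better candidate
                pop[i], scores[i] = trial, s

    best = pop[np.argmax(scores)]
    return apply(best), int(scores.max())
```

For example, against a toy threshold segmenter (`image > 0.5`) on a uniform image, the attack finds a handful of pixel edits that flip part of the predicted mask while leaving all other pixels untouched, which is the "small fraction of pixels" property the abstract highlights.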

Original language: English
Pages (from-to): 38-52
Number of pages: 15
Journal: Neurocomputing
Volume: 465
DOI
Publication status: Published - 20 Nov 2021

