TY - JOUR
T1 - InDuDoNet+
T2 - A deep unfolding dual domain network for metal artifact reduction in CT images
AU - Wang, Hong
AU - Li, Yuexiang
AU - Zhang, Haimiao
AU - Meng, Deyu
AU - Zheng, Yefeng
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2023/4
Y1 - 2023/4
N2 - During the computed tomography (CT) imaging process, metallic implants within patients often cause harmful artifacts, which degrade the visual quality of reconstructed CT images and negatively affect subsequent clinical diagnosis. For the metal artifact reduction (MAR) task, current deep learning based methods have achieved promising performance. However, most of them share two common limitations: (1) the CT physical imaging geometry constraint is not comprehensively incorporated into the deep network structure; (2) the entire framework has weak interpretability for the specific MAR task, making the role of each network module difficult to evaluate. To alleviate these issues, in this paper, we construct a novel deep unfolding dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded. Concretely, we derive a joint spatial and Radon domain reconstruction model and propose an optimization algorithm with only simple operators for solving it. By unfolding the iterative steps of the proposed algorithm into corresponding network modules, we easily build InDuDoNet+ with clear interpretability. Furthermore, we analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance. Comprehensive experiments on synthesized data and clinical data substantiate the superiority of the proposed method as well as its superior generalization performance beyond current state-of-the-art (SOTA) MAR methods. Code is available at https://github.com/hongwang01/InDuDoNet_plus.
AB - During the computed tomography (CT) imaging process, metallic implants within patients often cause harmful artifacts, which degrade the visual quality of reconstructed CT images and negatively affect subsequent clinical diagnosis. For the metal artifact reduction (MAR) task, current deep learning based methods have achieved promising performance. However, most of them share two common limitations: (1) the CT physical imaging geometry constraint is not comprehensively incorporated into the deep network structure; (2) the entire framework has weak interpretability for the specific MAR task, making the role of each network module difficult to evaluate. To alleviate these issues, in this paper, we construct a novel deep unfolding dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded. Concretely, we derive a joint spatial and Radon domain reconstruction model and propose an optimization algorithm with only simple operators for solving it. By unfolding the iterative steps of the proposed algorithm into corresponding network modules, we easily build InDuDoNet+ with clear interpretability. Furthermore, we analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance. Comprehensive experiments on synthesized data and clinical data substantiate the superiority of the proposed method as well as its superior generalization performance beyond current state-of-the-art (SOTA) MAR methods. Code is available at https://github.com/hongwang01/InDuDoNet_plus.
KW - CT imaging geometry
KW - Generalization ability
KW - Metal artifact reduction
KW - Physical interpretability
UR - https://www.scopus.com/pages/publications/85145971401
U2 - 10.1016/j.media.2022.102729
DO - 10.1016/j.media.2022.102729
M3 - Article
C2 - 36623381
AN - SCOPUS:85145971401
SN - 1361-8415
VL - 85
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 102729
ER -