TY - GEN
T1 - NeuroExplainer
T2 - 26th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2023
AU - Xue, Chenyu
AU - Wang, Fan
AU - Zhu, Yuanzhuo
AU - Li, Hui
AU - Meng, Deyu
AU - Shen, Dinggang
AU - Lian, Chunfeng
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
PY - 2023
Y1 - 2023
N2 - Beyond model accuracy, current neuroimaging studies require more explainable model outputs in order to relate brain development, degeneration, or disorders to atypical local alterations. For this purpose, existing approaches typically explicate network outputs in a post-hoc fashion. However, for neuroimaging data with high-dimensional and redundant information, end-to-end learning of explanation factors can in turn ensure fine-grained explainability while boosting model accuracy. Meanwhile, most methods handle only gridded data and do not support brain cortical surface-based analysis. In this paper, we propose an explainable geometric deep network, the NeuroExplainer, with applications to uncovering altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, our NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attention maps and the respective discriminative representations in a spherical space, accurately distinguishing preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision coupled with targeted regularizers derived from domain knowledge of brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer yields quantitatively reliable explanations that are qualitatively consistent with representative neuroimaging studies. The source code will be released at https://github.com/ladderlab-xjtu/NeuroExplainer.
AB - Beyond model accuracy, current neuroimaging studies require more explainable model outputs in order to relate brain development, degeneration, or disorders to atypical local alterations. For this purpose, existing approaches typically explicate network outputs in a post-hoc fashion. However, for neuroimaging data with high-dimensional and redundant information, end-to-end learning of explanation factors can in turn ensure fine-grained explainability while boosting model accuracy. Meanwhile, most methods handle only gridded data and do not support brain cortical surface-based analysis. In this paper, we propose an explainable geometric deep network, the NeuroExplainer, with applications to uncovering altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, our NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attention maps and the respective discriminative representations in a spherical space, accurately distinguishing preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision coupled with targeted regularizers derived from domain knowledge of brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer yields quantitatively reliable explanations that are qualitatively consistent with representative neuroimaging studies. The source code will be released at https://github.com/ladderlab-xjtu/NeuroExplainer.
UR - https://www.scopus.com/pages/publications/85174701782
U2 - 10.1007/978-3-031-43895-0_19
DO - 10.1007/978-3-031-43895-0_19
M3 - Conference contribution
AN - SCOPUS:85174701782
SN - 9783031438943
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 202
EP - 211
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 - 26th International Conference, Proceedings
A2 - Greenspan, Hayit
A2 - Madabhushi, Anant
A2 - Mousavi, Parvin
A2 - Salcudean, Septimiu
A2 - Duncan, James
A2 - Syeda-Mahmood, Tanveer
A2 - Taylor, Russell
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 8 October 2023 through 12 October 2023
ER -