TY - JOUR
T1 - MetaExplainer: Revisit domain generalization of functional connectome analyses from the perspective of explainability
T2 - Medical Image Analysis
AU - Qiu, Xinmei
AU - Sun, Yongheng
AU - Shi, Yilin
AU - Duan, Xujun
AU - Wang, Fan
AU - Ma, Jianhua
N1 - Publisher Copyright:
© 2025
PY - 2025/10
Y1 - 2025/10
N2 - Graph neural networks (GNNs) are at the forefront of learning-based functional connectome analyses and neuropsychiatric disorder diagnoses with fMRI data. The reliability of deployed GNNs largely depends on their explainability and generalizability, given the complexity of functional brain networks and the heterogeneity of fMRI data acquisitions. While developing explainable or generalizable GNNs has drawn great attention, most existing studies address these two critical topics independently. However, explainability and generalizability are inherently linked from an application standpoint: stably capturing the subtle alterations tied to the prodromal stage of a particular disease can be a precondition for cross-domain generalizable diagnosis. In this paper, we aim to bridge this gap by revisiting the domain generalization (DG) of fMRI-based diagnoses from the perspective of explainability. Specifically, we propose MetaExplainer, a dedicated meta-learning framework powered by explanatory-generalizability regularizations, for building GNNs on fMRI data that deliver accurate and explainable neuropsychiatric disorder diagnoses across varying clinical centers. Through a dual-loop meta-learning process, MetaExplainer learns task-oriented nonlinear functional networks from fMRI BOLD signals that capture domain-agnostic, disease-related connectome alterations. These explanation factors can be stably maintained across different centers to enhance domain-generalizable discriminative representation learning and disease diagnosis. Comprehensive experiments on two representative multi-center datasets (i.e., ABIDE and REST-meta-MDD) demonstrate that MetaExplainer achieves state-of-the-art performance (AUC: 76.32% for ASD, 65.31% for MDD) while revealing neurobiologically plausible biomarkers supported by existing studies (e.g., SMN dysfunction in MDD). We commit to releasing the complete source code and implementation details upon publication to ensure full reproducibility.
AB - Graph neural networks (GNNs) are at the forefront of learning-based functional connectome analyses and neuropsychiatric disorder diagnoses with fMRI data. The reliability of deployed GNNs largely depends on their explainability and generalizability, given the complexity of functional brain networks and the heterogeneity of fMRI data acquisitions. While developing explainable or generalizable GNNs has drawn great attention, most existing studies address these two critical topics independently. However, explainability and generalizability are inherently linked from an application standpoint: stably capturing the subtle alterations tied to the prodromal stage of a particular disease can be a precondition for cross-domain generalizable diagnosis. In this paper, we aim to bridge this gap by revisiting the domain generalization (DG) of fMRI-based diagnoses from the perspective of explainability. Specifically, we propose MetaExplainer, a dedicated meta-learning framework powered by explanatory-generalizability regularizations, for building GNNs on fMRI data that deliver accurate and explainable neuropsychiatric disorder diagnoses across varying clinical centers. Through a dual-loop meta-learning process, MetaExplainer learns task-oriented nonlinear functional networks from fMRI BOLD signals that capture domain-agnostic, disease-related connectome alterations. These explanation factors can be stably maintained across different centers to enhance domain-generalizable discriminative representation learning and disease diagnosis. Comprehensive experiments on two representative multi-center datasets (i.e., ABIDE and REST-meta-MDD) demonstrate that MetaExplainer achieves state-of-the-art performance (AUC: 76.32% for ASD, 65.31% for MDD) while revealing neurobiologically plausible biomarkers supported by existing studies (e.g., SMN dysfunction in MDD). We commit to releasing the complete source code and implementation details upon publication to ensure full reproducibility.
KW - Domain generalization
KW - Explainability
KW - Graph neural networks
KW - Meta-learning
KW - fMRI
UR - https://www.scopus.com/pages/publications/105008721823
U2 - 10.1016/j.media.2025.103664
DO - 10.1016/j.media.2025.103664
M3 - Article
C2 - 40561670
AN - SCOPUS:105008721823
SN - 1361-8415
VL - 105
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 103664
ER -