MetaExplainer: Revisit domain generalization of functional connectome analyses from the perspective of explainability

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Graph neural networks (GNNs) are at the forefront of learning-based functional connectome analysis and neuropsychiatric disorder diagnosis with fMRI data. The reliability of deployed GNNs largely depends on their explainability and generalizability, owing to the complexity of functional brain networks and the heterogeneity of fMRI data acquisitions. While developing explainable or generalizable GNNs has drawn great attention, most existing studies address these two critical topics independently. However, explainability and generalizability are inherently associated from an application standpoint: stably capturing the subtle alterations tied to the prodromal stage of a particular disease is a precondition for cross-domain generalizable diagnosis. In this paper, we aim to bridge this gap by revisiting the domain generalization (DG) of fMRI-based diagnosis from the perspective of explainability. Specifically, we propose MetaExplainer, a dedicated meta-learning framework equipped with explanatory-generalizability regularizations, for building GNNs that deliver accurate and explainable neuropsychiatric disorder diagnoses across clinical centers. Through a dual-loop meta-learning process, MetaExplainer learns task-oriented nonlinear functional networks from fMRI BOLD signals that capture domain-agnostic, disease-related connectome alterations. These explanation factors can be stably maintained across centers to enhance domain-generalizable discriminative representation learning and disease diagnosis. Comprehensive experiments on two representative multi-center datasets (i.e., ABIDE and REST-meta-MDD) demonstrate that MetaExplainer achieves state-of-the-art performance (AUC: 76.32% for ASD, 65.31% for MDD) while revealing neurobiologically plausible biomarkers supported by existing studies (e.g., SMN dysfunction in MDD).
We commit to releasing the complete source code and implementation details upon publication to ensure full reproducibility.
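The dual-loop meta-learning process described above can be illustrated with a minimal, first-order MAML-style sketch: each episode holds one clinical center out as a meta-test domain, the inner loop adapts model parameters on the remaining (meta-train) centers, and the outer loop updates the shared parameters using the held-out center's loss. This is a generic toy illustration of episodic domain-generalization training, not the authors' released implementation; the linear model, synthetic "center" data, and all function names (`make_domain`, `meta_step`) are hypothetical stand-ins for the paper's GNN and fMRI inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-center data: each "center" is a domain sharing one underlying
# linear relationship, with center-specific noise (stand-in for fMRI features).
def make_domain(n=64, d=8, noise=0.1):
    X = rng.normal(size=(n, d))
    y = X @ np.ones(d) + noise * rng.normal(size=n)
    return X, y

domains = [make_domain(noise=0.05 * (i + 1)) for i in range(4)]

def loss_and_grad(w, X, y):
    # Mean-squared-error loss and its gradient for a linear model.
    r = X @ w - y
    return float(r @ r / len(y)), 2.0 * X.T @ r / len(y)

def meta_step(w, meta_train, meta_test, inner_lr=0.05, outer_lr=0.05):
    """One dual-loop episode (first-order MAML-style).

    Inner loop: adapt a copy of w on the meta-train domains.
    Outer loop: update w with the gradient of the meta-test loss
    evaluated at the adapted parameters.
    """
    w_adapted = w.copy()
    for X, y in meta_train:
        _, g = loss_and_grad(w_adapted, X, y)
        w_adapted -= inner_lr * g                    # inner-loop adaptation
    grads = [loss_and_grad(w_adapted, X, y)[1] for X, y in meta_test]
    return w - outer_lr * np.mean(grads, axis=0)     # outer-loop update

w = np.zeros(8)
for episode in range(200):
    # Simulate domain shift: hold one center out as the meta-test domain.
    held_out = episode % len(domains)
    meta_test = [domains[held_out]]
    meta_train = [d for i, d in enumerate(domains) if i != held_out]
    w = meta_step(w, meta_train, meta_test)

# After training, the shared parameters fit every center well.
losses = [loss_and_grad(w, X, y)[0] for X, y in domains]
```

In MetaExplainer the adapted quantity is a GNN with explanation-oriented regularizers rather than a linear regressor, but the episodic structure (simulated domain shift inside each training step) is the same mechanism that encourages cross-center generalization.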

Original language: English
Article number: 103664
Journal: Medical Image Analysis
Volume: 105
DOIs
State: Published - Oct 2025

Keywords

  • Domain generalization
  • Explainability
  • Graph neural networks
  • Meta-learning
  • fMRI

