Game Attribution-Based Causal Learning Interpretable Networks for Intelligent Fault Diagnosis

  • Junwei Gu
  • Yu Wang
  • Mingquan Zhang
  • Cheng Zhu
  • Ruijie Hu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In recent years, deep learning-based intelligent fault diagnosis methods have been widely applied and developed for the diagnosis and health management of mechanical systems. However, the black-box nature of these intelligent diagnostic models has significantly hindered their adoption in risk-sensitive industrial fields. Few studies seek to ensure that models deliver strong diagnostic performance while remaining interpretable. Model interpretability is crucial: it helps engineers identify the root causes of faults and enhances the reliability of diagnostic systems, making it an indispensable component of practical industrial fault diagnosis models. To address these challenges, this paper proposes a Game Attribution-based Causal Learning Network (GA-CLN), which relies on causal relationships to establish interpretable intelligent diagnostic models that comply with physical laws. By employing game-theoretic attribution, GA-CLN learns causal fault features, mitigating the impact of irrelevant noise factors on diagnostic conclusions. This process preserves the causal features that remain invariant under physical laws and mathematical logic, thereby enabling generalized, interpretable diagnosis. The effectiveness of the method is demonstrated through experiments on diagnostic tasks for rotating machinery under varying speeds and loads.
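The abstract does not give the GA-CLN architecture, but the game-theoretic attribution it refers to is commonly grounded in Shapley values: each feature's contribution is its average marginal effect on a model's score across all feature coalitions, which lets irrelevant (noise) features be identified by near-zero attributions. The sketch below is illustrative only, not the paper's method; the feature names, the additive toy score, and the weights are all hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all feature coalitions.

    value_fn(subset) returns the model's score when only the features
    in `subset` (a frozenset of indices) are active. Exact enumeration
    is exponential in n_features, so this is only viable for small n.
    """
    players = list(range(n_features))
    phi = [0.0] * n_features
    n_fact = factorial(n_features)
    for i in players:
        others = [j for j in players if j != i]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                # Shapley weight for a coalition of size |s|
                weight = factorial(len(s)) * factorial(n_features - len(s) - 1) / n_fact
                # Marginal contribution of feature i to this coalition
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi

# Hypothetical diagnostic score: feature 0 carries the fault signature,
# feature 1 is weakly informative, feature 2 is irrelevant noise.
weights = [0.8, 0.2, 0.0]

def score(active):
    return sum(weights[j] for j in active)

phi = shapley_values(score, 3)
# For an additive score like this one, each Shapley value recovers
# the feature's weight, so the noise feature attributes to zero.
```

In a diagnostic setting, features with attributions close to zero would be treated as non-causal and suppressed, which is one way to realize the noise-mitigation behavior the abstract describes.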

Original language: English
Title of host publication: Neural Computing for Advanced Applications - 6th International Conference, NCAA 2025, Proceedings
Editors: Haijun Zhang, Kim Fung Tsang, Fu Lee Wang, Kevin Hung, Tianyong Hao, Zenghui Wang, Zhou Wu, Zhao Zhang
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 205-216
Number of pages: 12
ISBN (Print): 9789819537389
DOIs
State: Published - 2025
Event: 6th International Conference on Neural Computing for Advanced Applications, NCAA 2025 - Hong Kong, China
Duration: 4 Jul 2025 – 6 Jul 2025

Publication series

Name: Communications in Computer and Information Science
Volume: 2665 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 6th International Conference on Neural Computing for Advanced Applications, NCAA 2025
Country/Territory: China
City: Hong Kong
Period: 4/07/25 – 6/07/25

Keywords

  • Causal Learning
  • Deep Learning
  • Intelligent Fault Diagnosis
  • Model Interpretability
