Semantics-Consistent Representation Learning for Industrial Fault Diagnosis in Unseen Domains

Research output: Contribution to journal › Article › peer-review

Abstract

Domain generalizable fault diagnosis (DGFD) aims to develop robust and adaptable models for reliable fault diagnosis in unseen domains. Although many studies have sought to enhance model robustness to data distribution shifts, most existing models prioritize domain invariance and overlook class discriminability, which is crucial for DGFD. Furthermore, DGFD tasks often face significant challenges from the compound effect of class imbalance and subpopulation shifts. This work therefore proposes a novel Semantics-Consistent Representation Learning (SCRL) framework that enhances data-driven modeling for class-imbalanced DGFD (IDGFD) by more effectively learning feature representations that are domain-invariant yet class-discriminative. To address class imbalance and prepare strategic data for robust model training, a multi-pipe interactive data processing scheme is designed to adaptively generate training samples. To improve the generalizability and discriminability of SCRL, modules for fault diagnosis, causal factorization, and affinity mining are jointly incorporated to address data distribution and subpopulation shifts. This integration enables the establishment of domain-invariant and class-discriminative decision boundaries for effective IDGFD. Extensive experiments on four datasets demonstrate the superiority of SCRL over existing models in achieving accurate and robust fault diagnosis across unseen working conditions and systems.

Original language: English
Article number: 0b000064947a6ec7
Journal: IEEE Internet of Things Journal
State: Accepted/In press - 2025

Keywords

  • Affinity Mining
  • Causal Factorization
  • Class Imbalance
  • Domain Generalization
  • Fault Diagnosis

