Graph contrastive learning with high-order feature interactions and adversarial Wasserstein-distance-based alignment

  • Chenxu Wang
  • Zhizhong Wan
  • Panpan Meng
  • Shihao Wang
  • Zhanggong Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Graph contrastive learning (GCL) has proven to be an effective approach for unsupervised representation learning on graph-structured data. However, existing GCL models face two major limitations. First, current feature augmentation methods fail to capture the high-order interactions among raw features, which are essential for feature engineering. Second, effective strategies for extracting global information from graphs for contrastive learning remain limited. To address these limitations, we propose a novel GCL model with high-order feature interactions and adversarial Wasserstein-distance-based alignment. Our model employs deep neural networks (DNNs) to capture complex interactions among raw features and introduces an alignment-based loss function that effectively extracts global graph information. Because traditional methods for computing the Wasserstein distance between graph views are computationally intensive, we instead train an adversarial Wasserstein-distance discriminator that enables efficient distance estimation. We conduct extensive experiments on five benchmark datasets to evaluate the proposed method. The experimental results demonstrate that our approach achieves superior performance on classification tasks.
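The abstract's two ingredients can be illustrated concretely. Below is a minimal PyTorch sketch, not the authors' implementation: a small DNN that mixes raw features so high-order interactions can emerge, and a WGAN-style critic that estimates the Wasserstein-1 distance between embeddings of two graph views via the Kantorovich-Rubinstein dual. The class and function names (`FeatureInteraction`, `WassersteinCritic`, `critic_loss`, `alignment_loss`) are hypothetical; the paper's exact architectures and training schedule may differ.

```python
import torch
import torch.nn as nn

class FeatureInteraction(nn.Module):
    """DNN over raw node features; stacked nonlinear layers let
    high-order feature interactions emerge (hypothetical design)."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class WassersteinCritic(nn.Module):
    """Critic f in the Kantorovich-Rubinstein dual:
    W1(p, q) ≈ max_{|f|_L<=1} E_p[f(z)] - E_q[f(z)]."""
    def __init__(self, dim, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, z):
        return self.net(z)

def critic_loss(critic, z1, z2):
    # The critic is trained to maximize E[f(z1)] - E[f(z2)],
    # i.e. to minimize its negation.
    return -(critic(z1).mean() - critic(z2).mean())

def alignment_loss(critic, z1, z2):
    # The encoder minimizes the critic's distance estimate,
    # aligning the two views' embedding distributions.
    return critic(z1).mean() - critic(z2).mean()

def clip_critic(critic, c=0.01):
    # Weight clipping (as in the original WGAN) is one simple way to
    # keep the critic approximately 1-Lipschitz; a gradient penalty
    # would also work.
    for p in critic.parameters():
        p.data.clamp_(-c, c)
```

In adversarial training of this kind, critic updates (with `critic_loss` plus `clip_critic`) alternate with encoder updates (with `alignment_loss`), so the distance estimate stays cheap: one forward pass through a small network instead of solving an optimal-transport problem per batch.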

Original language: English
Pages (from-to): 3449-3460
Number of pages: 12
Journal: International Journal of Machine Learning and Cybernetics
Volume: 16
Issue number: 5
DOIs
State: Published - Jun 2025

Keywords

  • Adversarial Wasserstein-distance-based alignment
  • Feature interaction
  • Graph contrastive learning
