TY - GEN
T1 - AFANS
T2 - 20th International Conference on Intelligent Computing, ICIC 2024
AU - Wang, Shihao
AU - Wang, Chenxu
AU - Meng, Panpan
AU - Wang, Zhanggong
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
AB - Graph Contrastive Learning (GCL) has emerged as a highly promising methodology in graph representation learning, mainly due to its label-independent nature. The construction of positive and negative samples is crucial for the effectiveness of GCL. Yet, the accurate identification of genuine positive and negative samples poses a formidable challenge. Traditional methods employ data augmentation for constructing positive samples, which confronts two significant impediments: i) the potential distortion of semantic integrity and ii) the difficulty in devising augmentation strategies universally applicable across diverse datasets. In the realm of negative sample generation, reliance on in-batch negative samples or a memory bank may engender “easy” or “false” negative samples, detrimentally impacting model performance. To address these issues, we propose to use Exponential Moving Average (EMA) instead of data augmentation to construct effective positive samples. Additionally, we introduce an adversarial generator to fabricate more challenging negative samples, which are subsequently amalgamated with in-batch negative samples. Furthermore, we implement two constrained loss functions aimed at reducing redundancy amongst negative samples, enriching the model with more salient information. The effectiveness of our proposed method is validated through an unsupervised graph classification task on five real-world datasets. Empirical results substantiate that our approach surpasses current state-of-the-art methodologies.
KW - Adversarial Generation
KW - Data Augmentation
KW - Graph Contrastive Learning
UR - https://www.scopus.com/pages/publications/85201080855
U2 - 10.1007/978-981-97-5615-5_30
DO - 10.1007/978-981-97-5615-5_30
M3 - Conference contribution
AN - SCOPUS:85201080855
SN - 9789819756148
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 376
EP - 387
BT - Advanced Intelligent Computing Technology and Applications - 20th International Conference, ICIC 2024, Proceedings
A2 - Huang, De-Shuang
A2 - Pan, Yijie
A2 - Guo, Jiayang
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 5 August 2024 through 8 August 2024
ER -