TY - JOUR
T1 - Multi-view representation learning with dual-label collaborative guidance
AU - Chen, Bin
AU - Ren, Xiaojin
AU - Bai, Shunshun
AU - Chen, Ziyuan
AU - Zheng, Qinghai
AU - Zhu, Jihua
N1 - Publisher Copyright:
© 2024 Elsevier B.V.
PY - 2024/12/3
Y1 - 2024/12/3
N2 - Multi-view Representation Learning (MRL) has recently attracted widespread attention because it can integrate information from diverse data sources to achieve better performance. However, existing MRL methods still face two issues: (1) They typically enforce various consistency objectives within the feature space, which might discard complementary information contained in each view. (2) Some methods focus only on inter-view relationships while ignoring inter-sample relationships, which are also valuable for downstream tasks. To address these issues, we propose a novel Multi-view representation learning method with Dual-label Collaborative Guidance (MDCG). Specifically, we fully exploit the valuable semantic and graph information hidden in multi-view data to collaboratively guide the learning process of MRL. By learning consistent semantic labels from distinct views, our method enhances intrinsic connections across views while preserving view-specific information, which contributes to learning a consistent and complementary unified representation. Moreover, we integrate the similarity matrices of multiple views to construct graph labels that indicate inter-sample relationships. Following the idea of self-supervised contrastive learning, the graph structure information implied in the graph labels is effectively captured by the unified representation, thus enhancing its discriminability. Extensive experiments on diverse real-world datasets demonstrate the effectiveness and superiority of MDCG compared with nine state-of-the-art methods. Our code will be available at https://github.com/Bin1Chen/MDCG.
AB - Multi-view Representation Learning (MRL) has recently attracted widespread attention because it can integrate information from diverse data sources to achieve better performance. However, existing MRL methods still face two issues: (1) They typically enforce various consistency objectives within the feature space, which might discard complementary information contained in each view. (2) Some methods focus only on inter-view relationships while ignoring inter-sample relationships, which are also valuable for downstream tasks. To address these issues, we propose a novel Multi-view representation learning method with Dual-label Collaborative Guidance (MDCG). Specifically, we fully exploit the valuable semantic and graph information hidden in multi-view data to collaboratively guide the learning process of MRL. By learning consistent semantic labels from distinct views, our method enhances intrinsic connections across views while preserving view-specific information, which contributes to learning a consistent and complementary unified representation. Moreover, we integrate the similarity matrices of multiple views to construct graph labels that indicate inter-sample relationships. Following the idea of self-supervised contrastive learning, the graph structure information implied in the graph labels is effectively captured by the unified representation, thus enhancing its discriminability. Extensive experiments on diverse real-world datasets demonstrate the effectiveness and superiority of MDCG compared with nine state-of-the-art methods. Our code will be available at https://github.com/Bin1Chen/MDCG.
KW - Contrastive learning
KW - Graph information
KW - Multi-view representation learning
KW - Semantic information
UR - https://www.scopus.com/pages/publications/85208197738
U2 - 10.1016/j.knosys.2024.112680
DO - 10.1016/j.knosys.2024.112680
M3 - Article
AN - SCOPUS:85208197738
SN - 0950-7051
VL - 305
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 112680
ER -