TY - GEN
T1 - Joint Contrastive Learning for Image Clustering
AU - Yan, Yuxuan
AU - Lu, Na
AU - Yan, Ruofan
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2023
Y1 - 2023
N2 - Recent advances in contrastive learning promote the research of many downstream tasks, especially deep clustering, which explores the potential semantic connections for unlabeled samples. However, these contrastive-based clustering methods focus on positive pairs in the pairwise contrastive framework and ignore the latent semantic relations of negative pairs, causing semantic information distortion in the embedding space. In this paper, we propose joint distribution contrastive learning (JCL), an unsupervised image clustering method that encodes the semantic structures of negative pairs into the learned embedding space. Specifically, JCL introduces latent class variables and models the discrimination task as a maximum average class conditional likelihood estimation, encouraging negative pairs with the same semantic information to be closer in the embedding space. The proposed joint contrastive loss of JCL is a negative-wise contrastive loss and serves as the objective function for deep clustering. JCL is a simple end-to-end online deep contrastive clustering method that jointly exploits positive and negative pairs and synchronously learns representation and clustering to optimize the network. Extensive experiments on moderate-scale image clustering benchmarks demonstrate that JCL remarkably outperforms the state-of-the-art methods.
AB - Recent advances in contrastive learning promote the research of many downstream tasks, especially deep clustering, which explores the potential semantic connections for unlabeled samples. However, these contrastive-based clustering methods focus on positive pairs in the pairwise contrastive framework and ignore the latent semantic relations of negative pairs, causing semantic information distortion in the embedding space. In this paper, we propose joint distribution contrastive learning (JCL), an unsupervised image clustering method that encodes the semantic structures of negative pairs into the learned embedding space. Specifically, JCL introduces latent class variables and models the discrimination task as a maximum average class conditional likelihood estimation, encouraging negative pairs with the same semantic information to be closer in the embedding space. The proposed joint contrastive loss of JCL is a negative-wise contrastive loss and serves as the objective function for deep clustering. JCL is a simple end-to-end online deep contrastive clustering method that jointly exploits positive and negative pairs and synchronously learns representation and clustering to optimize the network. Extensive experiments on moderate-scale image clustering benchmarks demonstrate that JCL remarkably outperforms the state-of-the-art methods.
KW - contrastive learning
KW - deep clustering
KW - unsupervised learning
UR - https://www.scopus.com/pages/publications/85191415976
U2 - 10.1109/CRC60659.2023.10488513
DO - 10.1109/CRC60659.2023.10488513
M3 - Conference Paper
AN - SCOPUS:85191415976
T3 - 2023 8th International Conference on Control, Robotics and Cybernetics, CRC 2023
SP - 310
EP - 314
BT - 2023 8th International Conference on Control, Robotics and Cybernetics, CRC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th International Conference on Control, Robotics and Cybernetics, CRC 2023
Y2 - 22 December 2023 through 24 December 2023
ER -