TY - JOUR
T1 - Understanding the Dimensional Need of Noncontrastive Learning
AU - Cao, Zhexiao
AU - Huang, Lei
AU - Wang, Tian
AU - Wang, Yinquan
AU - Shi, Jingang
AU - Zhu, Aichun
AU - Shi, Tianyun
AU - Snoussi, Hichem
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Noncontrastive self-supervised learning methods offer an effective alternative to contrastive approaches by eliminating the need for negative samples to prevent representation collapse. To provide negative samples, contrastive methods typically require large batch sizes and are therefore regarded as sample inefficient; noncontrastive methods, which explicitly or implicitly optimize the representation space, instead require large representation dimensions and are therefore regarded as dimension inefficient. Despite growing understanding of noncontrastive learning, a theoretical analysis of this phenomenon remains largely unexplored. We present a theoretical analysis of the dimensional need of noncontrastive learning. We investigate the transfer between upstream representation learning and downstream task performance, demonstrating how noncontrastive methods implicitly increase interclass distances in the representation space and how these distances affect downstream evaluation performance. We prove that the performance of noncontrastive methods depends on both the output dimension and the number of latent classes, and we explain why performance degrades significantly when the output dimension is substantially smaller than the number of latent classes. We verify our findings through image classification experiments and extend the verification to audio, graph, and text modalities. We also empirically evaluate image models on extensive detection and segmentation tasks beyond classification, observing satisfactory correspondence with our theorem.
AB - Noncontrastive self-supervised learning methods offer an effective alternative to contrastive approaches by eliminating the need for negative samples to prevent representation collapse. To provide negative samples, contrastive methods typically require large batch sizes and are therefore regarded as sample inefficient; noncontrastive methods, which explicitly or implicitly optimize the representation space, instead require large representation dimensions and are therefore regarded as dimension inefficient. Despite growing understanding of noncontrastive learning, a theoretical analysis of this phenomenon remains largely unexplored. We present a theoretical analysis of the dimensional need of noncontrastive learning. We investigate the transfer between upstream representation learning and downstream task performance, demonstrating how noncontrastive methods implicitly increase interclass distances in the representation space and how these distances affect downstream evaluation performance. We prove that the performance of noncontrastive methods depends on both the output dimension and the number of latent classes, and we explain why performance degrades significantly when the output dimension is substantially smaller than the number of latent classes. We verify our findings through image classification experiments and extend the verification to audio, graph, and text modalities. We also empirically evaluate image models on extensive detection and segmentation tasks beyond classification, observing satisfactory correspondence with our theorem.
KW - Generalization analysis
KW - non-contrastive learning
KW - self-supervised learning
UR - https://www.scopus.com/pages/publications/105009862067
U2 - 10.1109/TCYB.2025.3577745
DO - 10.1109/TCYB.2025.3577745
M3 - Article
C2 - 40608878
AN - SCOPUS:105009862067
SN - 2168-2267
VL - 55
SP - 4089
EP - 4102
JO - IEEE Transactions on Cybernetics
JF - IEEE Transactions on Cybernetics
IS - 9
ER -