Understanding and mitigating dimensional collapse of Graph Contrastive Learning: A non-maximum removal approach.
Journal:
Neural Networks: The Official Journal of the International Neural Network Society
Published Date:
Aug 22, 2024
Abstract
Graph Contrastive Learning (GCL) generates graph-level embeddings by maximizing the mutual information between different augmented views of the same graph (positive pairs), and shows promising performance in graph representation learning (GRL) without supervision from manual annotations. However, GCL suffers from a dimensional collapse problem: embedding vectors occupy a restricted low-dimensional subspace, curtailing the expressiveness of certain embedding dimensions. In this paper, we present a theoretical analysis identifying the smoothing effect of graph pooling and the implicit regularization of graph convolution as the principal causes of dimensional collapse in GCL. To mitigate these effects, we propose a Non-Maximum Removal Graph Contrastive Learning (nmrGCL) approach, which removes "prominent" dimensions (those contributing most to the similarity measure of positive pairs) within the pretext task. Comprehensive experiments on multiple benchmark datasets show that the proposed nmrGCL outperforms state-of-the-art methods.
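The non-maximum removal idea lends itself to a compact illustration. Below is a minimal PyTorch sketch of how "prominent" dimensions, those contributing most to a positive pair's cosine similarity, might be masked before computing a standard NT-Xent contrastive loss. The function name nmr_contrastive_loss, the remove_k parameter, and the use of NT-Xent are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def nmr_contrastive_loss(z1, z2, remove_k=16, temperature=0.5):
    """Hypothetical sketch of a non-maximum-removal contrastive loss.

    z1, z2: (batch, dim) embeddings of two augmented views (positive pairs).
    remove_k: number of "prominent" dimensions to zero out per positive pair,
              i.e., those contributing most to the positive-pair similarity.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Per-dimension contribution to each positive pair's cosine similarity.
    contrib = z1 * z2                       # (batch, dim)

    # Zero out the top-k contributing dimensions for each positive pair,
    # so the remaining dimensions must carry discriminative signal.
    topk = contrib.topk(remove_k, dim=1).indices
    mask = torch.ones_like(z1)
    mask.scatter_(1, topk, 0.0)

    z1m = F.normalize(z1 * mask, dim=1)
    z2m = F.normalize(z2 * mask, dim=1)

    # Standard NT-Xent over the masked embeddings: each row's diagonal
    # entry is the positive pair, the rest of the batch are negatives.
    logits = z1m @ z2m.t() / temperature    # (batch, batch)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

Under this reading, masking the dominant dimensions pushes gradient signal onto the otherwise under-used dimensions, which is the intended counter to embeddings collapsing into a low-dimensional subspace; for example, loss = nmr_contrastive_loss(encoder(view1), encoder(view2)) would replace the usual contrastive objective during pre-training.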