Differentiable self-supervised clustering with intrinsic interpretability.
Journal:
Neural Networks: the official journal of the International Neural Network Society
Published Date:
Jul 24, 2024
Abstract
Self-supervised clustering has garnered widespread attention due to its ability to discover latent clustering structures without the need for external labels. However, most existing self-supervised clustering approaches lack inherent interpretability in the data clustering process. In this paper, we propose a differentiable self-supervised clustering method with intrinsic interpretability (DSC2I), which provides an interpretable data clustering mechanism by reformulating the clustering process based on differentiable programming. To be specific, we first design a differentiable mutual information measurement to explicitly train a neural network with analytical gradients, which avoids variational inference and learns a discriminative and compact representation. Then, an interpretable clustering mechanism based on differentiable programming is devised to transform the fundamental clustering process (i.e., minimizing intra-cluster distance and maximizing inter-cluster distance) into neural networks and convert cluster centers into learnable neural parameters, which allows us to obtain a transparent and interpretable clustering layer. Finally, a unified optimization method is designed, in which the differentiable representation learning and the interpretable clustering can be optimized simultaneously in a self-supervised manner. Extensive experiments demonstrate the effectiveness of the proposed DSC2I method compared with 16 clustering approaches.
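To make the idea of a clustering layer with learnable cluster centers concrete, the following is a minimal PyTorch sketch, not the authors' implementation: cluster centers are registered as trainable parameters, soft assignments are computed from distances, and the loss combines an intra-cluster compactness term with an inter-cluster separation term. All names (InterpretableClusteringLayer, clustering_loss, margin) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class InterpretableClusteringLayer(nn.Module):
    def __init__(self, n_clusters: int, embed_dim: int):
        super().__init__()
        # Cluster centers as learnable neural parameters.
        self.centers = nn.Parameter(torch.randn(n_clusters, embed_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distances between embeddings and centers: (B, K).
        dist = torch.cdist(z, self.centers).pow(2)
        # Soft, differentiable cluster assignments (trainable end to end).
        return F.softmax(-dist, dim=1)


def clustering_loss(z, assignments, centers, margin=1.0):
    # Intra-cluster term: pull each embedding toward its softly assigned centers.
    dist = torch.cdist(z, centers).pow(2)            # (B, K)
    intra = (assignments * dist).sum(dim=1).mean()
    # Inter-cluster term: push distinct centers at least `margin` apart.
    center_dist = torch.cdist(centers, centers)      # (K, K)
    off_diag = ~torch.eye(centers.size(0), dtype=torch.bool, device=centers.device)
    inter = F.relu(margin - center_dist[off_diag]).mean()
    return intra + inter


# Example: cluster 2-D embeddings from an encoder into 3 groups.
layer = InterpretableClusteringLayer(n_clusters=3, embed_dim=2)
z = torch.randn(8, 2)                      # embeddings produced by an encoder
q = layer(z)                               # soft assignments, shape (8, 3)
loss = clustering_loss(z, q, layer.centers)
loss.backward()                            # analytical gradients w.r.t. the centers

Because the centers are ordinary network parameters and the assignments depend only on distances to them, the resulting clustering decision can be read off directly from the learned centers, which is the sense in which such a layer is transparent.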