Adaptive decoupling-fusion in Siamese network for image classification.
Journal:
Neural Networks: The Official Journal of the International Neural Network Society
PMID:
40101559
Abstract
Convolutional neural networks (CNNs) are highly regarded for their ability to extract semantic information from visual inputs, yet this very capability often causes important visual details to be lost. In this paper, we introduce Adaptive Decoupling Fusion (ADF), a method designed to preserve these valuable visual details and to integrate seamlessly with existing hierarchical models. Our approach retains and leverages appearance information from the network's shallow layers to enhance semantic understanding. We first decouple the appearance information from one branch of a Siamese network and embed it into the deep feature space of the other branch. This facilitates a synergistic interaction: one branch supplies appearance information that benefits semantic understanding, while the other integrates this information into the semantic space. Traditional Siamese networks typically use shared weights, which constrains the diversity of features that can be learned. To address this, we propose a differentiated collaborative learning strategy in which both branches receive the same input but are each trained with their own cross-entropy loss, allowing them to maintain distinct weights and enhancing the network's adaptability to specific tasks. To further optimize the decoupling and fusion, we introduce a Mapper module featuring depthwise separable convolution and a gated fusion mechanism, which regulates the information flow between branches and balances appearance against semantic information. Under fully supervised conditions, using only minimal data augmentation, we achieve a top-1 accuracy of 81.11% on the ImageNet-1k dataset with ADF-ResNeXt-101.
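
The abstract does not give the Mapper's exact formulation, so the following is a minimal PyTorch sketch of one plausible reading: a depthwise separable convolution projects shallow appearance features into the semantic feature space, and a learned sigmoid gate balances the two streams. The class name Mapper (borrowed from the abstract), the channel arguments, and the gate design are illustrative assumptions, not the paper's verified architecture.

    # Sketch of a Mapper-style gated fusion module. All shapes and the
    # gate formulation are assumptions made for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Mapper(nn.Module):
        def __init__(self, appearance_channels: int, semantic_channels: int):
            super().__init__()
            # Depthwise separable convolution: per-channel spatial filtering
            # followed by a 1x1 pointwise projection into the semantic width.
            self.depthwise = nn.Conv2d(
                appearance_channels, appearance_channels, kernel_size=3,
                padding=1, groups=appearance_channels, bias=False,
            )
            self.pointwise = nn.Conv2d(
                appearance_channels, semantic_channels, kernel_size=1, bias=False,
            )
            self.norm = nn.BatchNorm2d(semantic_channels)
            # Gate deciding, per channel and position, how much appearance
            # information to inject into the semantic feature map.
            self.gate = nn.Sequential(
                nn.Conv2d(2 * semantic_channels, semantic_channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, appearance: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
            # Resize shallow features to the deep feature resolution.
            a = F.adaptive_avg_pool2d(appearance, semantic.shape[-2:])
            a = self.norm(self.pointwise(self.depthwise(a)))
            g = self.gate(torch.cat([a, semantic], dim=1))
            # Gated fusion: a convex-style mix of appearance and semantic cues.
            return g * a + (1.0 - g) * semantic

The gate outputs values in (0, 1), so the module can smoothly interpolate between passing the semantic features through unchanged and replacing them with projected appearance detail, which matches the abstract's description of balancing the two kinds of information.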
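Likewise, the differentiated collaborative learning described above can be read as two unshared-weight branches supervised independently on the same input. The sketch below, with the assumed names branch_a and branch_b standing in for the two backbones, illustrates that reading rather than the authors' exact training procedure.

    # Sketch (assumed, not from the paper) of one training step for two
    # Siamese branches that see the same images but keep distinct weights
    # because each contributes its own cross-entropy loss.
    import torch
    import torch.nn as nn

    def training_step(branch_a: nn.Module, branch_b: nn.Module,
                      images: torch.Tensor, labels: torch.Tensor,
                      optimizer: torch.optim.Optimizer) -> torch.Tensor:
        criterion = nn.CrossEntropyLoss()
        logits_a = branch_a(images)   # branch supplying appearance cues
        logits_b = branch_b(images)   # branch integrating them semantically
        # Separate loss terms keep the weights unshared and task-adaptive.
        loss = criterion(logits_a, labels) + criterion(logits_b, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.detach()

    # Example setup: one optimizer over both branches' parameters, e.g.
    # optimizer = torch.optim.SGD(
    #     list(branch_a.parameters()) + list(branch_b.parameters()), lr=0.1)

Summing the two cross-entropy terms lets each branch develop its own weights while both are trained on identical inputs, consistent with the abstract's point that shared-weight Siamese networks would otherwise constrain feature diversity.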