Dual-view global and local category-attentive domain alignment for unsupervised conditional adversarial domain adaptation.
Journal:
Neural Networks: the official journal of the International Neural Network Society
Published Date:
Jan 8, 2025
Abstract
Conditional adversarial domain adaptation (CADA) is one of the most commonly used unsupervised domain adaptation (UDA) methods. CADA introduces multimodal information into the adversarial learning process to align the distributions of the labeled source domain and the unlabeled target domain while matching their modes. However, because CADA uses classifier predictions as the multimodal information, it supplies incorrect multimodal information for challenging target features, leading to distribution mismatch and less robust domain-invariant features. Compared with recent state-of-the-art UDA methods, CADA also suffers from poor discriminability on the target domain. To tackle these challenges, we propose a novel unsupervised CADA framework named dual-view global and local category-attentive domain alignment (DV-GLCA). Specifically, to mitigate distribution mismatch and acquire more robust domain-invariant features, we integrate dual-view information into conditional adversarial domain adaptation and exploit the substantial feature disparity between the two views to better align the multimodal structures of the source and target distributions. Moreover, to learn more discriminative target-domain features on top of dual-view conditional adversarial domain adaptation (DV-CADA), we further propose global category-attentive domain alignment (GCA). GCA combines coding rate reduction with dual-view centroid alignment to amplify inter-category domain discrepancies while reducing intra-category domain differences globally. Additionally, to handle challenging ambiguous samples during training, we propose local category-attentive domain alignment (LCA), which uses contrastive domain discrepancy in a new way to pull ambiguous samples closer to their correct categories. Our method achieves leading performance on five UDA benchmarks, and extensive experiments demonstrate its effectiveness.
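To make the conditioning mechanism the abstract refers to concrete, the following is a minimal PyTorch-style sketch of the standard conditional adversarial alignment step, in which the domain discriminator sees the outer product of features and classifier predictions (the "multimodal information"). It illustrates the general CADA/CDAN formulation only; the names DomainDiscriminator and conditional_adversarial_loss are illustrative assumptions and do not reproduce the authors' DV-GLCA code, which additionally uses dual-view features, GCA, and LCA.

```python
import torch
import torch.nn as nn

class DomainDiscriminator(nn.Module):
    """Illustrative domain classifier over conditioned features (assumed architecture)."""
    def __init__(self, in_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def conditional_adversarial_loss(features, logits, domain_labels, discriminator):
    """CADA-style conditioning: align domains on f ⊗ p, the outer product of
    features and softmax predictions, so alignment respects predicted categories."""
    preds = torch.softmax(logits, dim=1)                                  # (B, C)
    conditioned = torch.bmm(preds.unsqueeze(2), features.unsqueeze(1))    # (B, C, D)
    conditioned = conditioned.view(features.size(0), -1)                  # (B, C*D)
    domain_logits = discriminator(conditioned).squeeze(1)
    return nn.functional.binary_cross_entropy_with_logits(domain_logits, domain_labels)

# Usage sketch with hypothetical batch sizes and dimensions:
# B, D, C = 32, 256, 31
# feats = torch.randn(2 * B, D)            # source + target features
# logits = torch.randn(2 * B, C)           # task-classifier outputs
# dom = torch.cat([torch.zeros(B), torch.ones(B)])  # 0 = source, 1 = target
# disc = DomainDiscriminator(D * C)
# loss = conditional_adversarial_loss(feats, logits, dom, disc)
```

In practice this loss is minimized by the discriminator and maximized (e.g., via a gradient reversal layer) by the feature extractor; the paper's contribution lies in how the conditioning signal is made more reliable through dual-view features and category-attentive alignment.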