Bidirectional Semantic Consistency Guided Contrastive Embedding for Generative Zero-Shot Learning.
Journal:
Neural Networks: The Official Journal of the International Neural Network Society
Publication Date:
Mar 29, 2025
Abstract
Generative zero-shot learning methods synthesize features for unseen classes by learning from image features and class semantic vectors, effectively mitigating the bias toward seen classes that arises when transferring knowledge to unseen classes. However, existing methods employ global image features directly, without incorporating semantic information, and therefore cannot ensure that the features synthesized for unseen classes remain semantically consistent; as a result, the synthesized features lack discriminative power. To address these limitations, we propose a Bidirectional Semantic Consistency Guided (BSCG) generation model. BSCG uses a Bidirectional Semantic Guidance Framework (BSGF) that combines Attribute-to-Visual Guidance (AVG) and Visual-to-Attribute Guidance (VAG) to strengthen the interaction and mutual learning between visual features and attribute semantics. In addition, we propose a Contrastive Consistency Space (CCS) that further improves feature quality by increasing intra-class compactness and inter-class separability, ensuring robust knowledge transfer and stronger generalization. Extensive experiments on three benchmark datasets show that BSCG significantly outperforms existing state-of-the-art approaches in both conventional and generalized zero-shot learning settings. The code is available at: https://github.com/ithicker/BSCG.
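To make the CCS idea concrete, the sketch below shows a generic supervised contrastive objective of the kind the abstract describes: embeddings of the same class are pulled together (intra-class compactness) and embeddings of different classes are pushed apart (inter-class separability). This is a minimal illustration only, assuming PyTorch; the function name `ccs_contrastive_loss` and the `temperature` setting are hypothetical placeholders, not the authors' implementation, which is available in the linked repository.

```python
# Minimal sketch of a supervised contrastive loss in the spirit of the
# Contrastive Consistency Space (CCS): same-class pairs act as positives,
# all other batch samples as negatives. Hypothetical names throughout;
# see https://github.com/ithicker/BSCG for the authors' actual code.
import torch
import torch.nn.functional as F

def ccs_contrastive_loss(embeddings: torch.Tensor,
                         labels: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over a batch of feature embeddings
    (which could mix real and synthesized features)."""
    z = F.normalize(embeddings, dim=1)        # unit-norm embeddings
    sim = z @ z.t() / temperature             # scaled cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    # positives: same label, excluding each sample's similarity to itself
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # row-wise log-softmax, excluding the diagonal self-similarity
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~pos_mask, 0.0)

    # average log-probability of positives per anchor; anchors with no
    # positive in the batch are skipped
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    mean_log_prob_pos = log_prob.sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()

# Usage example with random features standing in for generator output:
feats = torch.randn(8, 128)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = ccs_contrastive_loss(feats, labels)
```

Minimizing a loss of this form drives same-class embeddings toward each other and apart from other classes, which is the compactness/separability property the CCS component is described as enforcing on real and synthesized features.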