Quadruplet-Based Deep Cross-Modal Hashing

Journal: Computational Intelligence and Neuroscience

Abstract

Recently, benefiting from the storage and retrieval efficiency of hashing and the powerful discriminative feature extraction capability of deep neural networks, deep cross-modal hashing retrieval has attracted increasing attention. To preserve the semantic similarities of cross-modal instances during hash mapping, most existing deep cross-modal hashing methods learn their hashing networks with a pairwise loss or a triplet loss. However, these losses may not fully exploit the similarity relations across modalities. To address this problem, we introduce a quadruplet loss into deep cross-modal hashing and propose a quadruplet-based deep cross-modal hashing method (termed QDCMH). Extensive experiments on two benchmark cross-modal retrieval datasets show that the proposed method achieves state-of-the-art performance and demonstrate the effectiveness of the quadruplet loss in cross-modal hashing.
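The abstract does not spell the loss out, but a generic quadruplet margin loss (in the spirit of Chen et al.'s quadruplet network for metric learning) can be sketched in a few lines. The PyTorch sketch below is an illustration under stated assumptions, not the paper's exact QDCMH objective: the margin values, the use of squared Euclidean distance, and the `quadruplet_loss` name are all hypothetical.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=1.0, margin2=0.5):
    # Squared Euclidean distances between the relevant pairs.
    d_ap = F.pairwise_distance(anchor, positive) ** 2      # same category
    d_an = F.pairwise_distance(anchor, negative1) ** 2     # different category
    d_nn = F.pairwise_distance(negative1, negative2) ** 2  # two other categories
    # Triplet-style term: the positive pair should be closer than the
    # anchor-negative pair by at least margin1.
    term1 = F.relu(d_ap - d_an + margin1)
    # Quadruplet term: the positive pair should also be closer than a pair
    # of mutually unrelated negatives by margin2, shrinking intra-class
    # variance relative to inter-class distances.
    term2 = F.relu(d_ap - d_nn + margin2)
    return (term1 + term2).mean()

# Toy usage with random 16-bit relaxed (continuous) hash codes, batch of 8.
if __name__ == "__main__":
    a, p, n1, n2 = (torch.randn(8, 16) for _ in range(4))
    print(quadruplet_loss(a, p, n1, n2))
```

In a cross-modal setting, `anchor` might be a relaxed hash code from the image network, with `positive`, `negative1`, and `negative2` produced by the text network from a same-category text and two texts of two further distinct categories; this sampling scheme is likewise an assumption here, not a detail confirmed by the abstract.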

Authors

  • Huan Liu
    Department of Chemical and Biochemical Engineering, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, Fujian, China.
  • Jiang Xiong
    Key Laboratory of Intelligent Information Processing and Control, Chongqing Municipal Institutions of Higher Education, Chongqing Three Gorges University, Chongqing 40044, China.
  • Nian Zhang
Department of Electrical and Computer Engineering, University of the District of Columbia, Washington, DC 20008, USA.
  • Fuming Liu
    Key Laboratory of Intelligent Information Processing and Control, Chongqing Municipal Institutions of Higher Education, Chongqing Three Gorges University, Chongqing 40044, China.
  • Xitao Zou
    Key Laboratory of Intelligent Information Processing and Control, Chongqing Municipal Institutions of Higher Education, Chongqing Three Gorges University, Chongqing 40044, China.