3D-Aided Dual-Agent GANs for Unconstrained Face Recognition

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Published Date:

Abstract

Synthesizing realistic profile faces is beneficial for more efficiently training deep pose-invariant models for large-scale unconstrained face recognition, by augmenting the number of samples with extreme poses and avoiding costly annotation work. However, learning from synthetic faces may not achieve the desired performance due to the discrepancy between the distributions of synthetic and real face images. To narrow this gap, we propose a Dual-Agent Generative Adversarial Network (DA-GAN) model, which can improve the realism of a face simulator's output using unlabeled real faces while preserving the identity information during the realism refinement. The dual agents are specially designed for distinguishing real versus fake and identities simultaneously. In particular, we employ an off-the-shelf 3D face model as a simulator to generate profile face images with varying poses. DA-GAN leverages a fully convolutional network as the generator to generate high-resolution images and an auto-encoder as the discriminator with the dual agents. Besides the novel architecture, we make several key modifications to the standard GAN to preserve pose, texture, and identity, and to stabilize the training process: (i) a pose perception loss; (ii) an identity perception loss; (iii) an adversarial loss with a boundary equilibrium regularization term. Experimental results show that DA-GAN not only achieves outstanding perceptual results but also significantly outperforms state-of-the-art methods on the large-scale and challenging NIST IJB-A and CFP unconstrained face recognition benchmarks. In addition, the proposed DA-GAN is also a promising new approach for solving generic transfer learning problems more effectively. DA-GAN is the foundation of our winning entry to the NIST IJB-A face recognition competition, in which we secured first place on both the verification and identification tracks.
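The abstract describes a generator objective built from three terms and a boundary equilibrium regularization on the adversarial loss (as in BEGAN-style training). The following is a minimal, hedged sketch of that training logic, not the authors' implementation: the loss weights, function names, and the form of the weighted sum are illustrative assumptions; only the general structure (three loss terms plus a clipped equilibrium control variable) follows the abstract.

```python
def boundary_equilibrium_update(k, loss_real, loss_fake, gamma=0.5, lam=0.001):
    """BEGAN-style equilibrium control update:
    k <- k + lam * (gamma * L(real) - L(fake)), clipped to [0, 1].
    gamma balances discriminator effort between real and fake samples;
    the specific gamma/lam values here are assumptions, not from the paper.
    """
    k = k + lam * (gamma * loss_real - loss_fake)
    return min(max(k, 0.0), 1.0)


def generator_loss(adv_loss, pose_loss, identity_loss, w_pose=1.0, w_id=1.0):
    """Illustrative combined generator objective: adversarial loss plus the
    pose perception and identity perception losses named in the abstract.
    The weights w_pose and w_id are hypothetical placeholders.
    """
    return adv_loss + w_pose * pose_loss + w_id * identity_loss
```

In practice the pose and identity perception losses would be computed from feature activations of pretrained pose and identity networks; this sketch only shows how the terms combine and how the equilibrium variable keeps adversarial training stable.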

Authors

  • Jian Zhao
    Key Laboratory of Intelligent Rehabilitation and Barrier-Free for the Disabled (Changchun University), Ministry of Education, Changchun University, Changchun 130012, China.
  • Lin Xiong
    Key Laboratory of Drug-Targeting and Drug Delivery System of the Education Ministry and Sichuan Province, Sichuan Engineering Laboratory for Plant-Sourced Drug and Sichuan Research Center for Drug Precision Industrial Technology, Med-X Center for Materials, West China School of Pharmacy, Sichuan University, Chengdu, 610041, China.
  • Jianshu Li
  • Junliang Xing
  • Shuicheng Yan
  • Jiashi Feng