Unsupervised Gaze Representation Learning by Switching Features

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence

Abstract

It is common to leverage unlabeled data to train deep learning models when large-scale annotated datasets are difficult to collect. However, for 3D gaze estimation, most existing unsupervised learning methods struggle to distinguish subtle gaze-relevant information from dominant gaze-irrelevant information. To address this issue, we propose an unsupervised learning framework that disentangles gaze-relevant and gaze-irrelevant information by seeking the shared information of a pair of input images with the same gaze and with the same eye, respectively. Specifically, given two images, the framework finds their shared information by first encoding the images into two latent features via two encoders and then switching part of the features before feeding them to the decoders for image reconstruction. We theoretically prove that the proposed framework can encode different information into different parts of the latent feature if the training image pairs and their shared information are properly selected. Based on the framework, we derive Cross-Encoder and Cross-Encoder++ to learn gaze representations from eye images and face images, respectively. Experiments on public gaze datasets demonstrate that Cross-Encoder and Cross-Encoder++ outperform competing methods. The ablation study quantitatively and qualitatively shows that the gaze feature is successfully extracted.
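As a rough illustration of the feature-switching scheme the abstract describes, below is a minimal PyTorch sketch, not the authors' implementation: each latent is split into a part assumed shared by the image pair (e.g., gaze for a same-gaze pair) and a private part, and the shared parts are swapped before decoding, so reconstruction can only succeed if that slot indeed carries the pair's common information. The network names, the 36x60 eye-patch size, the dimension split, and the single weight-shared encoder are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureSwitchSketch(nn.Module):
    """Toy autoencoder illustrating latent-feature switching (hypothetical)."""

    def __init__(self, feat_dim: int = 128, shared_dim: int = 64):
        super().__init__()
        # Placeholder backbones; any image encoder/decoder could stand in here.
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(36 * 60, feat_dim))
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 36 * 60),
                                     nn.Unflatten(1, (1, 36, 60)))
        self.shared_dim = shared_dim  # size of the slot assumed shared by the pair

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor):
        # Encode both images of a pair (same gaze, or same eye).
        z_a, z_b = self.encoder(img_a), self.encoder(img_b)
        # Split each latent into a shared slot and a private slot.
        s_a, p_a = z_a[:, :self.shared_dim], z_a[:, self.shared_dim:]
        s_b, p_b = z_b[:, :self.shared_dim], z_b[:, self.shared_dim:]
        # Switch the shared slots before decoding: reconstructing each image
        # from the other's shared slot forces that slot to hold only the
        # information the pair has in common.
        rec_a = self.decoder(torch.cat([s_b, p_a], dim=1))
        rec_b = self.decoder(torch.cat([s_a, p_b], dim=1))
        return rec_a, rec_b

# Usage sketch: train by reconstructing both images after the switch.
model = FeatureSwitchSketch()
img_a, img_b = torch.rand(4, 1, 36, 60), torch.rand(4, 1, 36, 60)
rec_a, rec_b = model(img_a, img_b)
loss = F.mse_loss(rec_a, img_a) + F.mse_loss(rec_b, img_b)
```

In the paper's setting, same-gaze pairs and same-eye pairs would swap different parts of the latent, which is what disentangles the gaze feature from the appearance feature; the sketch shows only one such pair type.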

Authors

  • Yunjia Sun
  • Jiabei Zeng
  • Shiguang Shan
  • Xilin Chen
