Unsupervised multi-domain multimodal image-to-image translation with explicit domain-constrained disentanglement.

Journal: Neural Networks: The Official Journal of the International Neural Network Society
Published Date:

Abstract

Image-to-image translation has attracted great attention in recent years. It aims to translate an image in one domain to a corresponding image in another domain. However, three major challenges remain in image-to-image translation: (1) the lack of large amounts of aligned training pairs for many tasks; (2) the ambiguity of multiple possible outputs for a single input image; and (3) the lack of simultaneous training for multi-domain translation with a single network. In this paper, we therefore propose a unified framework that learns to generate diverse outputs from unpaired training data and allows simultaneous multi-domain translation with a single model. Moreover, we observe in experiments that the implicit disentanglement of content and style can lead to undesirable results. We thus investigate how to extract domain-level signals as explicit supervision to achieve better image-to-image translation. Extensive experiments show that the proposed method outperforms, or is comparable with, state-of-the-art methods on various applications.
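
To make the content-style disentanglement and the explicit domain constraint concrete, below is a minimal PyTorch sketch. This is not the authors' implementation; every module name, layer size, and loss here is an illustrative assumption. A content encoder and a style encoder factorize an image into a domain-invariant content code and a domain-specific style code, a decoder recombines them, and a small domain classifier on the style code stands in for the kind of explicit domain-level supervision the abstract describes.

# Minimal sketch of content/style disentanglement with an explicit domain
# constraint. All names, layer sizes, and losses are illustrative assumptions,
# not the paper's actual architecture.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an image to a spatial, domain-invariant content code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=1, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Maps an image to a flat style code capturing domain-specific appearance."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=1, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, style_dim)
    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

class Decoder(nn.Module):
    """Reconstructs an image from a content code and a style code."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, 256)
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),
        )
    def forward(self, content, style):
        # Inject style by adding a projected style vector at every spatial location.
        s = self.style_proj(style)[:, :, None, None]
        return self.net(content + s)

class DomainClassifier(nn.Module):
    """Predicts the domain label from the style code: a stand-in for the
    explicit, domain-level supervision constraining the disentanglement."""
    def __init__(self, style_dim=8, num_domains=3):
        super().__init__()
        self.fc = nn.Linear(style_dim, num_domains)
    def forward(self, style):
        return self.fc(style)

# Usage: translate an image from domain a to domain b by recombining codes.
E_c, E_s = ContentEncoder(), StyleEncoder()
G, D_cls = Decoder(), DomainClassifier()
x_a, x_b = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
content_a = E_c(x_a)               # content code from the source image
style_b = E_s(x_b)                 # style code from a target-domain image
x_ab = G(content_a, style_b)       # translated image: content of a, style of b
domain_logits = D_cls(style_b)     # explicit domain constraint on the style code
loss_domain = nn.functional.cross_entropy(domain_logits, torch.tensor([1]))

Under these assumptions, translation amounts to decoding the content code of one image with a style code from another domain, and the cross-entropy term ties each style code to its domain label, which is one simple way to realize an explicit rather than implicit disentanglement constraint.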

Authors

  • Weihao Xia
    Tsinghua University, China. Electronic address: xiawh3@outlook.com.
  • Yujiu Yang
Open FIESTA Center, Tsinghua University, Shenzhen, China.
  • Jing-Hao Xue
    Department of Statistical Science, University College London, UK. Electronic address: jinghao.xue@ucl.ac.uk.