A Plug-in Method for Representation Factorization in Connectionist Models.

Journal: IEEE Transactions on Neural Networks and Learning Systems

Abstract

In this article, we focus on decomposing latent representations in generative adversarial networks, or learned feature representations in deep autoencoders, into semantically controllable factors in a semisupervised manner, without modifying the original trained models. In particular, we propose the factors' decomposer-entangler network (FDEN), which learns to decompose a latent representation into mutually independent factors. Given a latent representation, the proposed framework draws a set of interpretable factors, each aligned to an independent factor of variation, by minimizing their total correlation through information-theoretic means. As a plug-in method, we applied FDEN to the existing networks of adversarially learned inference and pioneer network, and performed the computer vision tasks of semantic image-to-image translation (e.g., changing styles while preserving the identity of a subject) and object classification in a few-shot learning scheme. We also validated the effectiveness of the proposed method through qualitative, quantitative, and statistical examination in various ablation studies.
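The plug-in idea described above — keeping a pretrained model frozen while a separate decomposer splits its latent code into factors and an entangler reconstructs it — can be sketched in miniature. The sketch below is a toy NumPy illustration under stated assumptions, not the authors' implementation: FDEN estimates total correlation adversarially, whereas here we use a simple Gaussian proxy (off-diagonal covariance mass between factor dimensions), and all names (`decompose`, `entangle`, `total_correlation_proxy`, the linear weight lists) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose(z, W_list):
    # Decomposer: split a latent code z into k candidate factors
    # via separate linear maps (hypothetical stand-in for FDEN's networks).
    return [z @ W for W in W_list]

def entangle(factors, V_list):
    # Entangler: map the factors back to the original latent space,
    # so the frozen pretrained model never needs to be modified.
    return sum(f @ V for f, V in zip(factors, V_list))

def total_correlation_proxy(factors):
    # Gaussian proxy for total correlation: magnitude of off-diagonal
    # covariance between all factor dimensions. Zero iff the (jointly
    # Gaussian) dimensions are uncorrelated; FDEN itself minimizes TC
    # with an adversarial, information-theoretic estimator instead.
    F = np.concatenate(factors, axis=1)
    C = np.cov(F, rowvar=False)
    return float(np.abs(C - np.diag(np.diag(C))).sum())

# Toy demo: a batch of latent codes from a "pretrained" model.
z = rng.normal(size=(64, 8))                            # frozen model's latents
W_list = [rng.normal(size=(8, 4)) for _ in range(2)]    # decomposer weights
V_list = [rng.normal(size=(4, 8)) for _ in range(2)]    # entangler weights

factors = decompose(z, W_list)                          # two 4-dim factors
z_hat = entangle(factors, V_list)                       # reconstructed latent
recon_err = float(np.mean((z - z_hat) ** 2))            # reconstruction term
tc = total_correlation_proxy(factors)                   # independence term
```

In training, one would minimize a weighted sum of `recon_err` (so the factors retain the original representation) and `tc` (so the factors become mutually independent), updating only the decomposer/entangler weights while the original model stays untouched.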

Authors

  • Jee Seok Yoon
    Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea.
  • Myung-Cheol Roh
  • Heung-Il Suk
    Biomedical Research Imaging Center (BRIC) and Department of Radiology, University of North Carolina, Chapel Hill, NC, 27599, USA, hsuk@med.unc.edu.