Deep learning-based image deconstruction method with maintained saliency.

Journal: Neural Networks: the official journal of the International Neural Network Society
Published Date:

Abstract

Visual properties that primarily attract bottom-up attention are collectively referred to as saliency. In this study, to understand the neural activity involved in top-down and bottom-up visual attention, we aim to prepare pairs of natural and unnatural images with common saliency. For this purpose, we propose an image transformation method based on deep neural networks that can generate new images while maintaining a consistent feature map, in particular the saliency map. This is an ill-posed problem because the transformation from an image to its corresponding feature map can be many-to-one; in our particular case, many different images would share the same saliency map. Although stochastic image generation has the potential to solve such ill-posed problems, most existing methods focus on adding diversity to the overall style/touch information while maintaining the naturalness of the generated images. We therefore developed a new image transformation method that incorporates higher-dimensional latent variables, so that the generated images appear unnatural, with less context information, but retain a high diversity of local image structures. Because such high-dimensional latent spaces are prone to collapse, we propose a new regularization based on the Kullback-Leibler divergence that prevents the latent distribution from collapsing. We also conducted human experiments using the newly prepared pairs of natural and unnatural images, measuring overt eye movements and functional magnetic resonance imaging signals, and found that these images induced distinctive neural activities related to top-down and bottom-up attentional processing.
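The abstract mentions a Kullback-Leibler regularization that keeps a high-dimensional latent distribution from collapsing. The paper's exact formulation is not given here; as a minimal sketch, the code below shows the standard VAE-style form of such a term, the KL divergence of a diagonal-Gaussian posterior from a standard normal prior, which goes to zero only when the posterior matches the prior (all names and values are illustrative, not the authors'):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.

    Closed form for diagonal Gaussians:
        0.5 * sum(exp(log_var) + mu^2 - 1 - log_var)
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Hypothetical latent statistics for a batch of 2 samples, 4-dim latent.
mu = np.array([[0.0, 0.0, 0.0, 0.0],
               [1.0, -1.0, 0.5, 0.0]])
log_var = np.zeros_like(mu)  # unit variance in every dimension

kl = kl_to_standard_normal(mu, log_var)
# kl -> [0.0, 1.125]: a posterior identical to the prior incurs zero
# penalty; the regularizer grows as the posterior drifts from it.
```

In training, such a term would typically be added to the reconstruction loss with a weighting coefficient, trading diversity of the latent codes against fidelity.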

Authors

  • Keisuke Fujimoto
    Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan.
  • Kojiro Hayashi
    Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan.
  • Risa Katayama
    Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan.
  • Sehyung Lee
    Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University, Japan. Electronic address: sehyung@sys.i.kyoto-u.ac.jp.
  • Zhen Liang
    Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan; School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China. Electronic address: jane-l@sys.i.kyoto-u.ac.jp.
  • Wako Yoshida
    Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom.
  • Shin Ishii
    Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo Ward, Kyoto, 606-8501, Japan.