A visual encoding model based on deep neural networks and transfer learning for brain activity measured by functional magnetic resonance imaging.

Journal: Journal of Neuroscience Methods
Published Date:

Abstract

BACKGROUND: Building visual encoding models that accurately predict visual responses is a central challenge for current vision-based brain-machine interface techniques. To achieve high prediction accuracy on neural signals, visual encoding models need both precise visual features and appropriate prediction algorithms. Most existing visual encoding models employ hand-crafted visual features (e.g., Gabor wavelets or semantic labels) or data-driven features (e.g., features extracted from deep neural networks (DNNs)). They also assume a linear mapping from feature representations to brain activity. However, it remains unknown whether such a linear mapping is sufficient for maximizing prediction accuracy.
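To make the encoding-model pipeline described above concrete, the following is a minimal sketch (not the authors' code): it uses a pretrained CNN as the data-driven feature extractor and a ridge regression as the linear mapping from features to voxel responses. The choice of network, layer, regularization strength, and the synthetic data shapes are all illustrative assumptions, and real usage would substitute preprocessed stimulus images and measured fMRI amplitudes.

```python
# Sketch of a DNN-feature + linear-mapping visual encoding model.
# Assumes PyTorch/torchvision and scikit-learn are available; all
# hyperparameters and data shapes here are placeholders.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# 1. Feature extractor: a pretrained CNN stands in for the data-driven features.
cnn = models.alexnet(weights="IMAGENET1K_V1").eval()

def dnn_features(images: torch.Tensor) -> np.ndarray:
    """Flattened activations from the convolutional stage of the network."""
    with torch.no_grad():
        feats = cnn.features(images)           # (N, C, H, W)
    return feats.flatten(start_dim=1).numpy()  # (N, C*H*W)

# 2. Synthetic stand-ins for stimuli and fMRI responses (real data would be
#    preprocessed stimulus images and voxel-wise BOLD response amplitudes).
n_train, n_test, n_voxels = 200, 50, 100
train_imgs = torch.rand(n_train, 3, 224, 224)
test_imgs = torch.rand(n_test, 3, 224, 224)
X_train, X_test = dnn_features(train_imgs), dnn_features(test_imgs)
y_train = np.random.randn(n_train, n_voxels)   # placeholder voxel responses
y_test = np.random.randn(n_test, n_voxels)

# 3. The linear mapping questioned in the abstract: ridge regression from
#    DNN features to every voxel, fit jointly and evaluated on held-out data.
linear_map = Ridge(alpha=1.0).fit(X_train, y_train)
pred = linear_map.predict(X_test)
print("Held-out R^2 (averaged over voxels):", r2_score(y_test, pred))
```

Replacing the `Ridge` step with a nonlinear regressor is the natural way to test whether the linear mapping assumption limits prediction accuracy.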

Authors

  • Chi Zhang
    Department of Thoracic Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
  • Kai Qiao
    National Digital Switching System Engineering and Technological Research Centre, Zhengzhou, China.
  • Linyuan Wang
    National Digital Switching System Engineering and Technological Research Centre, Zhengzhou, China.
  • Li Tong
Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA.
  • Guoen Hu
National Digital Switching System Engineering and Technological Research Centre, Zhengzhou 450000, China. Electronic address: 13838265028@126.com.
  • Ru-Yuan Zhang
Center for Magnetic Resonance Imaging, Department of Neuroscience, University of Minnesota at Twin Cities, MN 55108, USA. Electronic address: zhan1217@umn.edu.
  • Bin Yan
    National Digital Switching System Engineering and Technological Research Centre, Zhengzhou, China.