A temporal-spatial feature fusion network for emotion recognition with individual differences reduction.

Journal: Neuroscience
PMID:

Abstract

PURPOSE: In EEG-based emotion recognition, a conventional strategy extracts spatial and temporal features and then fuses them for emotion prediction. However, because of the pronounced individual variability of EEG signals and the limited capacity of conventional time-series models, cross-subject experiments often yield suboptimal results. To address this limitation, we propose the Time-Space Emotion Network (TSEN), a novel network that capitalizes on the fusion of spatiotemporal information for emotion recognition.
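The abstract describes the general recipe of extracting spatial and temporal features from EEG and fusing them for prediction, but does not specify TSEN's internals. The sketch below is only a minimal illustration of that recipe under assumptions of our own: the 1-D convolutional spatial branch, the GRU temporal branch, the input dimensions, and the concatenation-based fusion are all hypothetical choices and should not be read as the authors' architecture.

```python
# Illustrative sketch only: the abstract does not describe TSEN's internals,
# so the layer choices, dimensions, and fusion scheme below are assumptions.
import torch
import torch.nn as nn


class SpatioTemporalFusionNet(nn.Module):
    """Toy temporal-spatial fusion model for EEG emotion classification.

    Input: a batch of EEG segments shaped (batch, channels, time_steps).
    A 1-D convolution mixes information across electrodes at each time step
    (spatial branch); a GRU summarises temporal dynamics (temporal branch);
    the two feature vectors are concatenated and classified.
    """

    def __init__(self, n_channels=32, n_classes=2, hidden=64):
        super().__init__()
        # Spatial branch: mix information across electrodes, then pool over time.
        self.spatial = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=1),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # -> (batch, hidden, 1)
        )
        # Temporal branch: model the sequence of per-time-step channel vectors.
        self.temporal = nn.GRU(input_size=n_channels, hidden_size=hidden,
                               batch_first=True)
        # Fusion + classifier on the concatenated spatial/temporal features.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        # x: (batch, channels, time_steps)
        spatial_feat = self.spatial(x).squeeze(-1)      # (batch, hidden)
        _, h_n = self.temporal(x.permute(0, 2, 1))      # final GRU hidden state
        temporal_feat = h_n[-1]                         # (batch, hidden)
        fused = torch.cat([spatial_feat, temporal_feat], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = SpatioTemporalFusionNet()
    dummy_eeg = torch.randn(8, 32, 128)   # 8 segments, 32 electrodes, 128 samples
    logits = model(dummy_eeg)
    print(logits.shape)                   # torch.Size([8, 2])
```

Cross-subject robustness, which the abstract highlights as the main challenge, is not addressed by this toy model; it only shows where a fusion step sits between the two feature branches.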

Authors

  • Benke Liu
    University of Shanghai for Science and Technology, Shanghai 200093, China.
  • Yongxiong Wang
    School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, PR China; Engineering Research Center of Optical Instrument and System, Ministry of Education, Shanghai Key Lab of Modern Optical System, University of Shanghai for Science and Technology, Shanghai, 200093, PR China.
  • Zhe Wang
    Department of Pathology, The Eighth Affiliated Hospital, Sun Yat-sen University, Shenzhen 518033, China.
  • Xin Wan
    SCNU Environmental Research Institute, Guangdong Provincial Key Laboratory of Chemical Pollution and Environmental Safety & MOE Key Laboratory of Theoretical Chemistry of Environment, School of Environment, South China Normal University, Guangzhou, 510006, PR China; SCNU Qingyuan Institute of Science and Technology Innovation Co, Ltd, Qingyuan 511517, PR China.
  • Chenguang Li
    Department of Neurology, Guangdong Key Laboratory for Diagnosis and Treatment of Major Neurological Diseases, National Key Clinical Department, National Key Discipline, the First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China.