Research on emotion recognition using sparse EEG channels and cross-subject modeling based on CNN-KAN-[Formula: see text] model.

Journal: PLOS ONE

Abstract

Emotion recognition plays a significant role in artificial intelligence and human-computer interaction. Electroencephalography (EEG) signals, because they directly reflect brain activity, have become an essential tool in emotion recognition research. However, the low dimensionality of sparse-channel EEG data makes it difficult to extract effective features. This paper proposes a sparse-channel EEG emotion recognition method based on the CNN-KAN-[Formula: see text] network to address the challenges of limited feature extraction and cross-subject variability. Through a feature mapping strategy, the method maps Differential Entropy (DE), Power Spectral Density (PSD), and Emotion Valence Index-Asymmetry Index (EVI-ASI) features to pseudo-RGB images, integrating the frequency-domain and spatial information of the sparse channels and providing multi-dimensional input for CNN feature extraction. By combining the KAN module with a fast Fourier transform-based [Formula: see text] attention mechanism, the model fuses frequency-domain and spatial features for accurate classification of complex emotional signals. Experimental results show that the CNN-KAN-[Formula: see text] model performs comparably to multi-channel models while using only four EEG channels. By training on short-time segments, the model reduces the impact of individual differences and significantly improves generalization in cross-subject emotion recognition. Extensive experiments on the SEED and DEAP datasets demonstrate the method's superior performance in emotion classification. In the merged-dataset experiments, accuracy reached 97.985% on the SEED three-class task and 91.718% on the DEAP four-class task; in the subject-dependent experiments, the average accuracy was 97.45% on SEED and 89.16% on DEAP.
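The abstract does not give the exact feature-to-image mapping, so the following is only a minimal sketch of how DE, PSD, and asymmetry-style features from four channels could be arranged into a pseudo-RGB tensor. The band definitions, the Fp1/Fp2/T7/T8 channel choice, the specific asymmetry formulas, and the 2x2-style layout are all illustrative assumptions, not the paper's published mapping.

```python
# Hedged sketch: band features from 4 sparse EEG channels -> pseudo-RGB image.
# Bands, channel selection, and the EVI-ASI form are assumptions for illustration.
import numpy as np
from scipy.signal import welch

FS = 128                     # sampling rate (Hz); DEAP uses 128 Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_features(x, fs=FS):
    """Per-band PSD and DE for one channel of a short EEG segment."""
    f, pxx = welch(x, fs=fs, nperseg=fs)               # power spectral density
    psd, de = [], []
    for lo, hi in BANDS.values():
        p = pxx[(f >= lo) & (f < hi)].mean()
        psd.append(p)
        de.append(0.5 * np.log(2 * np.pi * np.e * p))  # DE of a Gaussian band
    return np.array(psd), np.array(de)

def pseudo_rgb(segment, fp1=0, fp2=1, t7=2, t8=3):
    """Map a (channels, samples) sparse-channel segment to an image whose
    three planes hold DE, PSD, and asymmetry (EVI-ASI-style) features."""
    psd, de = zip(*(band_features(segment[c]) for c in (fp1, fp2, t7, t8)))
    psd, de = np.stack(psd), np.stack(de)              # (4 channels, 4 bands)
    # Asymmetry-style features from left/right channel pairs (assumed form):
    asym = np.stack([de[0] - de[1],                    # frontal DE asymmetry
                     de[2] - de[3],                    # temporal DE asymmetry
                     psd[0] - psd[1],                  # frontal PSD asymmetry
                     psd[2] - psd[3]])                 # temporal PSD asymmetry
    img = np.stack([de, psd, asym])                    # (3, 4, 4) pseudo-RGB
    mn = img.min(axis=(1, 2), keepdims=True)           # per-plane min-max
    mx = img.max(axis=(1, 2), keepdims=True)           # normalization to [0, 1]
    return (img - mn) / (mx - mn + 1e-8)

rgb = pseudo_rgb(np.random.randn(4, FS))               # one 1-s, 4-channel segment
print(rgb.shape)                                       # (3, 4, 4)
```

Each short segment thus becomes a small three-plane image, which is the kind of multi-dimensional input the abstract describes feeding to the CNN.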
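Likewise, the paper's exact [Formula: see text] attention formulation is not reproduced here; the PyTorch sketch below is an illustrative stand-in for a fast Fourier transform-based attention block, in which each feature map is reweighted by a gate learned from its frequency-domain magnitude. The squeeze-and-excitation-style gate is an assumption, not the authors' module.

```python
# Hedged sketch: FFT-based channel attention. The gate design is assumed.
import torch
import torch.nn as nn

class FFTAttention(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")    # 2-D FFT of each feature map
        mag = spec.abs().mean(dim=(-2, -1))        # per-channel spectral energy
        w = self.gate(mag).unsqueeze(-1).unsqueeze(-1)
        return x * w                               # channel-wise reweighting

feats = torch.randn(8, 32, 4, 4)                   # e.g., CNN features of a 4x4 image
print(FFTAttention(32)(feats).shape)               # torch.Size([8, 32, 4, 4])
```

A block of this shape would sit between the CNN feature extractor and the KAN classification head, letting spectral energy modulate which spatial feature maps dominate the fused representation.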

Authors

  • Fan Xiong
  • Mengzhao Fan
    Zhongyuan University of Technology, Zhengzhou, China.
  • Xu Yang
    Department of Food Science and Technology, The Ohio State University, Columbus, OH, United States.
  • Chenxiao Wang
    Zhongyuan University of Technology, Zhengzhou, China.
  • Jinli Zhou
    Zhongyuan University of Technology, Zhengzhou, China.