Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition.

Journal: Computational Intelligence and Neuroscience

Abstract

This paper proposes two multimodal fusion methods that combine brain signals and peripheral signals for emotion recognition. The input signals are the electroencephalogram (EEG) and facial expressions. The stimuli are a subset of movie clips corresponding to four specific regions of the valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, the four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, the four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is performed by decision-level fusion of the EEG and facial expression detections using either a sum rule or a product rule. Twenty healthy subjects participated in two experiments. The results show that the two multimodal fusion methods achieve accuracies of 81.25% and 82.75%, respectively, both higher than the accuracy of facial expression detection alone (74.38%) or EEG detection alone (66.88%). Combining facial expression and EEG information for emotion recognition compensates for the weaknesses of each as a single information source.
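To illustrate the decision-level fusion described above, here is a minimal sketch of sum-rule and product-rule fusion of two classifiers' outputs. It assumes each single-modality classifier (facial expression and EEG) produces a posterior probability vector over the four emotion classes; the function names and the example probability values are illustrative, not taken from the paper.

```python
import numpy as np

EMOTIONS = ["happiness", "neutral", "sadness", "fear"]

def fuse_sum(p_face, p_eeg):
    """Sum-rule fusion: add per-class probabilities, then renormalize."""
    fused = np.asarray(p_face) + np.asarray(p_eeg)
    return fused / fused.sum()

def fuse_product(p_face, p_eeg):
    """Product-rule fusion: multiply per-class probabilities, then renormalize."""
    fused = np.asarray(p_face) * np.asarray(p_eeg)
    return fused / fused.sum()

# Hypothetical posteriors from the two single-modality classifiers.
p_face = np.array([0.55, 0.20, 0.15, 0.10])  # facial expression classifier
p_eeg = np.array([0.40, 0.35, 0.15, 0.10])   # EEG classifier

for name, rule in [("sum", fuse_sum), ("product", fuse_product)]:
    fused = rule(p_face, p_eeg)
    print(name, EMOTIONS[int(np.argmax(fused))], np.round(fused, 3))
```

In both rules the final label is the class with the highest fused probability; the product rule penalizes classes that either modality rates as unlikely, while the sum rule is more tolerant of disagreement between the two classifiers.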

Authors

  • Yongrui Huang
    School of Software, South China Normal University, Guangzhou 510641, China.
  • Jianhao Yang
    School of Software, South China Normal University, Guangzhou 510641, China.
  • Pengkai Liao
    School of Software, South China Normal University, Guangzhou 510641, China.
  • Jiahui Pan
    School of Software, South China Normal University, Guangzhou 510641, China.