Distinguishing Emotional Responses to Photographs and Artwork Using a Deep Learning-Based Approach.

Journal: Sensors (Basel, Switzerland)

Abstract

Visual stimuli from photographs and artworks elicit corresponding emotional responses, but whether the emotions evoked by photographs differ from those evoked by artworks has long been difficult to establish. We address this question using electroencephalogram (EEG)-based biosignals and a deep convolutional neural network (CNN)-based emotion recognition model. We adopt Russell's emotion model, which maps emotion keywords such as happy, calm, and sad onto a coordinate system whose axes are valence and arousal. We collect photographs and artwork images matching the emotion keywords and build eighteen one-minute video clips: one clip per medium for each of nine emotion keywords. We recruited forty subjects and measured their emotional responses to the video clips. A t-test on the results shows that valence differs between photographs and artworks, while arousal does not.
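The abstract's statistical comparison can be sketched as a two-sample t-test on per-subject scores along one axis of Russell's valence-arousal plane. The sketch below is illustrative only: the keyword coordinates and the rating values are hypothetical assumptions, not the study's EEG data or results.

```python
import math

# Hypothetical placement of a few of the emotion keywords on Russell's
# circumplex (valence, arousal), each axis in [-1, 1]. These coordinates
# are assumptions for illustration, not values from the paper.
RUSSELL_COORDS = {
    "happy": (0.8, 0.5),
    "calm":  (0.6, -0.6),
    "sad":   (-0.7, -0.4),
}

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom.

    Compares the means of two independent samples without assuming
    equal variances (a common choice for between-condition ratings).
    """
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2 = va / len(a) + vb / len(b)                    # squared std. error
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / len(a)) ** 2 / (len(a) - 1)
                     + (vb / len(b)) ** 2 / (len(b) - 1))
    return t, df

# Hypothetical per-subject valence scores for the two media.
photo_valence = [6.1, 5.4, 6.8, 5.9, 6.3]
artwork_valence = [4.9, 5.1, 4.6, 5.3, 4.8]
t_stat, dof = welch_t(photo_valence, artwork_valence)
```

In practice a library routine such as `scipy.stats.ttest_ind(a, b, equal_var=False)` would also report the p-value; the manual form above just makes the computation explicit.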

Authors

  • Heekyung Yang
    Industry-Academy Cooperation Foundation, Sangmyung University, Seoul 03016, Korea. yhk775206@smu.ac.kr.
  • Jongdae Han
    Department of Computer Science, Sangmyung University, Seoul 03016, Korea. elvenwhite@smu.ac.kr.
  • Kyungha Min
    Department of Computer Science, Sangmyung University, Seoul 03016, Korea. minkh@smu.ac.kr.