Fear Detection in Multimodal Affective Computing: Physiological Signals versus Catecholamine Concentration.

Journal: Sensors (Basel, Switzerland)

Abstract

Affective computing through the monitoring of physiological signals is currently a hot topic, both in the scientific literature and in industry. Many wearable devices are being developed for health or wellness tracking during daily life or sports activity. Likewise, other applications are being proposed for the early detection of risk situations involving sexual or violent aggression, through the identification of panic or fear. The use of additional sources of information, such as video or audio signals, will make multimodal affective computing a more powerful tool for emotion classification, improving detection capability. There are other biological elements that have not yet been explored and that could provide additional information to better disentangle negative emotions such as fear or panic. Catecholamines are hormones produced by the adrenal glands, two small glands located above the kidneys; they are released into the body in response to physical or emotional stress. The main catecholamines, namely adrenaline, noradrenaline and dopamine, have been analysed, as well as four physiological variables: skin temperature, electrodermal activity, blood volume pulse (used to compute heart rate, i.e., beats per minute) and respiration rate. This work compares the results obtained from the analysis of physiological signals with those obtained from catecholamine concentrations, using an experimental task in which 21 female volunteers received audiovisual stimuli through an immersive virtual reality environment. Artificial intelligence algorithms for fear classification based on physiological variables and plasma catecholamine concentration levels have been proposed and tested. The best results were obtained with the features extracted from the physiological variables. Adding the maximum variation of catecholamine levels during the five minutes after the video clip visualization, or adding the five measurements (at 1-min intervals) of these levels, did not improve classifier performance.
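
The classification pipeline summarised above lends itself to a brief illustration. The following Python sketch is not the authors' implementation: it assumes simple statistical features (mean, standard deviation, range, slope) per physiological signal window, a random-forest classifier, synthetic data in place of the real recordings, and a single "maximum variation" catecholamine feature, only to show how catecholamine information could be appended to the physiological feature vector and compared against it.

    # Hypothetical sketch (not the study's code): fear vs. no-fear classification
    # from statistical features of physiological windows, optionally extended with
    # a catecholamine maximum-variation feature. Signal models, window lengths and
    # the classifier choice are assumptions made for illustration only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def stat_features(x):
        # Mean, standard deviation, range and linear slope of one signal window.
        slope = np.polyfit(np.arange(len(x)), x, 1)[0]
        return [x.mean(), x.std(), x.max() - x.min(), slope]

    def simulated_window(fear):
        # Synthetic stand-in for one stimulus window of the four signals
        # (skin temperature, EDA, heart rate derived from BVP, respiration rate).
        shift = 1.0 if fear else 0.0
        skin_temp  = 33.0 - 0.3 * shift + 0.1 * rng.standard_normal(60)
        eda        = 2.0 + 1.5 * shift + 0.3 * rng.standard_normal(60)
        heart_rate = 70.0 + 15.0 * shift + 3.0 * rng.standard_normal(60)
        resp_rate  = 14.0 + 4.0 * shift + 1.0 * rng.standard_normal(60)
        feats = []
        for s in (skin_temp, eda, heart_rate, resp_rate):
            feats.extend(stat_features(s))
        # Hypothetical catecholamine feature: maximum variation across the
        # five post-stimulus plasma samples (one per minute).
        catecholamine = 50.0 + 20.0 * shift + 5.0 * rng.standard_normal(5)
        return feats, catecholamine.max() - catecholamine.min(), int(fear)

    samples = [simulated_window(bool(f)) for f in rng.integers(0, 2, 200)]
    X_phys  = np.array([f for f, _, _ in samples])
    X_full  = np.column_stack([X_phys, [c for _, c, _ in samples]])
    y       = np.array([label for _, _, label in samples])

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("physiological features only      :", cross_val_score(clf, X_phys, y, cv=5).mean())
    print("physiological + catecholamine var:", cross_val_score(clf, X_full, y, cv=5).mean())

In the study itself, the features are extracted from the recorded signals and the catecholamine values come from the plasma analyses; the synthetic generator above merely stands in for that data so the sketch runs end to end.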

Authors

  • Laura Gutiérrez-Martín
    University Institute on Gender Studies, Universidad Carlos III de Madrid, 28903 Getafe, Spain.
  • Elena Romero-Perales
    University Institute on Gender Studies, Universidad Carlos III de Madrid, 28903 Getafe, Spain.
  • Clara Sainz de Baranda Andújar
    UC3M4Safety Team, Universidad Carlos III de Madrid, c/Butarque, 15, 28911 Madrid, Spain.
  • Manuel F Canabal-Benito
    UC3M4Safety Team, Universidad Carlos III de Madrid, c/Butarque, 15, 28911 Madrid, Spain.
  • Gema Esther Rodríguez-Ramos
    UC3M4Safety Team, Universidad Carlos III de Madrid, c/Butarque, 15, 28911 Madrid, Spain.
  • Rafael Toro-Flores
Fundación para la Investigación Biomédica del Hospital Universitario Príncipe de Asturias, Ctra. Alcalá-Meco s/n, 28805 Madrid, Spain.
  • Susana López-Ongil
Fundación para la Investigación Biomédica del Hospital Universitario Príncipe de Asturias, Ctra. Alcalá-Meco s/n, 28805 Madrid, Spain.
  • Celia López-Ongil
    University Institute on Gender Studies, Universidad Carlos III de Madrid, 28903 Getafe, Spain.