Kids' Emotion Recognition Using Various Deep-Learning Models with Explainable AI.

Journal: Sensors (Basel, Switzerland)
Published Date:

Abstract

Human ideas and sentiments are mirrored in facial expressions, which give an observer a wealth of social cues, such as the viewer's focus of attention, intention, motivation, and mood; these cues can help develop better interactive solutions on online platforms. This could be particularly helpful when teaching children, cultivating a better interactive connection between teachers and students, since the COVID-19 pandemic has accelerated the trend toward online education. To address this, the authors propose kids' emotion recognition based on visual cues, together with a justified reasoning model built on explainable AI. Two datasets were used: the LIRIS Children Spontaneous Facial Expression Video Database and a novel dataset, created by the authors, of emotions displayed by children aged 7 to 10. Prior work on the LIRIS dataset achieved only 75% accuracy, and no subsequent study had improved on it; here the authors achieve a highest accuracy of 89.31% on LIRIS and 90.98% on their own dataset. The authors also observed that children's facial structure differs from that of adults, and that children do not always express a given emotion with the same facial configuration as adults do. Hence, the authors used 468 3D landmark points to create two mesh versions of the selected datasets, LIRIS-Mesh and Authors-Mesh. In total, four dataset variants were used, namely LIRIS, the authors' dataset, LIRIS-Mesh, and Authors-Mesh, and a comparative analysis was performed using seven different CNN models.
The authors not only compared all dataset variants across the different CNN models but also used explainable artificial intelligence (XAI) to show, for every combination of CNN model and dataset variant, how test images are perceived by the deep-learning models, localizing the features that contribute to particular emotions. Three XAI methods were used, namely Grad-CAM, Grad-CAM++, and SoftGrad, which help users establish the reasoning behind a detected emotion by revealing the contribution of individual facial features.
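To make the XAI step concrete, the core Grad-CAM computation can be sketched in a few lines: the gradients of the class score with respect to the last convolutional layer's feature maps are global-average-pooled into one weight per channel, the activation maps are combined with those weights, and a ReLU keeps only positively contributing regions. The sketch below uses synthetic numpy tensors in place of a real CNN's activations and gradients, so the function names and shapes are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM heatmap computation (illustrative sketch).

    activations: (H, W, K) feature maps from the last conv layer
    gradients:   (H, W, K) gradients of the class score w.r.t. those maps
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Global-average-pool the gradients: one importance weight per channel
    weights = gradients.mean(axis=(0, 1))                 # shape (K,)
    # Weighted sum of activation maps, then ReLU to keep positive evidence
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)
    # Normalize to [0, 1] for visualization (guard against all-zero maps)
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: synthetic tensors standing in for a CNN's conv outputs
rng = np.random.default_rng(0)
acts = rng.random((7, 7, 64))
grads = rng.standard_normal((7, 7, 64))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

In practice the heatmap is upsampled to the input image size and overlaid on the face, which is how the localization of emotion-relevant features described above is visualized; Grad-CAM++ and SoftGrad refine how the per-channel weights are derived.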

Authors

  • Manish Rathod
    Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International University (Deemed University), Pune 412115, India.
  • Chirag Dalvi
    Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International University (Deemed University), Pune 412115, India.
  • Kulveen Kaur
    Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International University (Deemed University), Pune 412115, India.
  • Shruti Patil
    Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India.
  • Shilpa Gite
    Symbiosis Institute of Technology, Engineering, Pune, India.
  • Pooja Kamat
    Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International University (Deemed University), Pune 412115, India.
  • Ketan Kotecha
    Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International (Deemed University), Pune, India.
  • Ajith Abraham
    Machine Intelligence Research Labs, Auburn, USA.
  • Lubna Abdelkareim Gabralla
    Department of Computer Science and Information Technology, College of Applied, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia.