Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models.

Journal: Sensors (Basel, Switzerland)

Abstract

This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performance across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild, such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section of this paper discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches for studying the contextual influences on emotions, leading to the development of contextualized emotion models.

Authors

  • Yifan Bian
    Department of Experimental Psychology, University College London, London WC1H 0AP, UK.
  • Dennis Küster
    Department of Mathematics and Computer Science.
  • Hui Liu
    Institute of Urology and Nephrology, The First Affiliated Hospital of Guangxi Medical University, Nanning, China.
  • Eva G Krumhuber
    Department of Experimental Psychology.