Do Deepfakes Adequately Display Emotions? A Study on Deepfake Facial Emotion Expression.

Journal: Computational Intelligence and Neuroscience

Abstract

Recent technological advances in Artificial Intelligence make it easy to create deepfakes: hyper-realistic videos in which images and video clips are processed to produce fake footage that appears authentic. Many deepfakes are based on swapping faces without the consent of the person whose appearance and voice are used. As emotions are inherent in human communication, it is relevant to study how deepfakes transfer emotional expressions from original videos to fakes. In this work, we conduct an in-depth study of facial emotional expression in deepfakes using a well-known face-swap-based deepfake database. First, we extracted the frames from its videos. Then, we analyzed the emotional expression in the original and faked versions of the video recordings for all performers in the database. Results show that emotional expressions are not adequately transferred between original recordings and the deepfakes created from them. The high variability in emotions and performers detected between original and fake recordings indicates that performer emotional expressiveness should be considered for better deepfake generation or detection.

Authors

  • Juan-Miguel López-Gil
    University of the Basque Country (UPV/EHU), Paseo de Manuel Lardizabal 1, 20018 Donostia-San Sebastián, Spain. juanmiguel.lopez@ehu.eus.
  • Rosa Gil
    Department of Computer Science and Engineering, Universitat de Lleida, Lleida, Spain.
  • Roberto García
    Department of Computer Science and Engineering, Universitat de Lleida, Lleida, Spain.