SmartDetector: Automatic and vision-based approach to point-light display generation for human action perception.

Journal: Behavior Research Methods
PMID:

Abstract

Over the past four decades, point-light displays (PLDs) have been integrated into psychology and psychophysics, providing a valuable means to probe human perceptual skills. Leveraging the inherent kinematic information and controllable display parameters, researchers have used this technique to examine the mechanisms involved in learning and rehabilitation. However, classical PLD generation methods (e.g., motion capture) are difficult to apply to behavior analysis in real-world situations, such as patient care or sports activities. There is therefore a demand for automated and affordable tools that enable efficient, real-world-compatible generation of PLDs for psychological research. In this paper, we propose SmartDetector, a new artificial intelligence (AI)-based tool for creating PLDs automatically from RGB videos. To evaluate human perceptual skills for processing PLDs built with SmartDetector, 126 participants were randomly assigned to recognition, discrimination, or detection tasks. Results demonstrated that, irrespective of the task, PLDs generated by SmartDetector elicited accuracy and response times comparable to those reported in the literature. Moreover, to enhance usability and broaden accessibility, we developed an intuitive web interface for our method, making it available to a wider audience. The resulting application is available at https://plavimop.prd.fr/index.php/en/automatic-creation-pld. SmartDetector opens interesting possibilities for using PLDs in research and makes PLDs more accessible for nonacademic applications.
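
For readers who want a concrete picture of what "automatic PLD creation from RGB videos" involves, the sketch below shows one way the idea can be realized with off-the-shelf tools: estimate joint positions frame by frame with a pose-estimation model, then render the joints as white dots on a black background. This is a minimal illustration under stated assumptions, not the authors' implementation: MediaPipe Pose is assumed as the landmark detector, OpenCV handles video I/O, and the function name video_to_pld, the 0.5 visibility threshold, and the dot radius are all illustrative choices.

    import cv2
    import mediapipe as mp
    import numpy as np

    def video_to_pld(in_path: str, out_path: str, dot_radius: int = 4) -> None:
        """Render a point-light display: white joint dots on a black background."""
        cap = cv2.VideoCapture(in_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (width, height))
        with mp.solutions.pose.Pose(static_image_mode=False) as pose:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                # MediaPipe expects RGB input; OpenCV reads frames as BGR.
                result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                canvas = np.zeros((height, width, 3), dtype=np.uint8)
                if result.pose_landmarks:
                    for lm in result.pose_landmarks.landmark:
                        if lm.visibility > 0.5:  # skip occluded/uncertain joints
                            cv2.circle(canvas,
                                       (int(lm.x * width), int(lm.y * height)),
                                       dot_radius, (255, 255, 255), -1)
                writer.write(canvas)
        cap.release()
        writer.release()

Calling video_to_pld("walking.mp4", "walking_pld.mp4") would produce a dot-only video with the same size and frame rate as the input. A complete tool such as SmartDetector would additionally have to address temporal smoothing, occlusions, and multi-person scenes, which this sketch deliberately omits.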

Authors

  • Christel Bidet-Ildei
    Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Centre de Recherches sur la Cognition et l'Apprentissage (UMR 7295), Poitiers, France. christel.bidet@univ-poitiers.fr.
  • Olfa BenAhmed
    XLIM Research Institute, UMR CNRS 7252, University of Poitiers, Poitiers, France.
  • Diaddin Bouidaine
    XLIM Research Institute, UMR CNRS 7252, University of Poitiers, Poitiers, France.
  • Victor Francisco
    CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France.
  • Arnaud Decatoire
    ISAE-ENSMA, CNRS, PPRIME, Université de Poitiers, Poitiers, France.
  • Yannick Blandin
    CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France.
  • Jean Pylouster
    CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France.
  • Christine Fernandez-Maloigne
XLIM-ICONES, UMR CNRS 7252, Université de Poitiers, France; Laboratoire commun CNRS/SIEMENS I3M, Poitiers, France.