A Deep Learning Framework for Recognizing Both Static and Dynamic Gestures

Journal: Sensors (Basel, Switzerland)
Published Date:

Abstract

Intuitive user interfaces are indispensable for interacting with human-centric smart environments. In this paper, we propose a unified framework that recognizes both static and dynamic gestures using simple RGB vision, without depth sensing. This feature makes it suitable for inexpensive human-robot interaction in social or industrial settings. We employ a pose-driven spatial attention strategy, which guides our proposed static and dynamic gesture recognition network. From an image of the human upper body, we estimate the person's depth, along with the regions of interest around the hands. The Convolutional Neural Network (CNN) in our framework is fine-tuned on a background-substituted hand gesture dataset. It is used to detect 10 static gestures for each hand and to obtain the hand image-embeddings. These are subsequently fused with the augmented pose vector and then passed to stacked Long Short-Term Memory (LSTM) blocks. Thus, human-centred frame-wise information from the augmented pose vector and from the left/right hand image-embeddings is aggregated in time to predict the dynamic gestures of the performing person. In a number of experiments, we show that the proposed approach surpasses the state-of-the-art results on a large-scale dataset. Moreover, we transfer the knowledge learned through the proposed methodology to a second dataset, and the obtained results also exceed the state of the art on this dataset.
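The frame-wise fusion described in the abstract can be sketched at a shape level as follows. This is a minimal illustrative stand-in, not the paper's implementation: the dimensions are toy values, and mean-pooling over time substitutes for the stacked LSTM blocks; the CNN hand embeddings and the augmented pose vector are represented here by plain lists.

```python
from typing import List

def fuse_frame(pose_vec: List[float],
               left_hand_emb: List[float],
               right_hand_emb: List[float]) -> List[float]:
    """Concatenate the augmented pose vector with both hand embeddings
    into one per-frame feature (the input fed to the temporal model)."""
    return pose_vec + left_hand_emb + right_hand_emb

def aggregate_sequence(frames: List[List[float]]) -> List[float]:
    """Aggregate fused per-frame features over time.
    Mean-pooling is used here only as a simple stand-in for the
    stacked LSTM blocks of the actual framework."""
    n = len(frames)
    dim = len(frames[0])
    return [sum(f[i] for f in frames) / n for i in range(dim)]

# Toy dimensions: a 4-D pose vector and 3-D embeddings per hand,
# over a 5-frame clip.
frames = [fuse_frame([0.1] * 4, [0.2] * 3, [0.3] * 3) for _ in range(5)]
video_feature = aggregate_sequence(frames)
assert len(video_feature) == 10  # 4 (pose) + 3 (left) + 3 (right)
```

In the actual framework, the fused per-frame vector would be fed frame by frame into the recurrent stack, whose final state drives the dynamic-gesture classifier; the static-gesture labels come directly from the per-hand CNN.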

Authors

  • Osama Mazhar
    LIRMM, Université de Montpellier, CNRS, 34392 Montpellier, France.
  • Sofiane Ramdani
    LIRMM, Université de Montpellier, CNRS, 34392 Montpellier, France.
  • Andrea Cherubini