Convolutional neural network for gesture recognition human-computer interaction system design.

Journal: PLoS One
PMID:

Abstract

Gesture interaction applications have garnered significant attention from researchers in the field of human-computer interaction because of their convenience and intuitiveness. To address the limited feature extraction capability of existing network models, which lowers gesture recognition accuracy and increases model inference time, this paper introduces a gesture recognition algorithm based on an enhanced MobileNet network. The design incorporates a multi-scale convolutional module to extract low-level features, strengthening the network's feature extraction capability, and adopts the exponential linear unit (ELU) activation function so that negative feature information is better preserved. Empirical results show that the approach surpasses the accuracy of most lightweight network models on publicly available datasets while maintaining real-time gesture interaction. The proposed model attains accuracies of 92.55% and 88.41% on the NUS-II and Creative Senz3D datasets, respectively, and 98.26% on the ASL-M dataset.
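
To illustrate the kind of design the abstract describes, the following PyTorch sketch shows a multi-scale convolutional block with ELU activations that could serve as the low-level feature extractor in a MobileNet-style backbone. The kernel sizes, channel widths, and layer layout are assumptions chosen for illustration; they are not taken from the paper.

```python
# Minimal sketch of a multi-scale convolutional module with ELU activation.
# Kernel sizes (3/5/7), channel counts, and the 1x1 projection are assumptions,
# not details reported in the abstract.
import torch
import torch.nn as nn


class MultiScaleConvBlock(nn.Module):
    """Extracts low-level features at several receptive-field sizes in parallel,
    then fuses them, mirroring the multi-scale idea described in the abstract."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        branch_channels = out_channels // 3
        # Parallel branches with different (assumed) kernel sizes.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, branch_channels, kernel_size=k,
                          padding=k // 2, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ELU(inplace=True),  # ELU retains negative-valued feature information
            )
            for k in (3, 5, 7)
        ])
        # 1x1 convolution to fuse the concatenated branches to the requested width.
        self.project = nn.Conv2d(3 * branch_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.project(features)


if __name__ == "__main__":
    # Example: a 224x224 RGB gesture image passed through the block.
    block = MultiScaleConvBlock(in_channels=3, out_channels=32)
    out = block(torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 32, 224, 224])
```

In practice such a block would replace or precede the first standard convolution of a MobileNet backbone, with the rest of the network left unchanged; that placement is likewise an assumption for illustration.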

Authors

  • Peixin Niu
    Design Department, Taiyuan Normal University, Jinzhong, Shanxi, China.