Sign language recognition based on dual-channel star-attention convolutional neural network.

Journal: Scientific Reports
Published Date:

Abstract

To improve communication between individuals with hearing impairments and those without, researchers have developed a variety of sign language recognition technologies. In practical applications, however, sign language recognition devices must balance portability, energy consumption, cost, and user comfort, while vision-based sign language recognition must also contend with model stability. To address these challenges, this study proposes an economical and stable dual-channel star-attention convolutional neural network (SACNN), a deep learning model based on computer vision. The model employs a star attention mechanism to enhance gesture features while suppressing background features, thereby isolating the gesture information relevant to recognition. On the "ASL Finger Spelling" dataset, the model achieved an accuracy of 99.81%. Experimental results indicate that, compared with existing approaches, the proposed SACNN exhibits superior generalization performance. Source code is available at https://github.com/wang123c/Sign-Language-Recognition.
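The core idea the abstract describes, an attention mechanism that amplifies gesture features while suppressing background features, can be illustrated with a minimal NumPy sketch. This is a generic attention-gating example, not the authors' SACNN implementation (their actual code is at the GitHub link above); the function name and toy values are hypothetical.

```python
import numpy as np

def attention_gate(features, attn_logits):
    """Element-wise attention gating: a sigmoid mask in [0, 1]
    preserves salient (gesture) activations and attenuates
    background activations."""
    mask = 1.0 / (1.0 + np.exp(-attn_logits))  # sigmoid
    return features * mask

# Toy 2x2 feature map: the top row represents gesture regions
# (high attention logits), the bottom row background (low logits).
feats = np.array([[0.9, 0.8],
                  [0.1, 0.2]])
logits = np.array([[4.0, 4.0],
                   [-4.0, -4.0]])
gated = attention_gate(feats, logits)
# Gesture activations stay close to their original values,
# while background activations are driven toward zero.
```

In a real network such as the paper's dual-channel design, the mask would be learned from the input rather than supplied by hand; the gating step itself works the same way.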

Authors

  • Jing Qin
    School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China.
  • Mengjiao Wang
    Greenpeace Research Laboratories, Innovation Centre Phase 2, University of Exeter, Exeter, United Kingdom.

Keywords

No keywords available for this article.