AIMC Topic: Sign Language

Showing 1 to 10 of 54 articles

Low-cost computation for isolated sign language video recognition with multiple reservoir computing.

PloS one
Sign language recognition (SLR) has the potential to bridge communication gaps and empower hearing-impaired communities. To ensure the portability and accessibility of the SLR system, its implementation on a portable, server-independent device become...
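
For orientation only: a minimal sketch of the reservoir-computing idea the title refers to (an echo state network with a fixed random reservoir and a trained linear readout). The feature size, reservoir size, and ridge-regression readout below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES, N_CLASSES = 63, 300, 10             # assumed: 21 hand keypoints x 3 coords per frame

# Fixed random input and recurrent weights; only the readout is ever trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def reservoir_state(seq):
    """Run one video's frame features (T, N_IN) through the reservoir; the final
    state serves as a cheap fixed-length embedding of the whole sign."""
    x = np.zeros(N_RES)
    for u in seq:
        x = np.tanh(W_in @ u + W @ x)
    return x

def fit_readout(states, labels, reg=1e-2):
    """Ridge-regression readout: the only trained component, which is what keeps
    the computation low-cost compared with end-to-end backpropagation."""
    X = np.asarray(states)
    Y = np.eye(N_CLASSES)[np.asarray(labels)]
    return np.linalg.solve(X.T @ X + reg * np.eye(N_RES), X.T @ Y)

def predict(W_out, seq):
    return int(np.argmax(reservoir_state(seq) @ W_out))
```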

Self-Assembly MXene/PDA@Cotton Fabric Pressure Sensor Integrated with Deep Learning for Sign Language Recognition.

ACS applied materials & interfaces
In recent years, smart textiles and flexible wearable products have garnered significant attention in fields such as human-computer interaction, medical rehabilitation training, and motion monitoring. Flexible pressure sensors have attracted signific...

A novel model for expanding horizons in sign Language recognition.

Scientific reports
The American Sign Language Recognition Dataset and the Sign-Language MNIST Dataset are pivotal resources for research on visual-gestural languages, particularly American Sign Language. The dataset contains over 64,000 images meticulously labeled with the correspon...
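
As a rough illustration of how the Sign-Language MNIST portion is typically used, here is a minimal sketch of a small CNN classifier, assuming the usual 28x28 grayscale format and 24 static letter classes (J and Z require motion). The architecture is an assumption for illustration, not the model proposed in the article.

```python
import torch
import torch.nn as nn

class SignMNISTNet(nn.Module):
    def __init__(self, num_classes: int = 24):   # assumed class count for static letters
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)  # 28 -> 14 -> 7 after pooling

    def forward(self, x):                         # x: (batch, 1, 28, 28)
        return self.classifier(self.features(x).flatten(1))

logits = SignMNISTNet()(torch.randn(8, 1, 28, 28))  # -> (8, 24)
```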

Gesture recognition for hearing impaired people using an ensemble of deep learning models with improving beluga whale optimization-based hyperparameter tuning.

Scientific reports
Sign language (SL) is the language of speech- and hearing-impaired individuals. Hand gestures are the primary modality employed in SL by speech- and hearing-challenged people to communicate with one another and with hearing persons. At present, hand gesture dete...
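
For context, a minimal sketch of hyperparameter tuning for such an ensemble. A plain random search stands in here for the improved beluga whale optimization named in the title; the search space and the evaluate() stub are purely illustrative.

```python
import random

SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-2),
    "dropout": (0.1, 0.5),
    "hidden_units": [128, 256, 512],
}

def sample_config():
    """Draw one candidate hyperparameter configuration from the assumed space."""
    return {
        "learning_rate": random.uniform(*SEARCH_SPACE["learning_rate"]),
        "dropout": random.uniform(*SEARCH_SPACE["dropout"]),
        "hidden_units": random.choice(SEARCH_SPACE["hidden_units"]),
    }

def evaluate(config):
    """Placeholder: train the ensemble with `config` and return validation accuracy."""
    return random.random()  # stands in for a real training/validation run

best = max((sample_config() for _ in range(20)), key=evaluate)
print("best config found:", best)
```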

Attention-based hybrid deep learning model with CSFOA optimization and G-TverskyUNet3+ for Arabic sign language recognition.

Scientific reports
Arabic sign language (ArSL) is a visual-manual language that facilitates communication among Deaf people in Arabic-speaking nations. Recognizing ArSL is crucial for a variety of reasons, including its impact on the Deaf populace, education,...

Recognizing American Sign Language gestures efficiently and accurately using a hybrid transformer model.

Scientific reports
Gesture recognition plays a vital role in computer vision, especially for interpreting sign language and enabling human-computer interaction. Many existing methods struggle with challenges like heavy computational demands, difficulty in understanding...
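
As a rough sketch of the transformer side of such a hybrid model: a small transformer encoder over per-frame hand-keypoint features with a mean-pooled classification head. All dimensions are assumptions; the article's actual hybrid design is not reproduced here.

```python
import torch
import torch.nn as nn

class KeypointTransformer(nn.Module):
    def __init__(self, in_dim=63, d_model=128, num_classes=26):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                 # x: (batch, frames, in_dim)
        h = self.encoder(self.proj(x))    # contextualize frames with self-attention
        return self.head(h.mean(dim=1))   # mean-pool over time, then classify

logits = KeypointTransformer()(torch.randn(4, 30, 63))  # -> (4, 26)
```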

Airbag-like Comb Flexible Pressure Sensor and Its Wearable Applications.

ACS applied materials & interfaces
Flexible wearable devices demonstrate immense potential in healthcare and human-computer interaction, yet the development of high-performance flexible pressure sensors for these applications remains a pressing technical challenge. Inspired by the str...

ASLDetect: Arabic sign language detection using ResNet and U-Net like component.

Scientific reports
Sign languages are essential for communication among over 430 million deaf and hard-of-hearing individuals worldwide. However, recognizing Arabic Sign Language (ArSL) in real-world settings remains challenging due to issues like background noise, lig...
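
For illustration, a minimal sketch pairing a ResNet-style residual block with a U-Net-like skip connection, loosely echoing the components named in the title (e.g., for hand-region segmentation). Channel counts and depth are assumptions, not the ASLDetect architecture.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.conv(x))  # identity shortcut (the ResNet idea)

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, padding=1)
        self.enc = ResBlock(32)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = ResBlock(32)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = nn.Conv2d(64, 1, 3, padding=1)  # 64 = upsampled + skip channels

    def forward(self, x):
        e = self.enc(self.stem(x))
        b = self.up(self.bottleneck(self.down(e)))
        return self.dec(torch.cat([b, e], dim=1))  # U-Net-like skip concatenation

mask_logits = TinySegNet()(torch.randn(1, 3, 64, 64))  # -> (1, 1, 64, 64)
```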

Spanish to Mexican Sign Language glosses corpus for natural language processing tasks.

Scientific data
This work shares a dataset containing pairs of Spanish (SPA) sentences and Mexican Sign Language (MSL) glosses (transcribed MSL) for a downstream task. The methodology used to prepare the shared dataset considered the construction of SPA-to-MSL co...
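
As a sketch of how such a parallel corpus might be consumed for a downstream text-to-gloss task, assuming a simple tab-separated layout with one Spanish sentence and its gloss sequence per line; the file name and column order are assumptions, not the dataset's documented format.

```python
import csv

def load_parallel_corpus(path="spa_msl_glosses.tsv"):
    """Yield (spanish_sentence, gloss_tokens) pairs from an assumed two-column TSV file."""
    with open(path, encoding="utf-8", newline="") as fh:
        for spa, glosses in csv.reader(fh, delimiter="\t"):
            yield spa.strip(), glosses.strip().split()

# Typical downstream use: feed the pairs to a sequence-to-sequence text-to-gloss model.
# for spa_sentence, gloss_tokens in load_parallel_corpus():
#     ...
```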

Real-Time American Sign Language Interpretation Using Deep Learning and Keypoint Tracking.

Sensors (Basel, Switzerland)
Communication barriers pose significant challenges for the Deaf and Hard-of-Hearing (DHH) community, limiting their access to essential services, social interactions, and professional opportunities. To bridge this gap, assistive technologies leveragi...
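
For orientation, a minimal sketch of the keypoint-tracking front end such systems typically use, assuming OpenCV for webcam capture and MediaPipe Hands for landmarks; the downstream classify_keypoints() call is a hypothetical placeholder, not the article's model.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames in BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        keypoints = [(p.x, p.y, p.z) for p in lm]   # 21 normalized (x, y, z) points
        # The keypoints would then go to a gesture classifier, e.g.:
        # label = classify_keypoints(keypoints)      # hypothetical helper
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```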