Deep learning-based eye sign communication system for people with speech impairments.
Journal:
Disability and Rehabilitation: Assistive Technology
Published Date:
Jul 20, 2025
Abstract
People with motor difficulties and speech impairments often struggle to communicate their needs and views. Augmentative and Alternative Communication (AAC) offers solutions through gestures, body language, or specialized equipment; for some individuals, however, eye gaze and eye signs remain the only available communication channel. While existing eye-gaze devices leverage deep learning, their pre-calibration techniques can be unreliable and sensitive to lighting conditions, and research into eye sign-based communication is still at an early stage.

In this research, we propose an eye sign-based communication system that operates on deep learning principles and accepts eye sign patterns from speech-impaired or paraplegic individuals via a standard webcam. The system converts the eye signs into alphabets, words, or sentences and displays the resulting text on screen; in addition, it provides a vocal prompt for the user and the caretaker. It functions effectively under varied lighting conditions without requiring calibration and integrates a text prediction function for user convenience.

Impact: Experiments conducted with participants aged between 18 and 35 years yielded average accuracy rates of 98%, 99%, and 99% for alphabet, word, and sentence formation, respectively. These results demonstrate the system's robustness and its potential to significantly benefit individuals with speech impairments.
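The article itself includes no code; the following Python sketch is only a rough illustration of the kind of pipeline the abstract describes (webcam capture, deep-learning eye-sign classification, sign-to-letter decoding, on-screen text, and a vocal prompt), using OpenCV, Keras, and pyttsx3. The model file "eye_sign_model.h5", the class labels, and the sign-to-letter table are hypothetical placeholders, not the authors' design.

    # Illustrative sketch of an eye-sign-to-text pipeline (not the paper's code).
    import cv2                                     # webcam capture, eye detection
    import numpy as np
    import pyttsx3                                 # offline text-to-speech prompt
    from tensorflow.keras.models import load_model

    # Assumption: a classifier trained elsewhere that maps a 24x24 grayscale
    # eye crop to one of a few example eye-sign classes.
    model = load_model("eye_sign_model.h5")        # hypothetical model file
    CLASSES = ["open", "closed", "left", "right"]  # hypothetical label set

    # Assumption: a toy code table mapping eye-sign sequences to letters,
    # standing in for whatever coding scheme the system actually uses.
    SIGN_TO_LETTER = {("closed", "left"): "A", ("closed", "right"): "B"}

    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")  # stock OpenCV cascade
    tts = pyttsx3.init()

    def classify_eye(frame):
        """Detect the first eye region in a BGR frame and classify its sign."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(eyes) == 0:
            return None
        x, y, w, h = eyes[0]
        crop = cv2.resize(gray[y:y + h, x:x + w], (24, 24)) / 255.0
        probs = model.predict(crop.reshape(1, 24, 24, 1), verbose=0)[0]
        return CLASSES[int(np.argmax(probs))]

    cap = cv2.VideoCapture(0)          # standard webcam, no calibration step
    sequence, text, prev = [], "", None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        sign = classify_eye(frame)
        # Record a sign only on a state change, so one held sign is not
        # appended once per frame.
        if sign and sign != "open" and sign != prev:
            sequence.append(sign)
        prev = sign
        if tuple(sequence) in SIGN_TO_LETTER:
            letter = SIGN_TO_LETTER[tuple(sequence)]
            text += letter
            tts.say(letter)            # vocal prompt for user and caretaker
            tts.runAndWait()
            sequence = []
        elif len(sequence) > 2:        # no code matches: discard and restart
            sequence = []
        cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    1, (0, 255, 0), 2)
        cv2.imshow("Eye-sign communicator", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

The state-change debounce and the sequence reset are design choices of this sketch, not details reported in the abstract; the paper's text prediction feature would sit on top of the decoded letter stream and is omitted here.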