AIMC Topic: Speech Intelligibility

Showing 1 to 10 of 41 articles

End-to-end feature fusion for jointly optimized speech enhancement and automatic speech recognition.

Scientific reports
Speech enhancement (SE) and automatic speech recognition (ASR) in real-time processing involve improving the quality and intelligibility of speech signals on the fly, ensuring accurate transcription as the speech unfolds. SE eliminates unwanted backg...

Endpoint-aware audio-visual speech enhancement utilizing dynamic weight modulation based on SNR estimation.

Neural networks : the official journal of the International Neural Network Society
Integrating visual features has been proven effective for deep learning-based speech quality enhancement, particularly in highly noisy environments. However, these models may suffer from redundant information, resulting in performance deterioration w...

Deep learning restores speech intelligibility in multi-talker interference for cochlear implant users.

Scientific reports
Cochlear implants (CIs) do not offer the same level of effectiveness in noisy environments as in quiet settings. Current single-microphone noise reduction algorithms in hearing aids and CIs only remove predictable, stationary noise, and are ineffecti...

Deep causal speech enhancement and recognition using efficient long-short term memory Recurrent Neural Network.

PloS one
Long short-term memory (LSTM) has been effectively used to represent sequential data in recent years. However, LSTM still struggles with capturing the long-term temporal dependencies. In this paper, we propose an hourglass-shaped LSTM that is able to...

Optical Microphone-Based Speech Reconstruction System With Deep Learning for Individuals With Hearing Loss.

IEEE transactions on bio-medical engineering
OBJECTIVE: Although many speech enhancement (SE) algorithms have been proposed to promote speech perception in hearing-impaired patients, the conventional SE approaches that perform well under quiet and/or stationary noises fail under nonstationary n...

Restoring speech intelligibility for hearing aid users with deep learning.

Scientific reports
Almost half a billion people worldwide suffer from disabling hearing loss. While hearing aids can partially compensate for this, a large proportion of users struggle to understand speech in situations with background noise. Here, we present a deep l...

Regional Language Speech Recognition from Bone-Conducted Speech Signals through Different Deep Learning Architectures.

Computational intelligence and neuroscience
A bone-conducted microphone (BCM) senses vibrations from the bones of the skull during speech and converts them into an electrical audio signal. When transmitting speech signals, bone-conduction microphones (BCMs) capture speech signals based on the vibrations of the speaker's s...

Speech Vision: An End-to-End Deep Learning-Based Dysarthric Automatic Speech Recognition System.

IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society
Dysarthria is a disorder that affects an individual's speech intelligibility due to the paralysis of muscles and organs involved in the articulation process. As the condition is often associated with physically debilitating disabilities, not only do ...

Speech synthesis from neural decoding of spoken sentences.

Nature
Technology that translates neural activity into speech would be transformative for people who are unable to communicate as a result of neurological impairments. Decoding speech from neural activity is challenging because speaking requires very precis...

Automatic prediction of intelligible speaking rate for individuals with ALS from speech acoustic and articulatory samples.

International journal of speech-language pathology
This research aimed to automatically predict intelligible speaking rate for individuals with Amyotrophic Lateral Sclerosis (ALS) based on speech acoustic and articulatory samples. Twelve participants with ALS and two normal subjects produced a tot...