AIMC Topic: Speech Intelligibility

Showing 11 to 20 of 41 articles

Representation Learning Based Speech Assistive System for Persons With Dysarthria.

IEEE Transactions on Neural Systems and Rehabilitation Engineering: a publication of the IEEE Engineering in Medicine and Biology Society
An assistive system for persons with vocal impairment due to dysarthria converts dysarthric speech to normal speech or text. Because of these articulatory deficits, dysarthric speech recognition needs a robust learning technique. Representation learnin...

Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces.

PLoS Computational Biology
Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligi...

Explicit estimation of magnitude and phase spectra in parallel for high-quality speech enhancement.

Neural Networks: the official journal of the International Neural Network Society
Phase information has a significant impact on speech perceptual quality and intelligibility. However, existing speech enhancement methods encounter limitations in explicit phase estimation due to the non-structural nature and wrapping characteristics...
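The wrapping characteristic this abstract refers to can be illustrated with a minimal NumPy sketch (illustrative only, not code from the article): the observable phase of a complex spectrum is confined to (-π, π], so a smoothly growing true phase appears as a sawtooth that a model must implicitly unwrap.

```python
import numpy as np

# True (unwrapped) phase grows linearly with time/frequency index.
true_phase = np.linspace(0, 6 * np.pi, 25)

# The phase actually observable from a complex spectrum is wrapped
# into (-pi, pi], which is the non-structural target the abstract
# says makes explicit phase estimation hard.
wrapped = np.angle(np.exp(1j * true_phase))

# With sufficiently small steps (< pi between samples), the original
# phase can be recovered by unwrapping.
recovered = np.unwrap(wrapped)
```

Here the step between samples is π/4, small enough for `np.unwrap` to recover the original trajectory; real enhancement systems cannot rely on that assumption, which motivates estimating phase explicitly.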

Neural-WDRC: A Deep Learning Wide Dynamic Range Compression Method Combined With Controllable Noise Reduction for Hearing Aids.

Trends in Hearing
Wide dynamic range compression (WDRC) and noise reduction both play important roles in hearing aids. WDRC provides level-dependent amplification so that the level of sound produced by the hearing aid falls between the hearing threshold and the highes...
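The level-dependent amplification described here can be sketched as a static compression curve: linear below a knee point, with output growing only 1/ratio dB per input dB above it. This is a hypothetical minimal sketch; the knee point and compression ratio are illustrative values, not parameters from the article.

```python
import numpy as np

def wdrc_gain_db(input_db, knee_db=45.0, ratio=3.0):
    """Static WDRC input/output curve, returned as applied gain in dB.

    Below the knee the gain is 0 dB (linear); above it, each extra
    input dB yields only 1/ratio dB of extra output, so louder sounds
    are attenuated toward the listener's comfortable range.
    """
    excess = np.maximum(input_db - knee_db, 0.0)
    output_db = input_db - excess * (1.0 - 1.0 / ratio)
    return output_db - input_db

# Example: with a 45 dB knee and 3:1 ratio, a 75 dB input is
# compressed toward 55 dB output (a -20 dB applied gain).
```

Real hearing-aid WDRC is applied per frequency channel with attack/release time constants; this sketch shows only the static level mapping that the noise-reduction stage in the article interacts with.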

Speech intelligibility prediction based on a physiological model of the human ear and a hierarchical spiking neural network.

The Journal of the Acoustical Society of America
A speech intelligibility (SI) prediction model is proposed that includes an auditory preprocessing component based on the physiological anatomy and activity of the human ear, a hierarchical spiking neural network, and a decision back-end processing b...

Using deep learning to improve the intelligibility of a target speaker in noisy multi-talker environments for people with normal hearing and hearing loss.

The Journal of the Acoustical Society of America
Understanding speech in noisy environments is a challenging task, especially in communication situations with several competing speakers. Despite their ongoing improvement, assistive listening devices and speech processing approaches still do not per...

Recovering speech intelligibility with deep learning and multiple microphones in noisy-reverberant situations for people using cochlear implants.

The Journal of the Acoustical Society of America
For cochlear implant (CI) listeners, holding a conversation in noisy and reverberant environments is often challenging. Deep-learning algorithms can potentially mitigate these difficulties by enhancing speech in everyday listening environments. This ...

On phase recovery and preserving early reflections for deep-learning speech dereverberation.

The Journal of the Acoustical Society of America
In indoor environments, reverberation often distorts clean speech. Although deep learning-based speech dereverberation approaches have shown much better performance than traditional ones, the inferior speech quality of the dereverberated speech cause...

Progress made in the efficacy and viability of deep-learning-based noise reduction.

The Journal of the Acoustical Society of America
Recent years have brought considerable advances to our ability to increase intelligibility through deep-learning-based noise reduction, especially for hearing-impaired (HI) listeners. In this study, intelligibility improvements resulting from a curre...