AIMC Topic: Speech Perception

Showing 11 to 20 of 114 articles

Machine Learning Recognizes Frequency-Following Responses in American Adults: Effects of Reference Spectrogram and Stimulus Token.

Perceptual and motor skills
Electrophysiological research has been widely utilized to study brain responses to acoustic stimuli. The frequency-following response (FFR), a non-invasive reflection of how the brain encodes acoustic stimuli, is a particularly propitious electrophys...

Exploring the effectiveness of reward-based learning strategies for second-language speech sounds.

Psychonomic bulletin & review
Adults struggle to learn non-native speech categories in many experimental settings (Goto, Neuropsychologia, 9(3), 317-323, 1971), but learn efficiently in a video game paradigm where non-native speech sounds have functional significance (Lim & Holt, ...

Deep learning restores speech intelligibility in multi-talker interference for cochlear implant users.

Scientific reports
Cochlear implants (CIs) do not offer the same level of effectiveness in noisy environments as in quiet settings. Current single-microphone noise reduction algorithms in hearing aids and CIs only remove predictable, stationary noise, and are ineffecti...

Deep learning-based auditory attention decoding in listeners with hearing impairment.

Journal of neural engineering
This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-n...

Machine learning decoding of single neurons in the thalamus for speech brain-machine interfaces.

Journal of neural engineering
Our goal is to decode firing patterns of single neurons in the left ventralis intermediate nucleus (Vim) of the thalamus, related to speech production, perception, and imagery. For realistic speech brain-machine interfaces (BMIs), we aim to charact...

Use of a humanoid robot for auditory psychophysical testing.

PloS one
Tasks in psychophysical tests can at times be repetitive and cause individuals to lose engagement during the test. To facilitate engagement, we propose the use of a humanoid NAO robot, named Sam, as an alternative interface for conducting psychophysi...

Optical Microphone-Based Speech Reconstruction System With Deep Learning for Individuals With Hearing Loss.

IEEE transactions on bio-medical engineering
OBJECTIVE: Although many speech enhancement (SE) algorithms have been proposed to promote speech perception in hearing-impaired patients, the conventional SE approaches that perform well under quiet and/or stationary noises fail under nonstationary n...

Efficient Self-Attention Model for Speech Recognition-Based Assistive Robots Control.

Sensors (Basel, Switzerland)
Assistive robots are tools that people living with upper body disabilities can leverage to autonomously perform Activities of Daily Living (ADL). Unfortunately, conventional control methods still rely on low-dimensional, easy-to-implement interfaces ...

A Speech Recognition Method Based on Domain-Specific Datasets and Confidence Decision Networks.

Sensors (Basel, Switzerland)
This paper proposes a speech recognition method based on a domain-specific language speech network (DSL-Net) and a confidence decision network (CD-Net). The method involves automatically training a domain-specific dataset, using pre-trained model par...

Large-scale electrophysiology and deep learning reveal distorted neural signal dynamics after hearing loss.

eLife
Listeners with hearing loss often struggle to understand speech in noise, even with a hearing aid. To better understand the auditory processing deficits that underlie this problem, we made large-scale brain recordings from gerbils, a common animal mo...