Electrophysiological research has been widely used to study brain responses to acoustic stimuli. The frequency-following response (FFR), a non-invasive reflection of how the brain encodes acoustic stimuli, is a particularly propitious electrophys...
Adults struggle to learn non-native speech categories in many experimental settings (Goto, Neuropsychologia, 9(3), 317-323, 1971), but learn efficiently in a video game paradigm where non-native speech sounds have functional significance (Lim & Holt, ...
Cochlear implants (CIs) do not offer the same level of effectiveness in noisy environments as in quiet settings. Current single-microphone noise reduction algorithms in hearing aids and CIs only remove predictable, stationary noise, and are ineffecti...
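As a concrete illustration of the single-microphone approach described above, here is a minimal sketch of spectral subtraction, a representative stationary-noise-reduction rule: it estimates one fixed noise spectrum from an assumed noise-only lead-in and subtracts it from every frame. The frame length, hop, noise window, and spectral floor are illustrative assumptions, not parameters of any deployed hearing-aid or CI algorithm.

```python
# Minimal sketch of single-channel spectral subtraction (illustrative only).
import numpy as np

def spectral_subtraction(noisy, sr, frame_len=512, hop=256, noise_sec=0.5, floor=0.05):
    """noisy: float waveform; returns an enhanced waveform of the same length."""
    window = np.hanning(frame_len)

    # Estimate a single noise magnitude spectrum from an assumed noise-only lead-in.
    n_noise_frames = max(1, int(noise_sec * sr - frame_len) // hop)
    noise_mag = np.zeros(frame_len // 2 + 1)
    for i in range(n_noise_frames):
        seg = noisy[i * hop:i * hop + frame_len] * window
        noise_mag += np.abs(np.fft.rfft(seg))
    noise_mag /= n_noise_frames

    # Subtract the (assumed stationary) noise spectrum frame by frame, overlap-add.
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len, hop):
        seg = noisy[start:start + frame_len] * window
        spec = np.fft.rfft(seg)
        mag = np.maximum(np.abs(spec) - noise_mag, floor * noise_mag)
        out[start:start + frame_len] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out
```

Because the noise estimate is fixed, the method only removes noise whose spectrum stays roughly constant over time, which is exactly the stationary-noise limitation noted above.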
This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-n...
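As a rough illustration of what an EEG-based classifier for such tasks can look like, here is a minimal sketch of a small convolutional network that labels short multi-channel EEG windows (e.g., noise vs. speech-in-noise). The channel count, window length, and architecture are assumptions made for illustration and are not the model used in the study.

```python
# Minimal sketch of a CNN classifier over short EEG windows (illustrative only).
import torch
import torch.nn as nn

class EEGClassifier(nn.Module):
    def __init__(self, n_channels=64, n_samples=128, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),  # temporal filtering
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AvgPool1d(4),
            nn.Conv1d(32, 16, kernel_size=5, padding=2),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):          # x: (batch, channels, samples)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)  # logits, e.g. noise vs. speech-in-noise

# Example: a batch of 1-second windows of 64-channel EEG sampled at 128 Hz.
model = EEGClassifier()
logits = model(torch.randn(8, 64, 128))
print(logits.shape)  # torch.Size([8, 2])
```

The same structure extends to the other classification tasks by changing `n_classes` and retraining on the corresponding labels.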
Our goal is to decode firing patterns of single neurons in the left ventralis intermediate nucleus (Vim) of the thalamus, related to speech production, perception, and imagery. For realistic speech brain-machine interfaces (BMIs), we aim to charact...
Tasks in psychophysical tests can be repetitive, causing individuals to lose engagement during the test. To facilitate engagement, we propose the use of a humanoid NAO robot, named Sam, as an alternative interface for conducting psychophysi...
IEEE Transactions on Biomedical Engineering
Nov 21, 2023
OBJECTIVE: Although many speech enhancement (SE) algorithms have been proposed to improve speech perception in hearing-impaired patients, conventional SE approaches that perform well in quiet and/or under stationary noise fail under nonstationary n...
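A common way to move beyond the stationary-noise assumption is to learn a time-frequency mask from data; the sketch below shows one such mask estimator operating on magnitude spectrograms. The recurrent architecture, dimensions, and training loss are illustrative assumptions and not the SE method proposed in this paper.

```python
# Minimal sketch of learned, mask-based speech enhancement (illustrative only).
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    """Predicts a per-bin gain in [0, 1] from the noisy magnitude spectrogram."""
    def __init__(self, n_bins=257, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_bins, hidden, num_layers=2, batch_first=True)
        self.out = nn.Sequential(nn.Linear(hidden, n_bins), nn.Sigmoid())

    def forward(self, noisy_mag):            # (batch, frames, bins)
        h, _ = self.rnn(noisy_mag)
        return self.out(h)                   # mask in [0, 1]

# One training step against the clean magnitude (both assumed precomputed STFTs).
model = MaskEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy_mag = torch.rand(4, 100, 257)
clean_mag = torch.rand(4, 100, 257)
opt.zero_grad()
mask = model(noisy_mag)
loss = torch.nn.functional.mse_loss(mask * noisy_mag, clean_mag)
loss.backward()
opt.step()
```

Because the mask is predicted frame by frame from the noisy input, the gain can adapt to noise that changes over time, unlike a fixed spectral rule.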
Assistive robots are tools that people living with upper body disabilities can leverage to autonomously perform Activities of Daily Living (ADL). Unfortunately, conventional control methods still rely on low-dimensional, easy-to-implement interfaces ...
This paper proposes a speech recognition method based on a domain-specific language speech network (DSL-Net) and a confidence decision network (CD-Net). The method involves automatically training a domain-specific dataset, using pre-trained model par...
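The pairing of a domain-specific recognizer with a confidence decision can be sketched as a simple two-stage decode: trust the domain model when its output confidence is high, otherwise fall back to a general model. The models, the confidence score (mean per-frame maximum posterior), and the 0.8 threshold below are illustrative assumptions, not the actual DSL-Net/CD-Net components.

```python
# Minimal sketch of a confidence-gated two-stage decode (illustrative only).
import torch

def decode_with_confidence(domain_model, general_model, features, threshold=0.8):
    # Domain-specific pass: per-frame posteriors over the output vocabulary.
    domain_logits = domain_model(features)                  # (frames, vocab)
    domain_probs = torch.softmax(domain_logits, dim=-1)
    confidence = domain_probs.max(dim=-1).values.mean().item()

    if confidence >= threshold:
        return domain_probs.argmax(dim=-1), confidence      # trust the domain model
    # Otherwise defer to the general-purpose model.
    general_logits = general_model(features)
    return torch.softmax(general_logits, dim=-1).argmax(dim=-1), confidence

# Example with stand-in linear "models" over 80-dim features and a 100-token vocabulary.
domain = torch.nn.Linear(80, 100)
general = torch.nn.Linear(80, 100)
tokens, conf = decode_with_confidence(domain, general, torch.randn(50, 80))
```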
Listeners with hearing loss often struggle to understand speech in noise, even with a hearing aid. To better understand the auditory processing deficits that underlie this problem, we made large-scale brain recordings from gerbils, a common animal mo...