AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Speech Perception

Showing 61 to 70 of 111 articles


EEG Connectivity Analysis Using Denoising Autoencoders for the Detection of Dyslexia.

International Journal of Neural Systems
The Temporal Sampling Framework (TSF) theorizes that the characteristic phonological difficulties of dyslexia are caused by an atypical oscillatory sampling at one or more temporal rates. The LEEDUCA study conducted a series of Electroencephalography...
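
As a rough illustration of the denoising-autoencoder idea behind this kind of EEG analysis, the sketch below corrupts flattened connectivity vectors with Gaussian noise and trains a small network to reconstruct the clean input. The ConnDAE name, layer sizes, and noise level are placeholder assumptions, not the architecture used in the cited study.

```python
# Minimal denoising-autoencoder sketch for flattened EEG connectivity vectors.
# Illustrative only; sizes and names are assumptions, not the study's model.
import torch
import torch.nn as nn

class ConnDAE(nn.Module):
    def __init__(self, n_features: int, n_latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_latent), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 128), nn.ReLU(),
            nn.Linear(128, n_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def train_step(model, x_clean, optimizer, noise_std=0.1):
    """One denoising step: corrupt the input, reconstruct the clean target."""
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)
    loss = nn.functional.mse_loss(model(x_noisy), x_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: a 64-channel connectivity matrix flattened to 64*64 features per epoch.
model = ConnDAE(n_features=64 * 64)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(16, 64 * 64)           # placeholder connectivity vectors
print(train_step(model, batch, opt))
```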

Visual Speech Recognition: Improving Speech Perception in Noise through Artificial Intelligence.

Otolaryngology-Head and Neck Surgery: Official Journal of the American Academy of Otolaryngology-Head and Neck Surgery
OBJECTIVES: To compare speech perception (SP) in noise for normal-hearing (NH) individuals and individuals with hearing loss (IWHL) and to demonstrate improvements in SP with use of a visual speech recognition program (VSRP).

A talker-independent deep learning algorithm to increase intelligibility for hearing-impaired listeners in reverberant competing talker conditions.

The Journal of the Acoustical Society of America
Deep learning-based speech separation or noise reduction needs to generalize to voices not encountered during training and to operate under multiple corruptions. The current study provides such a demonstration for hearing-impaired (HI) listeners. Sen...
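
For context, the mask-based time-frequency separation this line of work builds on can be sketched as below: a recurrent network predicts a ratio mask over the noisy STFT, which is applied before resynthesis. The network size, STFT settings, and MaskNet name are illustrative assumptions, not the talker-independent algorithm evaluated in the paper.

```python
# Minimal time-frequency masking sketch for single-channel noise reduction.
# An illustration of mask-based separation in general, not the cited algorithm.
import torch
import torch.nn as nn

N_FFT, HOP = 512, 128
N_BINS = N_FFT // 2 + 1

class MaskNet(nn.Module):
    """Predicts a ratio mask per time-frequency bin from the noisy magnitude."""
    def __init__(self, n_bins=N_BINS, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_bins, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_bins)

    def forward(self, mag):                  # mag: (batch, frames, bins)
        h, _ = self.rnn(mag)
        return torch.sigmoid(self.out(h))    # mask values in [0, 1]

def enhance(noisy_wave: torch.Tensor, model: MaskNet) -> torch.Tensor:
    """Apply the estimated mask to the noisy STFT and resynthesise a waveform."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(noisy_wave, N_FFT, HOP, window=window, return_complex=True)
    mag = spec.abs().transpose(1, 2)                  # (batch, frames, bins)
    mask = model(mag).transpose(1, 2)                 # back to (batch, bins, frames)
    return torch.istft(spec * mask, N_FFT, HOP, window=window,
                       length=noisy_wave.shape[-1])

model = MaskNet()
noisy = torch.randn(1, 16000)              # 1 s of placeholder audio at 16 kHz
print(enhance(noisy, model).shape)
```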

Computational framework for fusing eye movements and spoken narratives for image annotation.

Journal of Vision
Despite many recent advances in the field of computer vision, there remains a disconnect between how computers process images and how humans understand them. To begin to bridge this gap, we propose a framework that integrates human-elicited gaze and ...

Brain-optimized extraction of complex sound features that drive continuous auditory perception.

PLoS Computational Biology
Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match ...

Linear versus deep learning methods for noisy speech separation for EEG-informed attention decoding.

Journal of Neural Engineering
OBJECTIVE: A hearing aid's noise reduction algorithm cannot infer which speaker the user intends to listen to. Auditory attention decoding (AAD) algorithms allow this information to be inferred from neural signals, which leads to the concept of neuro-st...
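
Linear AAD methods commonly rely on stimulus reconstruction; a minimal sketch is shown below, in which time-lagged ridge regression maps EEG to a speech envelope and the attended speaker is chosen by correlation. The lag range, regularisation, and toy data are assumptions for illustration, not the exact pipeline compared in the paper.

```python
# Minimal linear stimulus-reconstruction sketch for auditory attention decoding.
import numpy as np

def lagged_design(eeg: np.ndarray, n_lags: int) -> np.ndarray:
    """Stack time-lagged copies of each EEG channel: (samples, channels*n_lags)."""
    cols = [np.roll(eeg, lag, axis=0) for lag in range(n_lags)]
    X = np.concatenate(cols, axis=1)
    X[:n_lags] = 0.0                       # zero out wrapped-around samples
    return X

def fit_decoder(eeg, attended_env, n_lags=32, ridge=1e2):
    """Closed-form ridge regression from lagged EEG to the attended envelope."""
    X = lagged_design(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ attended_env)

def decode(eeg, w, env_a, env_b, n_lags=32):
    """Return 'A' or 'B' depending on which envelope the reconstruction matches."""
    recon = lagged_design(eeg, n_lags) @ w
    corr_a = np.corrcoef(recon, env_a)[0, 1]
    corr_b = np.corrcoef(recon, env_b)[0, 1]
    return "A" if corr_a > corr_b else "B"

# Toy example with random data: 64-channel EEG at 64 Hz, one 60 s trial.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((64 * 60, 64))
env_a = rng.standard_normal(64 * 60)
env_b = rng.standard_normal(64 * 60)
w = fit_decoder(eeg, env_a)
print(decode(eeg, w, env_a, env_b))
```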

A two-stage deep learning algorithm for talker-independent speaker separation in reverberant conditions.

The Journal of the Acoustical Society of America
Speaker separation is a special case of speech separation, in which the mixture signal comprises two or more speakers. Many talker-independent speaker separation methods have been introduced in recent years to address this problem in anechoic conditi...

Modelling the effects of transcranial alternating current stimulation on the neural encoding of speech in noise.

NeuroImage
Transcranial alternating current stimulation (tACS) can non-invasively modulate neuronal activity in the cerebral cortex, in particular at the frequency of the applied stimulation. Such modulation can matter for speech processing, since the latter in...

Deep Learning for Voice Gender Identification: Proof-of-concept for Gender-Affirming Voice Care.

The Laryngoscope
OBJECTIVES/HYPOTHESIS: The need for gender-affirming voice care has been increasing in the transgender population in the last decade. Currently, objective treatment outcome measurements are lacking to assess the success of these interventions. This s...
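
As a generic illustration of acoustic-feature-based voice classification (not the deep learning system described in this study), the sketch below summarises each recording with MFCC statistics and fits a simple classifier; the feature set, model choice, and placeholder data are assumptions.

```python
# Generic voice-classification sketch from per-recording MFCC summary features.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def acoustic_features(wave: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Summarise a recording as mean and std of 13 MFCCs (26-dim vector)."""
    mfcc = librosa.feature.mfcc(y=wave, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder dataset: random waveforms standing in for labelled recordings.
rng = np.random.default_rng(0)
waves = [rng.standard_normal(16000) for _ in range(40)]
labels = np.array([0, 1] * 20)                     # two voice categories
X = np.stack([acoustic_features(w) for w in waves])

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())  # near chance on random data
```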

μ-law SGAN for generating spectra with more details in speech enhancement.

Neural Networks: The Official Journal of the International Neural Network Society
The goal of monaural speech enhancement is to separate clean speech from noisy speech. Recently, many studies have employed generative adversarial networks (GAN) to deal with monaural speech enhancement tasks. When using generative adversarial networ...
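
The μ-law companding curve referenced in the title is the standard one sketched below; applied to normalised spectral magnitudes, it compresses their dynamic range so that low-energy detail is not swamped by high-energy regions. How the cited SGAN integrates this compression is not reproduced here, and the normalisation scheme is an assumption.

```python
# Standard mu-law companding applied to normalised spectral magnitudes.
import numpy as np

def mu_law_compress(x: np.ndarray, mu: float = 255.0) -> np.ndarray:
    """Mu-law companding, assuming x is scaled to [-1, 1]."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y: np.ndarray, mu: float = 255.0) -> np.ndarray:
    """Inverse mapping back to the linear scale."""
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

# Example: compress a spectrogram magnitude normalised to [0, 1].
mag = np.abs(np.random.default_rng(0).standard_normal((257, 100)))
mag /= mag.max()
compressed = mu_law_compress(mag)
restored = mu_law_expand(compressed)
print(np.allclose(mag, restored))          # True up to floating-point error
```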