AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Speech Perception

Showing 71 to 80 of 111 articles


Computational framework for fusing eye movements and spoken narratives for image annotation.

Journal of Vision
Despite many recent advances in the field of computer vision, there remains a disconnect between how computers process images and how humans understand them. To begin to bridge this gap, we propose a framework that integrates human-elicited gaze and ...

Brain-optimized extraction of complex sound features that drive continuous auditory perception.

PLoS Computational Biology
Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match ...

EEG Connectivity Analysis Using Denoising Autoencoders for the Detection of Dyslexia.

International Journal of Neural Systems
The Temporal Sampling Framework (TSF) theorizes that the characteristic phonological difficulties of dyslexia are caused by an atypical oscillatory sampling at one or more temporal rates. The LEEDUCA study conducted a series of Electroencephalography...

A talker-independent deep learning algorithm to increase intelligibility for hearing-impaired listeners in reverberant competing talker conditions.

The Journal of the Acoustical Society of America
Deep learning-based speech separation or noise reduction needs to generalize to voices not encountered during training and to operate under multiple corruptions. The current study provides such a demonstration for hearing-impaired (HI) listeners. Sen...

EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition.

Cognitive Science
Despite the lack of invariance problem (the many-to-many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side-ste...

Machine translation of cortical activity to text with an encoder-decoder framework.

Nature Neuroscience
A decade after speech was first decoded from human brain signals, accuracy and speed remain far below that of natural speech. Here we show how to decode the electrocorticogram with high accuracy and at natural-speech rates. Taking a cue from recent a...

Cortical Tracking of Surprisal during Continuous Speech Comprehension.

Journal of Cognitive Neuroscience
Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well...

Learning mechanisms in cue reweighting.

Cognition
Feedback has been shown to be effective in shifting attention across perceptual cues to a phonological contrast in speech perception (Francis, Baldwin & Nusbaum, 2000). However, the learning mechanisms behind this process remain obscure. We compare t...

Biometric identification of listener identity from frequency following responses to speech.

Journal of Neural Engineering
OBJECTIVE: We investigate the biometric specificity of the frequency following response (FFR), an EEG marker of early auditory processing that reflects phase-locked activity from neural ensembles in the auditory cortex and subcortex (Chandrasekaran a...

Machine Learning Approaches to Analyze Speech-Evoked Neurophysiological Responses.

Journal of Speech, Language, and Hearing Research: JSLHR
Purpose Speech-evoked neurophysiological responses are often collected to answer clinically and theoretically driven questions concerning speech and language processing. Here, we highlight the practical application of machine learning (ML)-based appr...