A decade after speech was first decoded from human brain signals, accuracy and speed remain far below those of natural speech. Here we show how to decode the electrocorticogram with high accuracy and at natural-speech rates. Taking a cue from recent a...
Neural representation can be induced without external stimulation, such as in mental imagery. Our previous study found that imagined speaking and imagined hearing modulated perceptual neural responses in opposite directions, suggesting motor-to-senso...
To assess whether cochlear implant (CI) programming by means of a software application using artificial intelligence (AI), FOX®, may improve CI performance. Two adult CI recipients who had mixed auditory results with their manual fitting were selec...
Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well...
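The predictability measure referenced in that abstract is usually operationalised as surprisal, -log2 P(word | context). The toy bigram estimate below is included only to make the quantity concrete and assumes nothing beyond the Python standard library; studies of this kind would use a far larger language model.

```python
# Toy illustration of word surprisal: -log2 P(word | context).
# The corpus and bigram model are placeholders, not from the study.
import math
from collections import Counter

corpus = "the dog chased the cat the dog ate the bone".split()
bigrams = Counter(zip(corpus, corpus[1:]))   # counts of (context, word) pairs
unigrams = Counter(corpus)                   # counts of context words

def surprisal(context_word, word):
    """-log2 P(word | context_word) under a maximum-likelihood bigram model."""
    p = bigrams[(context_word, word)] / unigrams[context_word]
    return -math.log2(p)

print(surprisal("the", "dog"))   # "dog" follows "the" 2 of 4 times -> 1.0 bit
print(surprisal("the", "bone"))  # "bone" follows "the" 1 of 4 times -> 2.0 bits
```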
Emotion recognition plays an important role in human-computer interaction. Many past and current studies have focused on speech emotion recognition using a range of classifiers and feature extraction methods. The majority of such studies, however, a...
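As an illustration of the classifier-plus-features pipelines such studies use, here is a minimal sketch assuming MFCC features (via librosa) and an SVM (via scikit-learn); the feature summary, kernel, and hyperparameters are placeholders rather than those of any particular study.

```python
# Illustrative speech emotion recognition pipeline: per-clip MFCC summaries
# fed to an RBF-kernel SVM. Requires librosa and scikit-learn.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def clip_features(path, sr=16000, n_mfcc=13):
    """Summarise one utterance as the per-coefficient mean and std of its MFCCs."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def evaluate(paths, labels):
    """5-fold cross-validated accuracy of the SVM on the clip features."""
    X = np.stack([clip_features(p) for p in paths])
    clf = SVC(kernel="rbf", C=1.0)
    return cross_val_score(clf, X, labels, cv=5).mean()

# `paths` and `labels` would come from a labelled emotional-speech corpus,
# e.g. labels in {"neutral", "happy", "angry", "sad"}.
```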
OBJECTIVE: We investigate the biometric specificity of the frequency following response (FFR), an EEG marker of early auditory processing that reflects phase-locked activity from neural ensembles in the auditory cortex and subcortex (Chandrasekaran a...
Feedback has been shown to be effective in shifting attention across perceptual cues to a phonological contrast in speech perception (Francis, Baldwin & Nusbaum, 2000). However, the learning mechanisms behind this process remain obscure. We compare t...
We introduce a novel machine learning approach for investigating speech processing with cochlear implants (CIs), prostheses used to replace a damaged inner ear. Concretely, we use a simple perceptron and a deep convolutional network to classify speech...
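A minimal sketch of the two model families this abstract names, a single-layer perceptron and a small convolutional network, written here in PyTorch; the input shape (channels x time frames), layer sizes, and class count are illustrative assumptions, not the authors' architecture.

```python
# Two baseline classifiers over a spectrogram- or electrodogram-like input.
# Shapes and sizes are illustrative, not taken from the paper. Requires PyTorch.
import torch
import torch.nn as nn

N_CHANNELS, N_FRAMES, N_CLASSES = 22, 100, 10  # e.g. CI channels x time frames

class Perceptron(nn.Module):
    """Single linear layer over the flattened input."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(N_CHANNELS * N_FRAMES, N_CLASSES)

    def forward(self, x):                 # x: (batch, channels, frames)
        return self.fc(x.flatten(1))

class ConvNet(nn.Module):
    """Small 2-D CNN treating the channel-by-time input as an image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * (N_CHANNELS // 4) * (N_FRAMES // 4), N_CLASSES)

    def forward(self, x):                 # x: (batch, channels, frames)
        h = self.features(x.unsqueeze(1))
        return self.fc(h.flatten(1))

x = torch.randn(8, N_CHANNELS, N_FRAMES)          # dummy batch
print(Perceptron()(x).shape, ConvNet()(x).shape)  # both: (8, N_CLASSES)
```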
IEEE Transactions on Neural Networks and Learning Systems, Jun 4, 2018
Inspired by the behavior of humans talking in noisy environments, we propose an embodied embedded cognition approach to improve automatic speech recognition (ASR) systems for robots in challenging environments, such as in the presence of ego noise, using binaural ...
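The abstract is cut off at "binaural ..."; assuming it refers to binaural processing for locating a speaker (a common front-end component in robot ASR, though not confirmed by the visible text), the sketch below estimates the interaural time difference with GCC-PHAT and converts it to an azimuth. The microphone spacing and sample rate are illustrative values.

```python
# Illustrative binaural localisation: interaural time difference via GCC-PHAT,
# then a far-field azimuth estimate. Not the method described in the paper.
import numpy as np

def gcc_phat_itd(left, right, fs, max_tau=None):
    """Return the time delay (s) of `left` relative to `right` via GCC-PHAT."""
    n = len(left) + len(right)
    L = np.fft.rfft(left, n=n)
    R = np.fft.rfft(right, n=n)
    cross = L * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    lag = np.argmax(np.abs(cc)) - max_shift
    return lag / fs

def azimuth_from_itd(itd, mic_distance=0.2, c=343.0):
    """Far-field approximation: azimuth (rad) from ITD and microphone spacing."""
    return np.arcsin(np.clip(itd * c / mic_distance, -1.0, 1.0))

# Example: a source reaching the left microphone 5 samples later at 16 kHz.
fs = 16000
sig = np.random.randn(fs)
itd = gcc_phat_itd(np.roll(sig, 5), sig, fs, max_tau=0.001)
print(itd, azimuth_from_itd(itd))
```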