AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Speech Perception

Showing 81 to 90 of 111 articles

A deep learning algorithm to increase intelligibility for hearing-impaired listeners in the presence of a competing talker and reverberation.

The Journal of the Acoustical Society of America
For deep learning based speech segregation to have translational significance as a noise-reduction tool, it must perform in a wide variety of acoustic environments. In the current study, performance was examined when target speech was subjected to in...

Distinct Mechanisms of Imagery Differentially Influence Speech Perception.

eNeuro
Neural representation can be induced without external stimulation, such as in mental imagery. Our previous study found that imagined speaking and imagined hearing modulated perceptual neural responses in opposite directions, suggesting motor-to-senso...

A comprehensive study on bilingual and multilingual speech emotion recognition using a two-pass classification scheme.

PLOS ONE
Emotion recognition plays an important role in human-computer interaction. Previously and currently, many studies focused on speech emotion recognition using several classifiers and feature extraction methods. The majority of such studies, however, a...

Simulating speech processing with cochlear implants: How does channel interaction affect learning in neural networks?

PLOS ONE
We introduce a novel machine learning approach for investigating speech processing with cochlear implants (CIs), prostheses used to replace a damaged inner ear. Concretely, we use a simple perceptron and a deep convolutional network to classify speech...

Talker change detection: A comparison of human and machine performance.

The Journal of the Acoustical Society of America
The automatic analysis of conversational audio remains difficult, in part, due to the presence of multiple talkers speaking in turns, often with significant intonation variations and overlapping speech. The majority of prior work on psychoacoustic sp...

Vision-referential speech enhancement of an audio signal using mask information captured as visual data.

The Journal of the Acoustical Society of America
This paper describes a vision-referential speech enhancement of an audio signal using mask information captured as visual data. Smartphones and tablet devices have become popular in recent years. Most of them not only have a microphone but also a cam...

Enhanced Robot Speech Recognition Using Biomimetic Binaural Sound Source Localization.

IEEE transactions on neural networks and learning systems
Inspired by the behavior of humans talking in noisy environments, we propose an embodied embedded cognition approach to improve automatic speech recognition (ASR) systems for robots in challenging environments, such as with ego noise, using binaural ...

Improving the performance of hearing aids in noisy environments based on deep learning technology.

Annual International Conference of the IEEE Engineering in Medicine and Biology Society
The performance of a deep-learning-based speech enhancement (SE) technology for hearing aid users, called a deep denoising autoencoder (DDAE), was investigated. The hearing-aid speech perception index (HASPI) and the hearing-aid sound quality index ...

Evaluating automatic speech recognition systems as quantitative models of cross-lingual phonetic category perception.

The Journal of the Acoustical Society of America
Theories of cross-linguistic phonetic category perception posit that listeners perceive foreign sounds by mapping them onto their native phonetic categories, but, until now, no way to effectively implement this mapping has been proposed. In this pape...