AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Hearing

Showing 11 to 20 of 34 articles


Robotics, automation, active electrode arrays, and new devices for cochlear implantation: A contemporary review.

Hearing research
In the last two decades, cochlear implant surgery has evolved into a minimally invasive, hearing preservation surgical technique. The devices used during surgery have benefited from technological advances that have allowed modification and possible i...

Sound Event Detection by Pseudo-Labeling in Weakly Labeled Dataset.

Sensors (Basel, Switzerland)
Weakly labeled sound event detection (WSED) is an important task, as it can ease data collection before constructing a strongly labeled sound event dataset. Recent high-performing deep learning-based WSED approaches have exploited a segm...
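The pseudo-labeling idea behind this line of work — deriving frame-level (strong) labels from clip-level (weak) tags by thresholding a model's frame posteriors — can be sketched roughly as follows. The threshold value, array shapes, and the gating rule are illustrative assumptions, not details from the paper:

```python
import numpy as np

def pseudo_label(frame_probs, weak_labels, threshold=0.5):
    """Derive frame-level pseudo labels from clip-level weak labels.

    frame_probs: (n_frames, n_classes) model posteriors for one clip.
    weak_labels: (n_classes,) binary clip-level tags.
    A frame is pseudo-labeled positive for a class only if the clip
    contains that class AND the frame posterior exceeds the threshold.
    """
    strong = (frame_probs >= threshold).astype(int)
    return strong * weak_labels[np.newaxis, :]

# Toy example: 4 frames, 2 classes; class 1 is absent at clip level,
# so its confident frames are suppressed by the weak-label gate.
probs = np.array([[0.9, 0.8],
                  [0.2, 0.7],
                  [0.6, 0.1],
                  [0.4, 0.9]])
weak = np.array([1, 0])
labels = pseudo_label(probs, weak)
```

The resulting frame-level labels can then bootstrap training of a strongly supervised detector.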

Prediction of Hearing Prognosis of Large Vestibular Aqueduct Syndrome Based on the PyTorch Deep Learning Model.

Journal of healthcare engineering
To compare magnetic resonance imaging (MRI) findings between patients with large vestibular aqueduct syndrome (LVAS) in the stable hearing loss (HL) group and those in the fluctuating HL group, this paper provides a reference for clinicians' early intervent...
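A prognosis classifier of the kind this PyTorch pipeline would train amounts to a small feed-forward network mapping MRI-derived features to a stable-vs-fluctuating label. The sketch below uses NumPy only so it is self-contained; the layer sizes, feature count, and random weights are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: ReLU hidden layer, sigmoid output.

    Mirrors the forward pass a PyTorch model would learn to map
    MRI-derived features to a hearing-prognosis probability.
    """
    h = np.maximum(0.0, x @ W1 + b1)         # ReLU hidden activations
    logit = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid probability

rng = np.random.default_rng(0)
x = rng.normal(size=(8,))                    # 8 illustrative MRI features
W1 = rng.normal(size=(8, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
p = mlp_forward(x, W1, b1, W2, b2)           # probability in (0, 1)
```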

A model of speech recognition for hearing-impaired listeners based on deep learning.

The Journal of the Acoustical Society of America
Automatic speech recognition (ASR) has made major progress based on deep machine learning, which motivated the use of deep neural networks (DNNs) as perception models and specifically to predict human speech recognition (HSR). This study investigates...

Towards accurate facial nerve segmentation with decoupling optimization.

Physics in medicine and biology
Robotic cochlear implantation is an effective way to restore the hearing of hearing-impaired patients, and facial nerve recognition is the key to the operation. However, accurate facial nerve segmentation is a challenging task, mainly for two key iss...

A Novel Speech Intelligibility Enhancement Model based on Canonical Correlation and Deep Learning.

Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Current deep learning (DL) based approaches to speech intelligibility enhancement in noisy environments are often trained to minimise the feature distance between noise-free speech and enhanced speech signals. Despite improving the speech quality, su...

Polyphonic Sound Event Detection Using Temporal-Frequency Attention and Feature Space Attention.

Sensors (Basel, Switzerland)
The complexity of polyphonic sounds imposes numerous challenges on their classification. Especially in real life, polyphonic sound events exhibit discontinuities and unstable time-frequency variations. Traditional single acoustic features cannot character...

Artificial Neural Network-Assisted Classification of Hearing Prognosis of Sudden Sensorineural Hearing Loss With Vertigo.

IEEE journal of translational engineering in health and medicine
This study aimed to determine the impact on hearing prognosis of the coherent frequency with high magnitude-squared wavelet coherence (MSWC) in video head impulse test (vHIT) among patients with sudden sensorineural hearing loss with vertigo (SSNHLV)...
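Magnitude-squared coherence, the quantity underlying the MSWC feature used in this study, measures the per-frequency linear correlation between two signals and is bounded in [0, 1]. A minimal Welch-style estimate (the segment length and test signals are illustrative assumptions; the paper uses a wavelet variant):

```python
import numpy as np

def ms_coherence(x, y, seg_len=64):
    """Welch-averaged magnitude-squared coherence:
    C_xy(f) = |P_xy|^2 / (P_xx * P_yy), in [0, 1].
    Averaging over segments is essential; with a single segment
    the estimate is identically 1.
    """
    n_seg = len(x) // seg_len
    Pxx = Pyy = Pxy = 0.0
    for k in range(n_seg):
        xs = np.fft.rfft(x[k * seg_len:(k + 1) * seg_len])
        ys = np.fft.rfft(y[k * seg_len:(k + 1) * seg_len])
        Pxx = Pxx + np.abs(xs) ** 2
        Pyy = Pyy + np.abs(ys) ** 2
        Pxy = Pxy + xs * np.conj(ys)
    return np.abs(Pxy) ** 2 / (Pxx * Pyy + 1e-12)

# Two noisy, phase-shifted copies of the same sinusoid (period 16):
# coherence is high at the shared frequency, low elsewhere.
rng = np.random.default_rng(1)
t = np.arange(1024)
x = np.sin(2 * np.pi * t / 16) + 0.1 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * t / 16 + 0.3) + 0.1 * rng.normal(size=t.size)
coh = ms_coherence(x, y)   # bin 4 (= seg_len / 16) carries the sinusoid
```

High coherence at a frequency means the two channels share a stable linear relationship there, which is what makes the coherent-frequency magnitude a candidate prognostic feature.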

Deep Learning Models for Predicting Hearing Thresholds Based on Swept-Tone Stimulus-Frequency Otoacoustic Emissions.

Ear and hearing
OBJECTIVES: This study aims to develop deep learning (DL) models for the quantitative prediction of hearing thresholds based on stimulus-frequency otoacoustic emissions (SFOAEs) evoked by swept tones.