Diagnostic tests for hearing impairment not only determine the presence (or absence) of hearing loss, but also evaluate its degree and type, providing physicians with essential data for future treatment and rehabilitation. Therefore, accurately ...
In the last two decades, cochlear implant surgery has evolved into a minimally invasive, hearing preservation surgical technique. The devices used during surgery have benefited from technological advances that have allowed modification and possible i...
Weakly labeled sound event detection (WSED) is an important task, as it can facilitate data collection efforts before constructing a strongly labeled sound event dataset. Recent high-performing deep learning-based WSED systems have exploited a segm...
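As a hedged illustration of the WSED setting rather than a reproduction of the system above: only clip-level ("weak") tags are available, so a common recipe lets a small CRNN predict frame-wise event probabilities and pools them, e.g. with linear-softmax pooling, into a clip-level prediction that the weak labels can supervise; the frame-wise output then serves as the localisation estimate. The layer sizes and pooling choice below are illustrative assumptions.

# Minimal WSED sketch: frame-wise probabilities pooled to clip level
# (linear-softmax pooling), trained with clip-level labels only.
import torch
import torch.nn as nn

class TinyWSED(nn.Module):
    def __init__(self, n_mels=64, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),   # halve the mel axis, keep time resolution
        )
        self.gru = nn.GRU(16 * (n_mels // 2), 64, batch_first=True, bidirectional=True)
        self.frame_fc = nn.Linear(128, n_classes)

    def forward(self, mel):                     # mel: (batch, time, n_mels)
        x = self.conv(mel.unsqueeze(1))         # (batch, 16, time, n_mels // 2)
        b, c, t, m = x.shape
        x, _ = self.gru(x.permute(0, 2, 1, 3).reshape(b, t, c * m))
        frame_probs = torch.sigmoid(self.frame_fc(x))               # (batch, time, n_classes)
        clip_probs = (frame_probs ** 2).sum(1) / (frame_probs.sum(1) + 1e-7)
        return clip_probs, frame_probs

model = TinyWSED()
clip_probs, frame_probs = model(torch.randn(4, 500, 64))            # 4 clips, 500 frames each
loss = nn.functional.binary_cross_entropy(clip_probs, torch.randint(0, 2, (4, 10)).float())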
By comparing magnetic resonance imaging (MRI) findings of patients with large vestibular aqueduct syndrome (LVAS) between the stable hearing loss (HL) group and the fluctuating HL group, this paper provides a reference for clinicians' early intervent...
The Journal of the Acoustical Society of America
35364918
Automatic speech recognition (ASR) has made major progress based on deep machine learning, which has motivated the use of deep neural networks (DNNs) as perception models, specifically to predict human speech recognition (HSR). This study investigates...
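As a hedged sketch of one common evaluation step for such DNN-based HSR prediction (not the cited study's procedure): ASR recognition scores measured at several signal-to-noise ratios can be fitted with a psychometric function, and the speech reception threshold (SRT) is read off as the SNR at 50 % correct. The logistic model and the example scores below are illustrative assumptions.

# Fit a logistic psychometric curve to ASR scores and estimate the SRT.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt, slope):
    # fraction of words recognised as a function of SNR (dB)
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt)))

snrs = np.array([-9.0, -6.0, -3.0, 0.0, 3.0])          # test SNRs in dB (dummy values)
asr_scores = np.array([0.08, 0.22, 0.55, 0.81, 0.95])  # model recognition rates (dummy values)

(srt, slope), _ = curve_fit(logistic, snrs, asr_scores, p0=[-3.0, 1.0])
print(f"model-predicted SRT: {srt:.1f} dB SNR")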
Robotic cochlear implantation is an effective way to restore the hearing of hearing-impaired patients, and facial nerve recognition is key to the operation. However, accurate facial nerve segmentation is a challenging task, mainly due to two key iss...
Annual International Conference of the IEEE Engineering in Medicine and Biology Society
36085897
Current deep learning (DL)-based approaches to speech intelligibility enhancement in noisy environments are often trained to minimise the feature distance between noise-free and enhanced speech signals. Despite improving the speech quality, su...
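A minimal sketch of the kind of feature-distance objective described above (the STFT settings and the placeholder "enhancer" are illustrative assumptions, not the cited work's configuration): the enhancement model is trained to minimise the mean-squared error between time-frequency features of the enhanced signal and of the clean reference.

# Feature-distance loss on STFT magnitudes between enhanced and clean speech.
import torch

def stft_mag(x, n_fft=512, hop=128):
    window = torch.hann_window(n_fft, device=x.device)
    spec = torch.stft(x, n_fft, hop_length=hop, window=window, return_complex=True)
    return spec.abs()

def feature_distance_loss(enhanced, clean):
    return torch.mean((stft_mag(enhanced) - stft_mag(clean)) ** 2)

clean = torch.randn(2, 16000)            # 1 s of "clean" speech at 16 kHz (dummy data)
noisy = clean + 0.3 * torch.randn_like(clean)
enhanced = noisy                          # placeholder for an enhancement model's output
print(feature_distance_loss(enhanced, clean))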
The complexity of polyphonic sounds imposes numerous challenges on their classification. In particular, real-life polyphonic sound events are discontinuous and show unstable time-frequency variations. Traditional single acoustic features cannot character...
IEEE Journal of Translational Engineering in Health and Medicine
36816096
This study aimed to determine the impact of the coherent frequency with high magnitude-squared wavelet coherence (MSWC) in the video head impulse test (vHIT) on hearing prognosis among patients with sudden sensorineural hearing loss with vertigo (SSNHLV)...
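For orientation, a sketch of the quantity MSWC generalises: the classical magnitude-squared coherence between two signals x (e.g., head velocity) and y (e.g., eye velocity). The wavelet variant replaces the Fourier spectra below with smoothed wavelet (cross-)spectra over time and scale; the exact smoothing follows the cited method, not this sketch.

C_{xy}(f) = \frac{|S_{xy}(f)|^{2}}{S_{xx}(f)\,S_{yy}(f)}, \qquad 0 \le C_{xy}(f) \le 1,

where S_{xy} is the cross-spectral density of x and y, and S_{xx}, S_{yy} are their auto-spectral densities; values near 1 indicate a strong linear relationship at frequency f.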
OBJECTIVES: This study aims to develop deep learning (DL) models for the quantitative prediction of hearing thresholds based on stimulus-frequency otoacoustic emissions (SFOAEs) evoked by swept tones.
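A minimal sketch of the kind of DL regressor such a study could use (the architecture, input resolution, and output frequencies here are assumptions, not the authors' model): a 1-D CNN maps an SFOAE magnitude curve sampled across the swept frequency range to hearing thresholds (in dB HL) at a few audiometric frequencies.

# 1-D CNN regression from an SFOAE magnitude curve to hearing thresholds.
import torch
import torch.nn as nn

class SFOAERegressor(nn.Module):
    def __init__(self, n_points=256, n_freqs=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Linear(32 * (n_points // 16), n_freqs)

    def forward(self, sfoae):                   # sfoae: (batch, n_points)
        x = self.features(sfoae.unsqueeze(1))   # (batch, 32, n_points // 16)
        return self.head(x.flatten(1))          # predicted thresholds in dB HL

model = SFOAERegressor()
pred = model(torch.randn(8, 256))               # 8 dummy SFOAE magnitude curves
loss = nn.functional.mse_loss(pred, torch.zeros(8, 6))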