This study aims to explore the feasibility and accuracy of deep learning-based pediatric behavioral audiometry. The research provides a dedicated pediatric posture detection dataset, which contains a large number of video clips from children's behavi...
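As a rough illustration of how video-based behavioral audiometry of this kind might be automated, the sketch below (PyTorch) classifies a short sequence of per-frame posture keypoints, e.g. from an off-the-shelf pose estimator, as "response" versus "no response" to a stimulus. The keypoint count, clip length, and layer sizes are illustrative assumptions and do not reflect the study's actual dataset or architecture.

```python
# Minimal sketch (PyTorch): classify a short sequence of 2-D posture keypoints
# as "behavioral response" vs. "no response". Input shape, keypoint count, and
# layer sizes are illustrative assumptions, not the study's actual architecture.
import torch
import torch.nn as nn

class PostureResponseNet(nn.Module):
    def __init__(self, n_keypoints: int = 17, hidden: int = 64):
        super().__init__()
        # Each frame is flattened to the (x, y) coordinates of n_keypoints joints.
        self.gru = nn.GRU(input_size=2 * n_keypoints, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, 2)  # response / no response

    def forward(self, x):  # x: (batch, frames, 2 * n_keypoints)
        _, h = self.gru(x)        # h: (1, batch, hidden)
        return self.head(h[-1])   # logits: (batch, 2)

# Example: a batch of 8 clips, 30 frames each, 17 keypoints per frame.
logits = PostureResponseNet()(torch.randn(8, 30, 34))
print(logits.shape)  # torch.Size([8, 2])
```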
Understanding speech in noisy environments is a primary challenge for individuals with hearing loss, affecting daily communication and quality of life. Traditional speech-in-noise tests are essential for screening and diagnosing hearing loss but are ...
Tasks in psychophysical tests can be repetitive and cause individuals to lose engagement during testing. To facilitate engagement, we propose the use of a humanoid NAO robot, named Sam, as an alternative interface for conducting psychophysi...
OBJECTIVES: This study aims to develop deep learning (DL) models for the quantitative prediction of hearing thresholds based on stimulus-frequency otoacoustic emissions (SFOAEs) evoked by swept tones.
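A minimal sketch of the kind of quantitative mapping described here: a small feed-forward network that regresses hearing thresholds at several audiometric frequencies from a swept-tone SFOAE feature vector. The input dimensionality, number of output frequencies, and layer widths are assumptions, not the study's configuration.

```python
# Minimal sketch (PyTorch): regress hearing thresholds (dB HL) at several
# audiometric frequencies from an SFOAE feature vector. The 256-point input,
# 8 output frequencies, and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class SFOAEThresholdNet(nn.Module):
    def __init__(self, n_features: int = 256, n_freqs: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_freqs),  # one predicted threshold per frequency
        )

    def forward(self, x):  # x: (batch, n_features)
        return self.net(x)

model = SFOAEThresholdNet()
pred = model(torch.randn(4, 256))             # 4 ears' worth of SFOAE features
loss = nn.MSELoss()(pred, torch.randn(4, 8))  # train against measured thresholds
```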
Decision making on the treatment of vestibular schwannoma (VS) is based mainly on symptoms, tumor size, the patient's preference, and the experience of the medical team. Here we provide objective tools to support the decision process by answering two que...
Statistical knowledge about many patients could be exploited using machine learning to provide supporting information to otolaryngologists and other hearing health care professionals, but needs to be made accessible. The Common Audiological Function...
OBJECTIVE: The objective of this study was to use machine learning in the form of a deep neural network to objectively classify paired auditory brainstem response waveforms into either: 'clear response', 'inconclusive' or 'response absent'.
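To make the setup concrete, here is a minimal sketch of a three-class classifier for paired auditory brainstem response waveforms: the two recordings are stacked as input channels to a small 1-D CNN whose outputs correspond to 'clear response', 'inconclusive', and 'response absent'. Waveform length and layer sizes are illustrative assumptions, not the study's architecture.

```python
# Minimal sketch (PyTorch): classify a pair of ABR waveforms (two repeat
# averages stacked as input channels) into three classes: clear response,
# inconclusive, response absent. Waveform length and layer sizes are assumptions.
import torch
import torch.nn as nn

class ABRClassifier(nn.Module):
    def __init__(self, n_samples: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (n_samples // 4), 3)

    def forward(self, x):  # x: (batch, 2, n_samples)
        z = self.features(x)
        return self.classifier(z.flatten(1))  # logits over the 3 categories

logits = ABRClassifier()(torch.randn(5, 2, 256))
print(logits.shape)  # torch.Size([5, 3])
```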
OBJECTIVE: As a step towards objectifying audiological rehabilitation and providing comparability between different test batteries and clinics, the Common Audiological Functional Parameters (CAFPAs) were introduced as a common and abstract representa...
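As a concrete picture of how such an abstract representation can feed decision support, the sketch below treats a patient's CAFPAs as a fixed-length vector of values in [0, 1] and maps it to a coarse diagnostic category with an off-the-shelf classifier. The number of parameters, the category labels, and the synthetic data are illustrative assumptions only.

```python
# Minimal sketch: treat the CAFPAs as a fixed-length vector of values in [0, 1]
# that abstracts away from the specific tests used, then map that vector to a
# supporting category with an off-the-shelf classifier. The parameter count (10),
# the category labels, and the random training data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 10))   # 200 patients x 10 CAFPA values (synthetic)
y = rng.integers(0, 3, size=200)            # e.g. normal / conductive / sensorineural (assumed labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_patient = rng.uniform(0.0, 1.0, size=(1, 10))
print(clf.predict(new_patient))             # supporting indication for the clinician
```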
Annual International Conference of the IEEE Engineering in Medicine and Biology Society
Jul 1, 2018
The performance of a deep-learning-based speech enhancement (SE) technology for hearing aid users, a deep denoising autoencoder (DDAE), was investigated. The hearing-aid speech perception index (HASPI) and the hearing-aid sound quality index ...
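For orientation, a minimal sketch of a deep denoising autoencoder in the usual SE formulation: a feed-forward network trained to map noisy log-magnitude spectral frames to the corresponding clean frames. The frame size and layer widths are assumptions; the study's exact configuration is not given in the excerpt above.

```python
# Minimal sketch (PyTorch) of a deep denoising autoencoder (DDAE) on
# log-magnitude spectral frames: the network is trained to map a noisy frame
# to the corresponding clean frame. Frame size and layer widths are assumptions.
import torch
import torch.nn as nn

n_bins = 257          # e.g. 512-point STFT -> 257 magnitude bins (assumed)

ddae = nn.Sequential(
    nn.Linear(n_bins, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_bins),       # estimate of the clean log-magnitude frame
)

noisy = torch.randn(32, n_bins)   # batch of noisy log-magnitude frames (synthetic)
clean = torch.randn(32, n_bins)   # paired clean targets (training data)
loss = nn.MSELoss()(ddae(noisy), clean)
loss.backward()                   # standard supervised training step
```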
Zhonghua er bi yan hou tou jing wai ke za zhi = Chinese journal of otorhinolaryngology head and neck surgery
Aug 7, 2017
The present study was carried out to explore the tone production ability of Mandarin-speaking children with cochlear implants (CI) by using an artificial neural network model and to examine the potential contributing factors underlying their to...
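A minimal sketch of how an artificial neural network can be used to assess tone production: a small classifier assigns a produced syllable's F0 contour to one of the four Mandarin tones, and agreement with the intended tone serves as a production-accuracy measure. The contour length and layer sizes are illustrative assumptions, not the study's model.

```python
# Minimal sketch (PyTorch): score tone production by classifying a produced
# syllable's F0 contour into one of the four Mandarin tones with a small MLP.
# The 20-point contour and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

tone_net = nn.Sequential(
    nn.Linear(20, 32), nn.ReLU(),   # 20 normalized F0 samples per syllable
    nn.Linear(32, 4),               # logits for Tone 1-4
)

f0_contour = torch.randn(1, 20)     # one produced syllable (normalized F0, synthetic)
predicted_tone = tone_net(f0_contour).argmax(dim=1) + 1
print(f"recognized as Tone {predicted_tone.item()}")
```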