Journal of the American Medical Informatics Association (JAMIA)
Jun 1, 2025
OBJECTIVE: Diagnosis codes documented in electronic health records (EHR) are often relied upon to clinically phenotype patients for biomedical research. However, these diagnoses can be incomplete and inaccurate, leading to false negatives when search...
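To make the code-based phenotyping idea concrete, the sketch below selects a hypothetical cohort from an EHR diagnosis table using ICD-10 codes; the table layout, column names, and code set are illustrative assumptions, not drawn from the study.

```python
# Hypothetical example: phenotyping a cohort by EHR diagnosis codes.
# Table layout, column names, and the ICD-10 code set are illustrative
# assumptions; they are not taken from the study above.
import pandas as pd

# Toy EHR diagnosis table: one row per documented diagnosis.
diagnoses = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4],
    "icd10_code": ["E11.9", "I10", "I10", "E11.65", "N18.3", "J45.909"],
})

# Code-based phenotype: type 2 diabetes, defined here as any E11.* code.
is_t2d = diagnoses["icd10_code"].str.startswith("E11")
t2d_cohort = set(diagnoses.loc[is_t2d, "patient_id"])

print(t2d_cohort)  # {1, 3}
# Patients whose condition was never coded (e.g., documented only in free
# text) are missed entirely -- the false negatives the abstract refers to.
```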
The Journal of the Acoustical Society of America
May 1, 2023
Recent years have brought considerable advances to our ability to increase intelligibility through deep-learning-based noise reduction, especially for hearing-impaired (HI) listeners. In this study, intelligibility improvements resulting from a curre...
The Journal of the Acoustical Society of America
Nov 1, 2021
The fundamental requirement for real-time operation of a speech-processing algorithm is causality: that it operate without utilizing future time frames. In the present study, the performance of a fully causal deep computational auditory scene analysis...
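As a hedged illustration of the causality constraint, the sketch below computes the gain for each frame using only the current and previous frames. The recursive noise estimate and spectral-subtraction-style gain are simplifying assumptions for illustration, not the deep CASA system evaluated in the study.

```python
# Causal, frame-by-frame suppression: the gain for frame t depends on
# frames <= t only, never on future frames.
import numpy as np

def causal_suppress(noisy_frames, alpha=0.95, floor=0.1):
    """noisy_frames: (n_frames, n_bins) magnitude spectra, in time order."""
    noise_est = np.copy(noisy_frames[0])       # initialize from first frame
    enhanced = np.empty_like(noisy_frames)
    for t, frame in enumerate(noisy_frames):   # strictly left-to-right
        noise_est = alpha * noise_est + (1 - alpha) * frame  # past-only update
        gain = np.clip(1.0 - noise_est / (frame + 1e-8), floor, 1.0)
        enhanced[t] = gain * frame             # output available at frame t
    return enhanced

# Toy usage with random magnitude frames (100 frames x 257 bins).
enhanced = causal_suppress(np.abs(np.random.randn(100, 257)))
# A non-causal variant would, e.g., average noise over all frames (including
# future ones) before computing gains -- unusable for real-time hearing aids.
```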
The Journal of the Acoustical Society of America
Jun 1, 2021
Real-time operation is critical for noise reduction in hearing technology. The essential requirement of real-time operation is causality: that an algorithm does not use future time-frame information and, instead, completes its operation by the end of ...
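To make the "completes its operation by the end of each time frame" requirement concrete, the sketch below checks per-frame processing time against the frame hop; the sample rate, hop size, and placeholder processing step are assumed values for illustration only.

```python
# Illustrative real-time budget check: with a 16 kHz sample rate and an
# 8 ms hop, each frame's processing must finish within 8 ms for real-time,
# causal operation. All numbers are assumptions for illustration.
import time
import numpy as np

SAMPLE_RATE = 16_000
HOP_SAMPLES = 128                         # 8 ms hop
BUDGET_S = HOP_SAMPLES / SAMPLE_RATE      # per-frame compute budget

def process_frame(frame: np.ndarray) -> np.ndarray:
    # Placeholder for the per-frame enhancement step (e.g., a causal DNN pass).
    return frame * 0.9

frame = np.zeros(HOP_SAMPLES, dtype=np.float32)
start = time.perf_counter()
_ = process_frame(frame)
elapsed = time.perf_counter() - start
print(f"frame time {elapsed * 1e3:.3f} ms, budget {BUDGET_S * 1e3:.1f} ms, "
      f"real-time OK: {elapsed <= BUDGET_S}")
```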
The Journal of the Acoustical Society of America
Jun 1, 2020
Deep learning based speech separation or noise reduction needs to generalize to voices not encountered during training and to operate under multiple corruptions. The current study provides such a demonstration for hearing-impaired (HI) listeners. Sen...
The Journal of the Acoustical Society of America
Mar 1, 2019
For deep learning based speech segregation to have translational significance as a noise-reduction tool, it must perform in a wide variety of acoustic environments. In the current study, performance was examined when target speech was subjected to in...
This study investigated a method to adjust hearing-aid gain by use of a machine-learning algorithm that estimates the optimal setting of gain parameters based on user preference indicated in an iterative paired-comparison procedure. Twenty hearing-im...
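As a hedged sketch of how an iterative paired-comparison procedure can steer gain parameters toward a listener's preference, the loop below perturbs a candidate gain setting, asks a simulated listener to compare it with the current setting, and keeps whichever is preferred; the 3-band gain vector, step-size schedule, and simulated preference are illustrative assumptions, not the study's algorithm.

```python
# Hedged sketch of iterative paired-comparison tuning of hearing-aid gains:
# each round the listener compares two candidate settings and the preferred
# one becomes the new reference. The simulated listener, step-size schedule,
# and 3-band gain vector are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
true_pref = np.array([5.0, 12.0, 18.0])   # dB gains the listener "wants"
current = np.zeros(3)                     # starting gain setting (dB)

def listener_prefers(a, b):
    # Simulated preference: the setting closer to true_pref wins.
    return np.linalg.norm(a - true_pref) < np.linalg.norm(b - true_pref)

step = 6.0
for _ in range(40):
    candidate = current + rng.normal(scale=step, size=3)  # perturbed setting
    if listener_prefers(candidate, current):              # paired comparison
        current = candidate
    step *= 0.95                                          # shrink step size

print(np.round(current, 1))   # converges toward the preferred gains
```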
The Journal of the Acoustical Society of America
Sep 1, 2018
Recently, deep learning based speech segregation has been shown to improve human speech intelligibility in noisy environments. However, one important factor not yet considered is room reverberation, which characterizes typical daily environments. The...
Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Jul 1, 2018
The performance of a deep-learning-based speech enhancement (SE) technology for hearing aid users, called a deep denoising autoencoder (DDAE), was investigated. The hearing-aid speech perception index (HASPI) and the hearing-aid sound quality index ...
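For context on the DDAE idea, the sketch below trains a small denoising autoencoder to map noisy log-magnitude spectral frames to clean ones; the layer sizes, feature dimension, and synthetic training data are assumptions for illustration, not the system evaluated in the study.

```python
# Minimal sketch of a deep denoising autoencoder (DDAE) for speech
# enhancement: it maps noisy spectral frames to clean ones. Layer sizes,
# feature dimension, and the toy data are illustrative assumptions.
import torch
import torch.nn as nn

N_BINS = 257                               # e.g., 512-point STFT magnitude bins

ddae = nn.Sequential(                      # encoder-decoder on spectral frames
    nn.Linear(N_BINS, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),        # bottleneck
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, N_BINS),                # reconstructed clean frame
)

# Toy data standing in for (noisy, clean) spectral frame pairs.
clean = torch.randn(1024, N_BINS)
noisy = clean + 0.5 * torch.randn_like(clean)

opt = torch.optim.Adam(ddae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(ddae(noisy), clean)     # learn the noisy -> clean mapping
    loss.backward()
    opt.step()

# At inference, enhanced magnitudes would be recombined with the noisy phase
# and inverse-STFTed to produce the enhanced waveform.
```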
The Journal of the Acoustical Society of America
Jun 1, 2017
Individuals with hearing impairment have particular difficulty perceptually segregating concurrent voices and understanding a talker in the presence of a competing voice. In contrast, individuals with normal hearing perform this task quite well. This...