AIMC Topic: Hearing

Showing 1 to 10 of 36 articles

Behavior recognition technology based on deep learning used in pediatric behavioral audiometry.

Scientific reports
This study aims to explore the feasibility and accuracy of deep learning-based pediatric behavioral audiometry. The research provides a dedicated pediatric posture detection dataset, which contains a large number of video clips from children's behavi...

Structural brain pattern abnormalities in tinnitus with and without hearing loss.

Hearing research
OBJECTIVE: Subjective tinnitus often coexists with hearing loss, and they share common pathophysiological mechanisms. This comorbidity induces whole-brain gray matter volume (GMV) alterations, manifesting as distributed structural changes in neural n...

Use of AI methods to assessment of lower limb peak torque in deaf and hearing football players group.

Acta of bioengineering and biomechanics
Monitoring and assessing the level of lower limb motor skills using the Biodex System plays an important role in the training of football players and in post-traumatic rehabilitation. The aim of this study was to build and test an artificial intelli...

Prediction of hearing recovery with deep learning algorithm in sudden sensorineural hearing loss.

Scientific reports
This study aimed to establish a deep learning-based predictive model for the prognosis of idiopathic sudden sensorineural hearing loss (SSNHL). Data from 1108 patients with SSNHL between January 2015 and May 2023 were retrospectively analyzed. Patien...
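A minimal sketch of the kind of tabular prognosis model such a study typically trains, assuming a binary "hearing recovered" label and a handful of numeric clinical features; the feature set, architecture, and data below are placeholders, not the authors' model.

```python
# Hedged sketch only: random stand-in data with the same patient count as the
# abstract (1108), an invented binary recovery label, and a generic MLP classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1108, 12))      # 1108 patients, 12 hypothetical clinical features
y = rng.integers(0, 2, size=1108)    # hypothetical binary "hearing recovered" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy (random data, illustration only): {clf.score(X_te, y_te):.3f}")
```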

Synthetic Corpus Generation for Deep Learning-Based Translation of Spanish Sign Language.

Sensors (Basel, Switzerland)
Sign language serves as the primary mode of communication for the deaf community. With technological advancements, it is crucial to develop systems capable of enhancing communication between deaf and hearing individuals. This paper reviews recent sta...

Toward Generalizable Machine Learning Models in Speech, Language, and Hearing Sciences: Estimating Sample Size and Reducing Overfitting.

Journal of speech, language, and hearing research : JSLHR
PURPOSE: Many studies using machine learning (ML) in speech, language, and hearing sciences rely upon cross-validations with single data splitting. This study's first purpose is to provide quantitative evidence that would incentivize researchers to i...
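The methodological point in this snippet, relying on a single train/test split versus repeating the split, can be illustrated with a short scikit-learn sketch; the synthetic dataset and logistic-regression model are placeholders, not the study's materials.

```python
# Contrast a single split (one score, sensitive to which samples land in the test
# set) with repeated stratified k-fold cross-validation (a distribution of scores).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (RepeatedStratifiedKFold, cross_val_score,
                                     train_test_split)

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Single data split: one accuracy number.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
single = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Repeated 5-fold cross-validation: mean and spread across 50 evaluations.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

print(f"single split accuracy: {single:.3f}")
print(f"repeated 5-fold: mean={scores.mean():.3f}, sd={scores.std():.3f}")
```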

Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions.

PLoS biology
Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive model...
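The model-to-brain mapping approach this snippet refers to is commonly implemented as a regularized linear regression from one model stage's unit activations to measured brain responses, scored on held-out stimuli; the sketch below uses random arrays purely to show the shape of that analysis, not data or settings from the paper.

```python
# Illustrative sketch: fit a ridge regression from model activations to per-voxel
# responses, then report held-out prediction accuracy as a Pearson correlation.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

n_stimuli, n_units, n_voxels = 200, 512, 50
activations = np.random.randn(n_stimuli, n_units)   # one model stage's responses
brain = np.random.randn(n_stimuli, n_voxels)         # e.g., fMRI responses per voxel

A_tr, A_te, B_tr, B_te = train_test_split(activations, brain, test_size=0.25, random_state=0)
mapping = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(A_tr, B_tr)
pred = mapping.predict(A_te)

# Per-voxel correlation between predicted and held-out measured responses.
r = [np.corrcoef(pred[:, v], B_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out r across voxels: {np.median(r):.3f}")
```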

Deep Learning Models for Predicting Hearing Thresholds Based on Swept-Tone Stimulus-Frequency Otoacoustic Emissions.

Ear and hearing
OBJECTIVES: This study aims to develop deep learning (DL) models for the quantitative prediction of hearing thresholds based on stimulus-frequency otoacoustic emissions (SFOAEs) evoked by swept tones.
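Since the snippet gives only the objective, the sketch below merely illustrates one plausible shape of such a model: a small feed-forward regressor from a swept-tone SFOAE feature vector to per-frequency thresholds in dB HL. All dimensions and the architecture are assumptions, not the authors' design.

```python
# Hedged sketch of an SFOAE-to-threshold regression model with placeholder sizes.
import torch
import torch.nn as nn

n_sfoae_features = 128    # assumed length of a swept-tone SFOAE feature vector
n_audiogram_freqs = 8     # assumed number of audiometric frequencies predicted

model = nn.Sequential(
    nn.Linear(n_sfoae_features, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, n_audiogram_freqs),   # regression head: thresholds in dB HL
)
loss_fn = nn.MSELoss()

x = torch.randn(16, n_sfoae_features)             # mini-batch of SFOAE feature vectors
y = torch.randn(16, n_audiogram_freqs) * 20 + 20  # fake thresholds, for shape only
loss = loss_fn(model(x), y)
loss.backward()
print(float(loss))
```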

Artificial Neural Network-Assisted Classification of Hearing Prognosis of Sudden Sensorineural Hearing Loss With Vertigo.

IEEE journal of translational engineering in health and medicine
This study aimed to determine the impact on hearing prognosis of the coherent frequency with high magnitude-squared wavelet coherence (MSWC) in video head impulse test (vHIT) among patients with sudden sensorineural hearing loss with vertigo (SSNHLV)...

Polyphonic Sound Event Detection Using Temporal-Frequency Attention and Feature Space Attention.

Sensors (Basel, Switzerland)
The complexity of polyphonic sounds imposes numerous challenges on their classification. Especially in real life, polyphonic sound events have discontinuity and unstable time-frequency variations. Traditional single acoustic features cannot character...
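A hedged sketch of what "temporal-frequency attention" combined with "feature space attention" can look like on a spectrogram-shaped feature map: time and frequency gating followed by squeeze-and-excitation style channel weighting. This is an assumed, generic formulation, not the authors' exact blocks.

```python
# Attention over a (batch, channels, freq, time) feature map, e.g. log-mel features.
import torch
import torch.nn as nn

class TimeFreqAttention(nn.Module):
    """Reweight a feature map along the frequency and time axes."""
    def __init__(self, channels: int):
        super().__init__()
        self.freq_fc = nn.Conv2d(channels, channels, kernel_size=1)
        self.time_fc = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                                               # x: (B, C, F, T)
        freq_w = torch.sigmoid(self.freq_fc(x.mean(dim=3, keepdim=True)))  # (B, C, F, 1)
        time_w = torch.sigmoid(self.time_fc(x.mean(dim=2, keepdim=True)))  # (B, C, 1, T)
        return x * freq_w * time_w

class FeatureSpaceAttention(nn.Module):
    """Channel ("feature space") weighting in the spirit of squeeze-and-excitation."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, F, T)
        w = self.fc(x.mean(dim=(2, 3)))         # (B, C)
        return x * w[:, :, None, None]

# Example: attend over a feature map before a frame-level event classifier.
feats = torch.randn(8, 32, 64, 250)             # 8 clips, 32 channels, 64 mel bins, 250 frames
attended = FeatureSpaceAttention(32)(TimeFreqAttention(32)(feats))
print(attended.shape)                            # torch.Size([8, 32, 64, 250])
```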