AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Speech Production Measurement

Showing 1 to 10 of 39 articles


Automatic GRBAS Scoring of Pathological Voices using Deep Learning and a Small Set of Labeled Voice Data.

Journal of Voice: Official Journal of the Voice Foundation
OBJECTIVES: Auditory-perceptual evaluation frameworks, such as the grade-roughness-breathiness-asthenia-strain (GRBAS) scale, are the gold standard for the quantitative evaluation of pathological voice quality. However, the evaluation is subjective; ...

Different Performances of Machine Learning Models to Classify Dysphonic and Non-Dysphonic Voices.

Journal of Voice: Official Journal of the Voice Foundation
OBJECTIVE: To analyze the performance of 10 different machine learning (ML) classifiers for discrimination between dysphonic and non-dysphonic voices, using a variance threshold as a method for the selection and reduction of acoustic measurements use...
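
As a rough illustration of the variance-threshold step described in this abstract, the sketch below applies low-variance filtering to a matrix of acoustic measurements before fitting a classifier. This is not the paper's pipeline: the threshold, the SVM classifier, and the synthetic data are placeholder assumptions.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: rows are voice samples, columns are acoustic measurements (placeholders here).
# y: 1 = dysphonic, 0 = non-dysphonic (also synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 2, size=200)

# Drop near-constant acoustic measures before scaling and classification.
model = make_pipeline(
    VarianceThreshold(threshold=0.01),  # keep features whose variance exceeds the threshold
    StandardScaler(),
    SVC(kernel="rbf"),
)
model.fit(X, y)
print(model.score(X, y))
```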

TranStutter: A Convolution-Free Transformer-Based Deep Learning Method to Classify Stuttered Speech Using 2D Mel-Spectrogram Visualization and Attention-Based Feature Representation.

Sensors (Basel, Switzerland)
Stuttering, a prevalent neurodevelopmental disorder, profoundly affects fluent speech, causing involuntary interruptions and recurrent sound patterns. This study addresses the critical need for the accurate classification of stuttering types. The res...
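
A minimal sketch of the general idea named in the title (a convolution-free transformer over a 2D Mel-spectrogram) is shown below. The waveform is synthetic, and the encoder depth, mel settings, and the five output classes are illustrative assumptions, not the TranStutter architecture.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

# Synthetic 1-second waveform standing in for a stuttered-speech recording.
sr = 16000
y = np.random.default_rng(0).normal(size=sr).astype(np.float32)

# 2D Mel-spectrogram (mel bins x frames), log-scaled.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80, n_fft=400, hop_length=160)
log_mel = librosa.power_to_db(mel)
frames = torch.tensor(log_mel.T, dtype=torch.float32)  # (frames, 80)

# A generic convolution-free transformer encoder over spectrogram frames,
# mean-pooled into a classifier over stuttering types (5 classes is illustrative).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=80, nhead=4, batch_first=True),
    num_layers=2,
)
classifier = nn.Linear(80, 5)

with torch.no_grad():
    h = encoder(frames.unsqueeze(0))     # (1, frames, 80)
    logits = classifier(h.mean(dim=1))   # (1, 5)
print(logits.shape)
```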

Computing nasalance with MFCCs and Convolutional Neural Networks.

PLOS ONE
Nasalance is a valuable clinical biomarker for hypernasality. It is computed as the ratio of acoustic energy emitted through the nose to the total energy emitted through the mouth and nose (eNasalance). A new approach is proposed to compute nasalance...
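
The energy-based nasalance (eNasalance) ratio described in this abstract can be written directly as code. The function below is a minimal sketch assuming time-aligned nasal and oral waveforms; the inputs here are synthetic and the helper name is illustrative.

```python
import numpy as np

def enasalance(nasal: np.ndarray, oral: np.ndarray) -> float:
    """Energy-based nasalance: nasal energy over total (nasal + oral) energy.

    `nasal` and `oral` are time-aligned waveforms from the nasal and oral
    channels of a nasometer-style recording (placeholder inputs here).
    """
    e_nasal = float(np.sum(nasal.astype(np.float64) ** 2))
    e_oral = float(np.sum(oral.astype(np.float64) ** 2))
    return e_nasal / (e_nasal + e_oral)

# Synthetic example: a strongly nasal segment yields a high nasalance score.
rng = np.random.default_rng(0)
print(enasalance(0.8 * rng.normal(size=16000), 0.2 * rng.normal(size=16000)))
```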

Evaluating the consistency of lenition measures: Neural networks' posterior probability, intensity velocity, and duration.

The Journal of the Acoustical Society of America
Predictions of gradient degree of lenition of voiceless and voiced stops in a corpus of Argentine Spanish are evaluated using three acoustic measures (minimum and maximum intensity velocity and duration) and two recurrent neural network (Phonet) meas...
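
For the acoustic measures named here, the sketch below computes a frame-wise intensity contour and takes its first derivative. Reading "intensity velocity" as the rate of change of intensity in dB per second is an assumption on our part, and the synthetic segment stands in for a real stop token.

```python
import librosa
import numpy as np

# Synthetic segment standing in for a stop plus following vowel.
sr = 16000
t = np.arange(int(0.2 * sr)) / sr
y = (np.sin(2 * np.pi * 150 * t) * np.linspace(0.01, 1.0, t.size)).astype(np.float32)

hop = 80  # 5 ms hop
rms = librosa.feature.rms(y=y, frame_length=400, hop_length=hop)[0]
intensity_db = 20 * np.log10(np.maximum(rms, 1e-10))   # intensity contour (dB)
velocity = np.gradient(intensity_db, hop / sr)         # dB per second

print("min intensity velocity:", velocity.min())
print("max intensity velocity:", velocity.max())
print("duration (s):", y.size / sr)
```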

Artificial Intelligence-Assisted Speech Therapy for /ɹ/: A Single-Case Experimental Study.

American Journal of Speech-Language Pathology
PURPOSE: This feasibility trial describes changes in rhotic production in residual speech sound disorder following ten 40-min sessions including artificial intelligence (AI)-assisted motor-based intervention with ChainingAI, a version of Speech Motor...

Accuracy of Speech Sound Analysis: Comparison of an Automatic Artificial Intelligence Algorithm With Clinician Assessment.

Journal of Speech, Language, and Hearing Research (JSLHR)
PURPOSE: Automatic speech analysis (ASA) and automatic speech recognition systems are increasingly being used in the treatment of speech sound disorders (SSDs). When utilized as a home practice tool or in the absence of the clinician, the ASA system ...

Using articulatory feature detectors in progressive networks for multilingual low-resource phone recognition.

The Journal of the Acoustical Society of America
Systems inspired by progressive neural networks, transferring information from end-to-end articulatory feature detectors to similarly structured phone recognizers, are described. These networks, connecting the corresponding recurrent layers of pre-tr...
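
The progressive-network idea sketched in this abstract, a frozen pre-trained column feeding lateral connections into a new column of matching structure, can be illustrated as follows. Layer sizes, the GRU choice, and the class counts are assumptions for the sketch, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ProgressivePhoneRecognizer(nn.Module):
    """Sketch: a frozen articulatory-feature column feeds a lateral connection
    into a phone-recognizer column with the same recurrent structure."""

    def __init__(self, n_feats=40, hidden=128, n_phones=60):
        super().__init__()
        # Pre-trained articulatory-feature column (frozen during phone training).
        self.artic_rnn = nn.GRU(n_feats, hidden, batch_first=True)
        for p in self.artic_rnn.parameters():
            p.requires_grad = False

        # New phone-recognizer column plus a lateral adapter from the frozen column.
        self.phone_rnn = nn.GRU(n_feats, hidden, batch_first=True)
        self.lateral = nn.Linear(hidden, hidden)
        self.phone_out = nn.Linear(hidden, n_phones)

    def forward(self, x):                    # x: (batch, time, n_feats)
        h_artic, _ = self.artic_rnn(x)       # frozen detector activations
        h_phone, _ = self.phone_rnn(x)
        h = h_phone + self.lateral(h_artic)  # lateral transfer between columns
        return self.phone_out(h)             # frame-wise phone logits

logits = ProgressivePhoneRecognizer()(torch.randn(2, 100, 40))
print(logits.shape)  # (2, 100, 60)
```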

The machine learning-based prediction of the sound pressure level from pathological and healthy speech signals.

The Journal of the Acoustical Society of America
Vocal intensity is quantified by sound pressure level (SPL). The SPL can be measured either with a sound level meter or by comparing the energy of the recorded speech signal with the energy of a recorded calibration tone of known SPL. Neither...
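
The calibration-tone comparison mentioned in this abstract amounts to a simple energy ratio in decibels; a minimal sketch is below. The function name and the synthetic signals are placeholders, not the paper's ML-based method.

```python
import numpy as np

def spl_from_calibration(speech: np.ndarray, cal_tone: np.ndarray, cal_spl_db: float) -> float:
    """Estimate speech SPL by comparing mean-square energy with a calibration
    tone of known SPL recorded through the same chain."""
    e_speech = np.mean(speech.astype(np.float64) ** 2)
    e_cal = np.mean(cal_tone.astype(np.float64) ** 2)
    return cal_spl_db + 10 * np.log10(e_speech / e_cal)

# Synthetic example: a signal at roughly twice the amplitude of an 80 dB
# calibration tone comes out about 6 dB higher.
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 1000 * np.arange(16000) / 16000)
speech = 2.0 * tone * rng.uniform(0.9, 1.1, size=16000)
print(round(spl_from_calibration(speech, tone, 80.0), 1))
```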