AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Sound Spectrography

Showing 51 to 60 of 68 articles

Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces.

PLoS Computational Biology
Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer producing intelligi...

Bird sound spectrogram decomposition through Non-Negative Matrix Factorization for the acoustic classification of bird species.

PLoS ONE
Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the ...
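
The decomposition named in this abstract, Non-Negative Matrix Factorization, splits a magnitude spectrogram V (frequency × time) into non-negative spectral templates W and activations H with V ≈ WH. A minimal NumPy sketch using the standard multiplicative-update rule and a toy matrix in place of a real bird-song spectrogram (the variable names and toy data are illustrative, not from the article):

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0):
    """Factor a non-negative matrix V (freq x time) into W (freq x k)
    spectral templates and H (k x time) activations, V ~= W @ H,
    via Lee & Seung multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, k)) + 1e-3
    H = rng.random((k, T)) + 1e-3
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy "spectrogram": two fixed spectral shapes switching on/off over time
true_W = np.array([[1.0, 0.0], [0.5, 0.2], [0.0, 1.0]])
true_H = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
V = true_W @ true_H
W, H = nmf(V, k=2)
print(W.shape, H.shape)  # (3, 2) (2, 4)
```

The columns of W serve as learned spectral features; in an ABSC pipeline the activations H (or statistics of them) would feed the downstream classifier.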

Speaker-dependent multipitch tracking using deep neural networks.

The Journal of the Acoustical Society of America
Multipitch tracking is important for speech and signal processing. However, it is challenging to design an algorithm that achieves accurate pitch estimation and correct speaker assignment at the same time. In this paper, deep neural networks (DNNs) a...

An intelligent artificial throat with sound-sensing ability based on laser induced graphene.

Nature communications
Traditional sound sources and sound detectors are usually independent and discrete in the human hearing range. To minimize device size and integrate it with wearable electronics, there is an urgent need to realize the functional integrat...

Predicting the perception of performed dynamics in music audio with ensemble learning.

The Journal of the Acoustical Society of America
By varying the dynamics in a musical performance, the musician can convey structure and different expressions. Spectral properties of most musical instruments change in a complex way with the performed dynamics, but dedicated audio features for model...

Restoring speech following total removal of the larynx by a learned transformation from sensor data to acoustics.

The Journal of the Acoustical Society of America
Total removal of the larynx may be required to treat laryngeal cancer: speech is lost. This article shows that it may be possible to restore speech by sensing movement of the remaining speech articulators and using machine learning algorithms to derive...

Estimating the spectral tilt of the glottal source from telephone speech using a deep neural network.

The Journal of the Acoustical Society of America
Estimation of the spectral tilt of the glottal source has several applications in speech analysis and modification. However, direct estimation of the tilt from telephone speech is challenging due to vocal tract resonances and distortion caused by spe...
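
Spectral tilt, the overall slope of the source spectrum, is classically summarized as dB per octave. As a point of reference for the DNN approach in this article, a simple non-neural baseline fits a line to the log-magnitude spectrum against log2(frequency); the sketch below is such a baseline, not the paper's method, and the test signal is a synthetic harmonic tone whose partials fall off at −6 dB/octave:

```python
import numpy as np

def spectral_tilt_db_per_octave(x, fft_bins):
    """Baseline spectral-tilt estimate: slope of a least-squares line fit
    to the log-magnitude spectrum (dB) against log2(frequency), evaluated
    at the given harmonic FFT bins."""
    spec = np.abs(np.fft.rfft(x))
    mag_db = 20 * np.log10(spec[fft_bins] + 1e-12)
    slope, _ = np.polyfit(np.log2(fft_bins.astype(float)), mag_db, 1)
    return slope

sr = 16000
t = np.arange(sr) / sr  # exactly 1 s of signal, so FFT bin k sits at k Hz
# harmonic test tone: partial amplitudes fall off as 1/f, i.e. -6 dB/octave
x = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in range(1, 20))
harmonic_bins = 200 * np.arange(1, 20)
tilt = spectral_tilt_db_per_octave(x, harmonic_bins)
print(round(tilt, 1))  # -6.0
```

On real telephone speech this naive fit is confounded by vocal tract resonances and channel distortion, which is exactly the difficulty the abstract's DNN estimator is meant to address.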

Pavement type and wear condition classification from tire cavity acoustic measurements with artificial neural networks.

The Journal of the Acoustical Society of America
Tire road noise is the major contributor to traffic noise, which leads to general annoyance, speech interference, and sleep disturbances. Standardized methods to measure tire road noise are expensive, sophisticated to use, and they cannot be applied ...

Auditory feature representation using convolutional restricted Boltzmann machine and Teager energy operator for speech recognition.

The Journal of the Acoustical Society of America
In this letter, the authors propose an auditory feature representation technique with the filterbank learned using an annealing dropout convolutional restricted Boltzmann machine (ConvRBM) and noise-robust energy estimation using the Teager energy operat...
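
The Teager energy operator named here has a simple discrete form, ψ[x](n) = x(n)² − x(n−1)·x(n+1), which for a pure sinusoid A·cos(Ωn) yields the constant A²·sin²(Ω), a quantity sensitive to both amplitude and frequency. A minimal NumPy sketch of the operator itself (the ConvRBM filterbank is not reproduced here):

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1).
    Output is two samples shorter than the input (no boundary samples)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

omega = 0.2  # radians per sample
n = np.arange(1000)
x = np.cos(omega * n)
psi = teager_energy(x)
# for a unit-amplitude sinusoid, psi is exactly sin(omega)^2 at every sample
print(np.allclose(psi, np.sin(omega) ** 2))  # True
```

In the letter's pipeline this operator replaces plain squared energy in the subband feature computation, which is what gives the representation its noise robustness.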