AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Sound Spectrography

Showing 11 to 20 of 68 articles


A Speech Recognition Method Based on Domain-Specific Datasets and Confidence Decision Networks.

Sensors (Basel, Switzerland)
This paper proposes a speech recognition method based on a domain-specific language speech network (DSL-Net) and a confidence decision network (CD-Net). The method automatically constructs a domain-specific dataset for training, using pre-trained model par...

Using deep learning to track time × frequency whistle contours of toothed whales without human-annotated training data.

The Journal of the Acoustical Society of America
Many odontocetes produce whistles that feature characteristic contour shapes in spectrogram representations of their calls. Automatically extracting the time × frequency tracks of whistle contours has numerous subsequent applications, including speci...
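The contour-extraction task this entry describes can be illustrated, in a much-simplified form, by tracking the peak frequency in each spectrogram frame. The sketch below uses a synthetic chirp as a stand-in for a whistle; all parameters are illustrative assumptions, not the paper's deep-learning method:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic "whistle": a linear chirp rising from 2 kHz to 6 kHz over 1 s.
sr = 32000
t = np.arange(sr) / sr
f0, f1 = 2000.0, 6000.0
y = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / 2))

# Short-time power spectrogram: rows are frequency bins, columns are frames.
f, times, S = spectrogram(y, fs=sr, nperseg=512, noverlap=256)

# Naive contour: the peak-energy frequency in each time frame.
contour = f[np.argmax(S, axis=0)]
print(contour[0], contour[-1])  # rises roughly from 2 kHz toward 6 kHz
```

A real extractor must also handle overlapping whistles, gaps, and noise, which is where the learned approach in the article comes in; per-frame peak picking is only the baseline idea.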

Prediction of Arteriovenous Access Dysfunction by Mel Spectrogram-based Deep Learning Model.

International Journal of Medical Sciences
The early detection of arteriovenous (AV) access dysfunction is crucial for maintaining the patency of vascular access. This study aimed to use deep learning to predict AV access malfunction necessitating further vascular management. This prospecti...
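Several entries on this page feed mel spectrograms to deep learning models. As a rough, self-contained illustration of that representation, here is a minimal NumPy sketch of a mel spectrogram (the filter-bank construction and all parameter choices are illustrative assumptions, not taken from any of the papers):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters with centers equally spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)  # rising edge
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)  # falling edge
    return fb

def mel_spectrogram(y, sr, n_fft=512, hop=256, n_mels=40):
    # Frame the signal, window, FFT to power, then project onto mel filters.
    window = np.hanning(n_fft)
    frames = [y[s:s + n_fft] * window
              for s in range(0, len(y) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return 10.0 * np.log10(mel + 1e-10)  # log-compress to dB

# Usage: one second of a 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
S = mel_spectrogram(np.sin(2 * np.pi * 440.0 * t), sr)
print(S.shape)  # (frames, n_mels)
```

The resulting (frames × mel bands) matrix is what models like the ones above treat as an image input; production pipelines typically use a tuned library implementation rather than a hand-rolled filter bank.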

Deep transfer learning-based bird species classification using mel spectrogram images.

PLOS ONE
The classification of bird species is of significant importance in the field of ornithology, as it plays an important role in assessing and monitoring environmental dynamics, including habitat modifications, migratory behaviors, levels of pollution, ...

Accelerated construction of stress relief music datasets using CNN and the Mel-scaled spectrogram.

PLOS ONE
Listening to music is an effective way to relieve stress and promote relaxation. However, the limited selection of stress-relief music does not cater to individual preferences, compromising its effectiveness. Traditional methods of curating...

Spectro-temporal acoustical markers differentiate speech from song across cultures.

Nature Communications
Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro...

A Robust Deep Learning Framework Based on Spectrograms for Heart Sound Classification.

IEEE/ACM Transactions on Computational Biology and Bioinformatics
Heart sound analysis plays an important role in the early detection of heart disease. However, manual detection requires doctors with extensive clinical experience, which increases uncertainty for the task, especially in medically underdeveloped areas. This...

Bird song comparison using deep learning trained from avian perceptual judgments.

PLOS Computational Biology
Our understanding of bird song, a model system for animal communication and the neurobiology of learning, depends critically on making reliable, validated comparisons between the complex multidimensional syllables that are used in songs. However, mos...

Automated detection of Bornean white-bearded gibbon (Hylobates albibarbis) vocalizations using an open-source framework for deep learning.

The Journal of the Acoustical Society of America
Passive acoustic monitoring is a promising tool for monitoring at-risk populations of vocal species, yet extracting relevant information from large acoustic datasets can be time-consuming, creating a bottleneck at the point of analysis. To address t...

The development of deep convolutional generative adversarial network to synthesize odontocetes' clicks.

The Journal of the Acoustical Society of America
Odontocetes are capable of dynamically changing their echolocation clicks to efficiently detect targets, and learning their clicking strategy can facilitate the design of man-made detection signals. In this study, we developed deep convolutional gene...