AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Vocalization, Animal

Showing 31 to 40 of 60 articles


Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks.

PLoS ONE
Ultrasonic vocalizations (USVs) analysis is a well-recognized tool to investigate animal communication. It can be used for behavioral phenotyping of murine models of different disorders. The USVs are usually recorded with a microphone sensitive to ul...
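Classification pipelines like the one this entry describes typically begin by converting raw audio into a time-frequency spectrogram before any CNN sees it. The sketch below shows only that first step with plain NumPy; it is illustrative, not the paper's implementation, and the 70 kHz test tone and all window parameters are arbitrary placeholders (70 kHz simply falls in the ultrasonic band typical of mouse USVs):

```python
import numpy as np

def spectrogram(signal, fs, win_len=256, hop=128):
    """Compute a magnitude spectrogram via a Hann-windowed STFT.

    Returns (freqs, times, mags) where mags has shape
    (win_len // 2 + 1, n_frames).
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + win_len] * window
        for i in range(n_frames)
    ])
    mags = np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, n_frames)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    times = (np.arange(n_frames) * hop + win_len / 2) / fs
    return freqs, times, mags

# Synthetic "vocalization": a 70 kHz tone sampled at 250 kHz.
fs = 250_000
t = np.arange(0, 0.05, 1.0 / fs)
call = np.sin(2 * np.pi * 70_000 * t)

freqs, times, mags = spectrogram(call, fs)
peak_freq = freqs[mags.mean(axis=1).argmax()]
print(peak_freq)  # dominant frequency near 70 kHz
```

In a full pipeline, `mags` (usually log-scaled) would be the image fed to the convolutional classifier.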

Analysis of ultrasonic vocalizations from mice using computer vision and machine learning.

eLife
Mice emit ultrasonic vocalizations (USVs) that communicate socially relevant information. To detect and classify these USVs, here we describe VocalMat, a software tool that uses image-processing and differential geometry approaches to detect U...

Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires.

eLife
Increases in the scale and complexity of behavioral data pose an increasing challenge for data analysis. A common strategy involves replacing entire behaviors with small numbers of handpicked, domain-specific features, but this approach suffers from ...

Comparing recurrent convolutional neural networks for large scale bird species classification.

Scientific Reports
We present a deep learning approach towards the large-scale prediction and analysis of bird acoustics from 100 different bird species. We use spectrograms constructed on bird audio recordings from the Cornell Bird Challenge (CBC) 2020 dataset, which i...

Bioacoustic classification of avian calls from raw sound waveforms with an open-source deep learning architecture.

Scientific Reports
The use of autonomous recordings of animal sounds to detect species is a popular conservation tool, constantly improving in fidelity as audio hardware and software evolve. Current classification algorithms utilise sound features extracted from the r...

Function of a multimodal signal: A multiple hypothesis test using a robot frog.

The Journal of Animal Ecology
Multimodal communication may evolve because different signals may convey information about the signaller (content-based selection), increase efficacy of signal processing or transmission through the environment (efficacy-based selection), or modify t...

Measuring context dependency in birdsong using artificial neural networks.

PLoS Computational Biology
Context dependency is a key feature in sequential structures of human language, which requires reference between words far apart in the produced sequence. Assessing how long the past context has an effect on the current status provides crucial inform...
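One simple way to quantify how far back context matters in a symbol sequence is to estimate the mutual information between symbols separated by a given lag. This is offered as a toy sketch, not the study's actual neural-network method; the alternating "song" is an invented example in which each syllable is fully determined by the previous one, so the lag-1 MI equals one bit:

```python
from collections import Counter
import math

def mutual_information(seq, lag):
    """Estimate MI (in bits) between symbols `lag` positions apart."""
    pairs = list(zip(seq, seq[lag:]))
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p_ab * log2(p_ab / (p_a * p_b)), with counts folded in
        mi += p_ab * math.log2(p_ab * n * n / (px[a] * py[b]))
    return mi

# Alternating song "ABAB...A": the next syllable is fully
# predictable from the current one.
song = "AB" * 50 + "A"
print(round(mutual_information(song, 1), 3))  # → 1.0
```

Plotting this quantity against increasing lag is one crude way to see how quickly contextual dependency decays in a real song corpus.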

BioCPPNet: automatic bioacoustic source separation with deep neural networks.

Scientific Reports
We introduce the Bioacoustic Cocktail Party Problem Network (BioCPPNet), a lightweight, modular, and robust U-Net-based machine learning architecture optimized for bioacoustic source separation across diverse biological taxa. Employing learnable or h...

Automated annotation of birdsong with a neural network that segments spectrograms.

eLife
Songbirds provide a powerful model system for studying sensory-motor learning. However, many analyses of birdsong require time-consuming, manual annotation of its elements, called syllables. Automated methods for annotation have been proposed, but th...
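At its simplest, segmenting song into syllables reduces to finding contiguous stretches where acoustic energy exceeds a threshold. The following is a minimal, hypothetical sketch of that thresholding step on an amplitude envelope, not the neural-network segmenter this entry describes; the envelope values and threshold are made up:

```python
import numpy as np

def segment_syllables(envelope, threshold):
    """Return (onset, offset) index pairs where the amplitude
    envelope stays above `threshold` (offset is exclusive)."""
    above = envelope > threshold
    edges = np.diff(above.astype(int))
    onsets = list(np.where(edges == 1)[0] + 1)
    offsets = list(np.where(edges == -1)[0] + 1)
    if above[0]:                     # segment starts at the first sample
        onsets.insert(0, 0)
    if above[-1]:                    # segment runs to the last sample
        offsets.append(len(envelope))
    return [(int(a), int(b)) for a, b in zip(onsets, offsets)]

# Toy amplitude envelope containing two bursts ("syllables")
env = np.array([0, 0, 5, 6, 5, 0, 0, 0, 4, 7, 0, 0], dtype=float)
print(segment_syllables(env, threshold=1.0))  # → [(2, 5), (8, 10)]
```

Neural segmenters replace the fixed threshold with learned per-timebin labels, which is what makes them robust to noise and amplitude variation.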

Design of a robotic zebra finch for experimental studies on developmental song learning.

The Journal of Experimental Biology
Birdsong learning has been consolidated as the model system of choice for exploring the biological substrates of vocal learning. In the zebra finch (Taeniopygia guttata), only males sing and they develop their song during a sensitive period in early ...