AI Medical Compendium Topic

Explore the latest research on artificial intelligence and machine learning in medicine.

Acoustic Stimulation

Showing 21 to 30 of 60 articles

Responses of midbrain auditory neurons to two different environmental sounds - A new approach on cross-sound modeling.

Biosystems
When modeling auditory responses to environmental sounds, results are satisfactory if both training and testing are restricted to datasets of one type of sound. To predict 'cross-sound' responses (i.e., to predict the response to one type of sound e....

Detection of early reflections from a binaural activity map using neural networks.

The Journal of the Acoustical Society of America
Human listeners localize sounds to their sources despite competing directional cues from early room reflections. Binaural activity maps computed from a running signal can provide useful information about the presence of room reflections, but must be ...

Evaluating the Potential Gain of Auditory and Audiovisual Speech-Predictive Coding Using Deep Learning.

Neural Computation
Sensory processing is increasingly conceived in a predictive framework in which neurons would constantly process the error signal resulting from the comparison of expected and observed stimuli. Surprisingly, few data exist on the accuracy of predicti...

Artificial Neural Networks Solve Musical Problems With Fourier Phase Spaces.

Scientific Reports
How does the brain represent musical properties? Even with our growing understanding of the cognitive neuroscience of music, the answer to this question remains unclear. One method for conceiving possible representations is to use artificial neural n...

Linear versus deep learning methods for noisy speech separation for EEG-informed attention decoding.

Journal of Neural Engineering
OBJECTIVE: A hearing aid's noise reduction algorithm cannot infer which speaker the user intends to listen to. Auditory attention decoding (AAD) algorithms can infer this information from neural signals, which leads to the concept of neuro-st...

DMMAN: A two-stage audio-visual fusion framework for sound separation and event localization.

Neural Networks: the official journal of the International Neural Network Society
Videos are widely used as a medium through which people perceive physical changes in the world. However, we often receive mixed sound from multiple sound objects and cannot distinguish or localize the sounds as separate entit...

Hierarchical Learning of Statistical Regularities over Multiple Timescales of Sound Sequence Processing: A Dynamic Causal Modeling Study.

Journal of Cognitive Neuroscience
Our understanding of the sensory environment is contextualized on the basis of prior experience. Measurement of auditory ERPs provides insight into automatic processes that contextualize the relevance of sound as a function of how sequences change ov...

Predicting subclinical psychotic-like experiences on a continuum using machine learning.

NeuroImage
Previous studies applying machine learning methods to psychosis have primarily been concerned with the binary classification of chronic schizophrenia patients and healthy controls. The aim of this study was to use electroencephalographic (EEG) data a...

Interpreting wide-band neural activity using convolutional neural networks.

eLife
Rapid progress in technologies such as calcium imaging and electrophysiology has seen a dramatic increase in the size and extent of neural recordings. Even so, interpretation of this data requires considerable knowledge about the nature of the repres...

Deep neural network models reveal interplay of peripheral coding and stimulus statistics in pitch perception.

Nature Communications
Perception is thought to be shaped by the environments for which organisms are optimized. These influences are difficult to test in biological organisms but may be revealed by machine perceptual systems optimized under different conditions. We invest...