AIMC Topic: Acoustic Stimulation

Showing 11 to 20 of 62 articles

Convolutional neural networks can identify brain interactions involved in decoding spatial auditory attention.

PLoS computational biology
Human listeners have the ability to direct their attention to a single speaker in a multi-talker environment. The neural correlates of selective attention can be decoded from a single trial of electroencephalography (EEG) data. In this study, leverag...

DGSD: Dynamical graph self-distillation for EEG-based auditory spatial attention detection.

Neural networks: the official journal of the International Neural Network Society
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment. Although EEG-based AAD methods have shown promising results in recent years, current approaches primarily rely on traditional conv...

Classification of Targets and Distractors in an Audiovisual Attention Task Based on Electroencephalography.

Sensors (Basel, Switzerland)
Within the broader context of improving interactions between artificial intelligence and humans, the question has arisen regarding whether auditory and rhythmic support could increase attention for visual stimuli that do not stand out clearly from an...

Deep Learning Models for Predicting Hearing Thresholds Based on Swept-Tone Stimulus-Frequency Otoacoustic Emissions.

Ear and hearing
OBJECTIVES: This study aims to develop deep learning (DL) models for the quantitative prediction of hearing thresholds based on stimulus-frequency otoacoustic emissions (SFOAEs) evoked by swept tones.

Deep neural network models of sound localization reveal how perception is adapted to real-world environments.

Nature human behaviour
Mammals localize sounds using information from their two ears. Localization in real-world conditions is challenging, as echoes provide erroneous information and noises mask parts of target sounds. To better understand real-world localization, we equi...

Deep neural network models reveal interplay of peripheral coding and stimulus statistics in pitch perception.

Nature communications
Perception is thought to be shaped by the environments for which organisms are optimized. These influences are difficult to test in biological organisms but may be revealed by machine perceptual systems optimized under different conditions. We invest...

Interpreting wide-band neural activity using convolutional neural networks.

eLife
Rapid progress in technologies such as calcium imaging and electrophysiology has seen a dramatic increase in the size and extent of neural recordings. Even so, interpretation of this data requires considerable knowledge about the nature of the repres...

Predicting subclinical psychotic-like experiences on a continuum using machine learning.

NeuroImage
Previous studies applying machine learning methods to psychosis have primarily been concerned with the binary classification of chronic schizophrenia patients and healthy controls. The aim of this study was to use electroencephalographic (EEG) data a...

DMMAN: A two-stage audio-visual fusion framework for sound separation and event localization.

Neural networks: the official journal of the International Neural Network Society
Videos are widely used as a medium through which people perceive physical changes in the world. However, we typically receive a mixture of sounds from multiple sound objects, and cannot distinguish and localize these sounds as separate entit...

Linear versus deep learning methods for noisy speech separation for EEG-informed attention decoding.

Journal of neural engineering
OBJECTIVE: A hearing aid's noise reduction algorithm cannot infer to which speaker the user intends to listen. Auditory attention decoding (AAD) algorithms allow to infer this information from neural signals, which leads to the concept of neuro-st...