AIMC Topic: Attention

Showing 11 to 20 of 587 articles

Attention-based hybrid deep learning model with CSFOA optimization and G-TverskyUNet3+ for Arabic sign language recognition.

Scientific reports
Arabic sign language (ArSL) is a visual-manual language that facilitates communication among Deaf people in Arabic-speaking nations. Recognizing ArSL is crucial for a variety of reasons, including its impact on the Deaf populace, education,...

Shared and distinct neural signatures of feature and spatial attention.

NeuroImage
The debate on whether feature attention (FA) and spatial attention (SA) share a common neural mechanism remains unresolved. Previous neuroimaging studies have identified fronto-parietal-temporal attention-related regions that exhibited consistent act...

Smartphone eye-tracking with deep learning: Data quality and field testing.

Behavior research methods
Eye-tracking is widely used to measure human attention in research, commercial, and clinical applications. With the rapid advancements in artificial intelligence and mobile computing, deep learning algorithms for computer vision-based eye tracking ha...

Emergence of human-like attention and distinct head clusters in self-supervised vision transformers: A comparative eye-tracking study.

Neural networks : the official journal of the International Neural Network Society
Visual attention models aim to predict human gaze behavior, yet traditional saliency models and deep gaze prediction networks face limitations. Saliency models rely on handcrafted low-level visual features, often failing to capture human gaze dynamic...

S2LIC: Learned image compression with the SwinV2 block, Adaptive Channel-wise and Global-inter attention Context.

Neural networks : the official journal of the International Neural Network Society
Recently, deep learning technology has been successfully applied in the field of image compression, leading to superior rate-distortion performance. It is crucial to design an effective and efficient entropy model to estimate the probability distribu...
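An entropy model of the kind this abstract refers to assigns each quantized latent a probability, and the estimated bit-rate is the negative log-likelihood under that model. Below is a minimal sketch using a discretized Gaussian, a common choice in learned compression; the mean/scale values are placeholders for whatever a context model would predict, not S2LIC's actual entropy model.

```python
import numpy as np
from scipy.stats import norm

def estimated_bits(y, mu, sigma):
    """Estimate the coding cost of latents y under a discretized Gaussian.

    Each latent is rounded to an integer, and its probability is the Gaussian
    mass on the unit interval around that integer; rate = -log2(probability).
    """
    y_hat = np.round(y)                                   # quantization
    upper = norm.cdf(y_hat + 0.5, loc=mu, scale=sigma)
    lower = norm.cdf(y_hat - 0.5, loc=mu, scale=sigma)
    p = np.clip(upper - lower, 1e-9, 1.0)                 # avoid log(0)
    return -np.log2(p).sum()

# Toy usage: 1000 latents whose predicted mean/scale would come from a context model.
rng = np.random.default_rng(0)
y = rng.normal(scale=2.0, size=1000)
mu = np.zeros_like(y)          # placeholder predictions
sigma = np.full_like(y, 2.0)
print(f"estimated rate: {estimated_bits(y, mu, sigma):.1f} bits")
```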

Multi-agent self-attention reinforcement learning for multi-USV hunting target.

Neural networks : the official journal of the International Neural Network Society
A reinforcement learning (RL) method based on the multi-head self-attention (MSA) mechanism is proposed to address the challenge of multiple unmanned surface vehicles (multi-USV) hunting a target on the surface. The kinematic, dynamic, and environmental ...
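The multi-head self-attention block named in this abstract is the standard scaled dot-product formulation. Here is a minimal NumPy sketch of it; the token count, dimensions, and weight shapes are illustrative assumptions, not the paper's agent architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """Scaled dot-product self-attention over a set of input tokens.

    X  : (n_tokens, d_model) input features (e.g., one token per agent).
    Wq, Wk, Wv : (d_model, d_model) projection weights.
    Wo : (d_model, d_model) output projection.
    """
    n, d_model = X.shape
    d_head = d_model // num_heads
    # Project and split into heads: (num_heads, n_tokens, d_head)
    Q = (X @ Wq).reshape(n, num_heads, d_head).transpose(1, 0, 2)
    K = (X @ Wk).reshape(n, num_heads, d_head).transpose(1, 0, 2)
    V = (X @ Wv).reshape(n, num_heads, d_head).transpose(1, 0, 2)
    # Attention weights: how much each token attends to every other token.
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, n, n)
    attn = softmax(scores, axis=-1)
    out = attn @ V                                         # (heads, n, d_head)
    # Merge heads and apply the output projection.
    out = out.transpose(1, 0, 2).reshape(n, d_model)
    return out @ Wo

# Toy usage: 4 agents, 32-dim features, 4 heads.
rng = np.random.default_rng(0)
d = 32
X = rng.normal(size=(4, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
print(multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads=4).shape)  # (4, 32)
```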

Single-microphone deep envelope separation based auditory attention decoding for competing speech and music.

Journal of neural engineering
In this study, we introduce an end-to-end single-microphone deep learning system for source separation and auditory attention decoding (AAD) in a competing speech and music setup. Deep source separation is applied directly to the envelope of the obse...
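Envelope-domain processing of the kind described above starts from the amplitude envelope of the audio signal. Below is a minimal sketch of envelope extraction via the analytic signal; the low-pass cutoff, filter order, and sampling rate are assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(x, fs, lowpass_hz=8.0, order=4):
    """Extract a low-frequency amplitude envelope from a single-channel signal.

    x  : 1-D audio waveform.
    fs : sampling rate in Hz.
    The envelope is the magnitude of the analytic signal, low-pass filtered
    to the slow modulations typically used in auditory attention decoding.
    """
    analytic = hilbert(x)                   # analytic signal via Hilbert transform
    env = np.abs(analytic)                  # instantaneous amplitude
    b, a = butter(order, lowpass_hz / (fs / 2), btype="low")
    return filtfilt(b, a, env)              # zero-phase low-pass filtering

# Toy usage: an amplitude-modulated tone at 16 kHz.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 440 * t)
modulator = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # 4 Hz modulation
env = amplitude_envelope(carrier * modulator, fs)
print(env.shape)  # (16000,)
```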

SMANet: A Model Combining SincNet, Multi-Branch Spatial-Temporal CNN, and Attention Mechanism for Motor Imagery BCI.

IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society
Building a brain-computer interface (BCI) based on motor imagery (MI) requires accurately decoding MI tasks, which poses a significant challenge due to individual discrepancies among subjects and the low signal-to-noise ratio of EEG signals. We propose an ...
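SincNet, one of the components named in this title, replaces free-form convolution kernels with band-pass filters parameterized only by their two cutoff frequencies. A rough NumPy sketch of one such kernel is below; the cutoffs, kernel length, and Hamming window are illustrative assumptions, not SMANet's configuration.

```python
import numpy as np

def sinc_bandpass_kernel(f1_hz, f2_hz, fs, kernel_len=129):
    """Build a band-pass filter kernel from two cutoff frequencies, in the
    spirit of SincNet's parameterized first layer.

    The kernel is the difference of two ideal low-pass (sinc) responses,
    smoothed with a Hamming window; only f1 and f2 would be learned.
    """
    f1 = f1_hz / fs                       # normalized cutoffs
    f2 = f2_hz / fs
    n = np.arange(kernel_len) - (kernel_len - 1) / 2
    h = 2 * f2 * np.sinc(2 * f2 * n) - 2 * f1 * np.sinc(2 * f1 * n)
    return h * np.hamming(kernel_len)     # window to reduce ripple

# Toy usage: an 8-30 Hz band (a typical motor-imagery EEG range) at 250 Hz.
kernel = sinc_bandpass_kernel(8.0, 30.0, fs=250)
eeg_channel = np.random.default_rng(0).normal(size=1000)
filtered = np.convolve(eeg_channel, kernel, mode="same")
print(filtered.shape)  # (1000,)
```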

Self-supervised spatial-temporal contrastive network for EEG-based brain network classification.

Neural networks : the official journal of the International Neural Network Society
Electroencephalogram (EEG)-based brain network analysis has shown promise in brain disease research by revealing the complex connectivity among brain regions. However, existing methods struggle to fully utilize the large amounts of unlabeled data to ...
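Self-supervised contrastive pre-training on unlabeled data, as gestured at above, typically pulls together embeddings of two augmented views of the same segment and pushes apart all others. Below is a minimal NumPy sketch of such an InfoNCE-style loss; the embedding source and temperature are assumptions, not the paper's exact objective.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two views of the same batch.

    z1, z2 : (batch, dim) embeddings of two augmentations of the same segments.
    Row i of z1 should match row i of z2 and mismatch every other row.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # cosine similarity
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature                    # (batch, batch)
    # Softmax cross-entropy with the diagonal entries as the positive pairs.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage: 8 EEG segments embedded into 16 dimensions by some encoder.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
noise = 0.1 * rng.normal(size=(8, 16))
print(f"loss: {info_nce_loss(z, z + noise):.3f}")
```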

MSCViT: A small-size ViT architecture with multi-scale self-attention mechanism for tiny datasets.

Neural networks : the official journal of the International Neural Network Society
Vision Transformer (ViT) has demonstrated significant potential in various vision tasks due to its strong ability to model long-range dependencies. However, such success is largely fueled by training on massive numbers of samples. In real applications, the l...
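The long-range modeling this abstract credits to ViT begins with cutting an image into fixed-size patches and projecting each into a token before self-attention is applied. Here is a minimal patch-embedding sketch; the patch size, embedding dimension, and random projection are illustrative, not MSCViT's actual configuration.

```python
import numpy as np

def patch_embed(image, patch_size=8, embed_dim=64, rng=None):
    """Split an image into non-overlapping patches and project them to tokens.

    image : (H, W, C) array with H and W divisible by patch_size.
    Returns a (num_patches, embed_dim) token matrix ready for self-attention.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    H, W, C = image.shape
    ph = pw = patch_size
    # Rearrange into (num_patches, patch_size * patch_size * C).
    patches = (image.reshape(H // ph, ph, W // pw, pw, C)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(-1, ph * pw * C))
    projection = rng.normal(scale=0.02, size=(ph * pw * C, embed_dim))
    return patches @ projection

# Toy usage: a 32x32 RGB image becomes 16 tokens of dimension 64.
tokens = patch_embed(np.random.default_rng(1).normal(size=(32, 32, 3)))
print(tokens.shape)  # (16, 64)
```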