AIMC Topic: Attention

Showing 121 to 130 of 573 articles

Preparatory activity of anterior insula predicts conflict errors: integrating convolutional neural networks and neural mass models.

Scientific reports
Preparatory brain activity is a cornerstone of proactive cognitive control, a top-down process that optimizes attention, perception, and inhibition and thereby fosters cognitive flexibility and adaptive attention control in the human brain. In this study, we prop...

Pre-gating and contextual attention gate - A new fusion method for multi-modal data tasks.

Neural networks : the official journal of the International Neural Network Society
Multi-modal representation learning has received significant attention across diverse research domains due to its ability to model a scenario comprehensively. Learning cross-modal interactions is essential for combining multi-modal data into a joi...
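
As a rough, hypothetical sketch of the general idea of attention-gated fusion of two modality embeddings (all class names, dimensions, and the gating scheme below are illustrative assumptions, not the pre-gating and contextual attention gate architecture described in the article):

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse two modality embeddings through a learned sigmoid gate (illustrative only)."""
    def __init__(self, dim_a: int, dim_b: int, dim_out: int):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, dim_out)   # project modality A into the joint space
        self.proj_b = nn.Linear(dim_b, dim_out)   # project modality B into the joint space
        self.gate = nn.Linear(dim_a + dim_b, dim_out)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([a, b], dim=-1)))  # per-feature mixing weights
        return g * self.proj_a(a) + (1.0 - g) * self.proj_b(b)   # convex combination of the two views

# Example: fuse a 128-d and a 64-d embedding into a 256-d joint representation.
fused = GatedFusion(128, 64, 256)(torch.randn(8, 128), torch.randn(8, 64))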

Cognitive and behavioral markers for human detection error in AI-assisted bridge inspection.

Applied ergonomics
Integrating Artificial Intelligence (AI) and drone technology into bridge inspections offers numerous advantages, including increased efficiency and enhanced safety. However, it is essential to recognize that this integration changes the cognitive er...

LGGNet: Learning From Local-Global-Graph Representations for Brain-Computer Interface.

IEEE transactions on neural networks and learning systems
Neuropsychological studies suggest that cooperative activity among different functional areas of the brain drives high-level cognitive processes. To learn the brain activity within and among these functional areas, we propose local-glob...

Temporal-spatial cross attention network for recognizing imagined characters.

Scientific reports
Previous research has primarily employed deep learning models such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for decoding imagined character signals. These approaches have treated the temporal and spatial features ...
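
For illustration, a generic cross-attention block in which a temporal feature stream attends to a spatial feature stream; the shapes, head count, and class name are assumptions for the sketch, not the network proposed in the article:

import torch
import torch.nn as nn

class TemporalSpatialCrossAttention(nn.Module):
    """Let temporal features attend to spatial features (generic sketch)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, temporal: torch.Tensor, spatial: torch.Tensor) -> torch.Tensor:
        # Queries come from the temporal stream; keys and values from the spatial stream.
        out, _ = self.attn(query=temporal, key=spatial, value=spatial)
        return self.norm(temporal + out)  # residual connection keeps the temporal features

# Assumed shapes: (batch, time steps, features) and (batch, channels, features).
x_time = torch.randn(2, 100, 64)
x_space = torch.randn(2, 22, 64)
fused = TemporalSpatialCrossAttention(64)(x_time, x_space)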

AutoAMS: Automated attention-based multi-modal graph learning architecture search.

Neural networks : the official journal of the International Neural Network Society
Multi-modal attention mechanisms have been successfully used in multi-modal graph learning for various tasks. However, existing attention-based multi-modal graph learning (AMGL) architectures heavily rely on manual design, requiring huge effort and e...

Attention Analysis in Robotic-Assistive Therapy for Children With Autism.

IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society
Children with Autism Spectrum Disorder (ASD) show severe attention deficits, hindering their capacity to acquire new skills. The automatic assessment of their attention response would provide the therapists with an important biomarker to better quant...

Omnidirectional image super-resolution via position attention network.

Neural networks : the official journal of the International Neural Network Society
For convenient transmission, omnidirectional images (ODIs) usually follow the equirectangular projection (ERP) format and are low-resolution. To provide a better immersive experience, omnidirectional image super-resolution (ODISR) is essential. However...
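
Below is a hedged sketch of a generic position (spatial self-) attention module of the kind commonly used in image networks; it is an assumption about the flavor of mechanism named in the title, not the paper's ODISR model:

import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Self-attention over the spatial positions of a feature map (generic sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, h*w, c//8)
        k = self.key(x).flatten(2)                      # (b, c//8, h*w)
        attn = torch.softmax(q @ k, dim=-1)             # pixel-to-pixel affinities
        v = self.value(x).flatten(2)                     # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return self.gamma * out + x                      # residual connection

# Example on a small feature map.
y = PositionAttention(32)(torch.randn(1, 32, 16, 16))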

GRAM: An interpretable approach for graph anomaly detection using gradient attention maps.

Neural networks : the official journal of the International Neural Network Society
Detecting unusual patterns in graph data is a crucial task in data mining. However, existing methods face challenges in consistently achieving satisfactory performance and often lack interpretability, which hinders our understanding of anomaly detect...
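
As a loose sketch of the underlying idea of a gradient-based attention map over graph nodes (the model, adjacency matrix, and scoring below are placeholders assumed for illustration, not the GRAM method itself):

import torch
import torch.nn as nn

class TinyGraphScorer(nn.Module):
    """One graph-convolution-like layer followed by a scalar graph score (placeholder)."""
    def __init__(self, in_dim: int, hidden: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = torch.relu(adj @ self.lin(x))     # aggregate neighbors via the adjacency matrix
        return self.readout(h.mean(dim=0))    # scalar score for the whole graph

x = torch.randn(10, 16, requires_grad=True)   # features for 10 nodes
adj = torch.eye(10)                            # placeholder adjacency (self-loops only)
score = TinyGraphScorer(16, 32)(x, adj)
score.backward()
node_map = x.grad.abs().sum(dim=1)             # per-node gradient magnitude as an "attention" map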

The attentive reconstruction of objects facilitates robust object recognition.

PLoS computational biology
Humans are extremely robust in our ability to perceive and recognize objects: we see faces in tea stains and can recognize friends on dark streets. Yet, neurocomputational models of primate object recognition have focused on the initial feed-forward p...