AIMC Topic: Visual Perception

Showing 1 to 10 of 348 articles

Keypoint-based modeling reveals fine-grained body pose tuning in superior temporal sulcus neurons.

Nature Communications
Body pose and orientation serve as vital visual signals in primate non-verbal social communication. Leveraging deep learning algorithms that extract body poses from videos of behaving monkeys, applied to a monkey avatar, we investigated neural tuning...
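
As a rough illustration of the keypoint-based approach, the sketch below regresses simulated firing rates onto flattened 2-D keypoint coordinates with a cross-validated ridge model; the keypoint count, neuron, and data are all made up, and the paper's actual pose-estimation and STS recording pipeline is not reproduced.

```python
# Minimal sketch: relate 2-D body keypoints to single-neuron firing rates
# with a cross-validated linear encoding model. Data here are simulated.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_frames, n_keypoints = 500, 13          # e.g. head, torso, limb joints (illustrative)
keypoints = rng.normal(size=(n_frames, n_keypoints, 2))   # (x, y) per joint
X = keypoints.reshape(n_frames, -1)      # flatten to a pose feature vector

# Simulated neuron whose rate depends on a few joints plus noise
w_true = np.zeros(X.shape[1]); w_true[:6] = rng.normal(size=6)
firing_rate = X @ w_true + 0.5 * rng.normal(size=n_frames)

model = RidgeCV(alphas=np.logspace(-3, 3, 13))
r2 = cross_val_score(model, X, firing_rate, cv=5, scoring="r2")
print(f"cross-validated pose tuning (R^2): {r2.mean():.2f}")
```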

Visual reasoning in object-centric deep neural networks: A comparative cognition approach.

Neural Networks: The Official Journal of the International Neural Network Society
Achieving visual reasoning is a long-term goal of artificial intelligence. In the last decade, several studies have applied deep neural networks (DNNs) to the task of learning visual relations from images, with modest results in terms of generalizati...
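
To make the "visual relations" setting concrete, here is a minimal same-different sketch of the kind such models are evaluated on, using synthetic 5x5 patterns and an explicitly relational feature (pixel-wise difference); it is not the paper's object-centric architecture.

```python
# Minimal sketch of a "same-different" visual-relation task. Synthetic 5x5
# binary patterns stand in for objects.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def make_pair(same: bool):
    a = rng.integers(0, 2, size=(5, 5))
    b = a.copy() if same else rng.integers(0, 2, size=(5, 5))
    return a, b

pairs, labels = [], []
for _ in range(2000):
    same = bool(rng.integers(0, 2))
    pairs.append(make_pair(same))
    labels.append(int(same))

# Relational encoding: compare the two "objects" explicitly rather than
# feeding raw concatenated pixels to the classifier.
X = np.array([np.abs(a - b).ravel() for a, b in pairs])
y = np.array(labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"same/different accuracy: {clf.score(X_te, y_te):.2f}")
```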

Developmental coordination disorder and cerebral visual impairment: What is the association?

Research in Developmental Disabilities
INTRODUCTION: Children with Developmental Coordination Disorder (DCD) experience impairments beyond motor planning, affecting visual perceptual and visual-motor integration abilities, similar to children with Cerebral Visual Impairment (CVI), making ...

Brain-guided convolutional neural networks reveal task-specific representations in scene processing.

Scientific Reports
Scene categorization is the dominant proxy for visual understanding, yet humans can perform a large number of visual tasks within any scene. Consequently, we know little about how different tasks change how a scene is processed, represented, and its ...
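
One common way to ask whether different tasks induce different scene representations is representational similarity analysis; the hedged sketch below correlates representational dissimilarity matrices from two hypothetical task-specific feature sets, with random data standing in for real network activations or brain responses.

```python
# Minimal sketch: compare scene representations from two task-specific
# models with representational similarity analysis (RDM correlation).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_scenes = 60

feats_task_a = rng.normal(size=(n_scenes, 256))                       # e.g. categorization features
feats_task_b = feats_task_a + 2.0 * rng.normal(size=(n_scenes, 256))  # e.g. navigation features

def rdm(features):
    """Representational dissimilarity matrix (condensed form)."""
    return pdist(features, metric="correlation")

rho, _ = spearmanr(rdm(feats_task_a), rdm(feats_task_b))
print(f"representational similarity between tasks (Spearman rho): {rho:.2f}")
```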

Using machine learning to simultaneously quantify multiple cognitive components of episodic memory.

Nature Communications
Why do we remember some events but forget others? Previous studies attempting to decode successful vs. unsuccessful brain states to investigate this question have met with limited success, potentially due, in part, to assessing episodic memory as a u...
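
A minimal sketch of decoding several memory measures simultaneously, assuming hypothetical component scores (item, source, confidence) and simulated brain features; it illustrates the multi-output idea only, not the study's method.

```python
# Minimal sketch: instead of a single remembered/forgotten label, decode
# several graded memory components at once with a multi-output model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n_trials, n_features = 300, 100

brain_features = rng.normal(size=(n_trials, n_features))
W = 0.2 * rng.normal(size=(n_features, 3))
components = ["item", "source", "confidence"]       # hypothetical component names
memory_scores = brain_features @ W + rng.normal(size=(n_trials, 3))

decoder = MultiOutputRegressor(Ridge(alpha=10.0))
pred = cross_val_predict(decoder, brain_features, memory_scores, cv=5)
for i, name in enumerate(components):
    print(f"{name:>10s} R^2: {r2_score(memory_scores[:, i], pred[:, i]):.2f}")
```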

High-level visual processing in the lateral geniculate nucleus revealed using goal-driven deep learning.

Journal of Neuroscience Methods
BACKGROUND: The Lateral Geniculate Nucleus (LGN) is an essential contributor to high-level visual processing despite being an early subcortical area in the visual system. Current LGN computational models focus on its basic properties, with less empha...
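
The goal-driven encoding idea can be sketched as: extract features from an early layer of a task-optimised network and regress measured responses onto them. In the sketch below, a randomly initialised convolutional layer and simulated responses stand in for a trained network and real LGN recordings.

```python
# Minimal sketch of a goal-driven encoding model: early conv-layer
# activations regressed against recorded responses via ridge regression.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

torch.manual_seed(0)
rng = np.random.default_rng(4)

n_stimuli = 200
stimuli = torch.randn(n_stimuli, 1, 32, 32)         # grayscale image patches

early_layer = nn.Sequential(                        # stand-in for an early CNN layer
    nn.Conv2d(1, 8, kernel_size=7, stride=2), nn.ReLU(), nn.AdaptiveAvgPool2d(4)
)
with torch.no_grad():
    features = early_layer(stimuli).flatten(1).numpy()   # (n_stimuli, 128)

# Simulated LGN-like response driven by a subset of the features
w = np.zeros(features.shape[1]); w[:10] = rng.normal(size=10)
responses = features @ w + 0.5 * rng.normal(size=n_stimuli)

encoder = RidgeCV(alphas=np.logspace(-2, 4, 13))
r2 = cross_val_score(encoder, features, responses, cv=5, scoring="r2")
print(f"encoding model fit (R^2): {r2.mean():.2f}")
```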

Improving Acceptance to Sensory Substitution: A Study on the V2A-SS Learning Model Based on Information Processing Learning Theory.

IEEE Transactions on Neural Systems and Rehabilitation Engineering: A Publication of the IEEE Engineering in Medicine and Biology Society
The visual sensory organ (VSO) serves as the primary channel for transmitting external information to the brain; therefore, damage to the VSO can severely limit daily activities. Visual-to-Auditory Sensory Substitution (V2A-SS), an innovative approac...
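
For orientation, a classic visual-to-auditory mapping (a vOICe-style column sweep in which row position sets pitch and brightness sets loudness) can be sketched in a few lines; this illustrates what V2A-SS converts, not the paper's learning model.

```python
# Minimal sketch of a classic visual-to-auditory mapping: scan image
# columns left to right, map row position to tone frequency and pixel
# brightness to amplitude.
import numpy as np

def image_to_sound(image, duration_s=1.0, sample_rate=22050,
                   f_lo=200.0, f_hi=4000.0):
    """Convert a 2-D grayscale image (values in [0, 1]) to a mono waveform."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration_s * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    freqs = np.geomspace(f_hi, f_lo, n_rows)       # higher rows -> higher pitch
    chunks = []
    for col in range(n_cols):                      # left-to-right sweep
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        chunks.append((image[:, col, None] * tones).sum(axis=0))
    wave = np.concatenate(chunks)
    return wave / (np.abs(wave).max() + 1e-9)      # normalise to [-1, 1]

# Example: a bright diagonal line produces a pitch sweep across the scan
waveform = image_to_sound(np.eye(32))
print(waveform.shape)
```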

Frequency-Assisted Local Attention in Lower Layers of Visual Transformers.

International Journal of Neural Systems
Since vision transformers excel at establishing global relationships between features, they play an important role in current vision tasks. However, the global attention mechanism restricts the capture of local features, making convolutional assistan...
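
A hedged PyTorch sketch of the general recipe, combining attention restricted to local windows with a simple FFT-based frequency branch; the paper's exact module and training setup are not reproduced.

```python
# Sketch: self-attention restricted to non-overlapping local windows plus a
# crude frequency-domain branch, as might be used in a lower transformer block.
import torch
import torch.nn as nn
import torch.fft

class FreqAssistedLocalBlock(nn.Module):
    def __init__(self, dim=64, heads=4, window=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.freq_gate = nn.Parameter(torch.ones(1))   # learned weight on the FFT branch

    def forward(self, x):                              # x: (B, H, W, C) token grid
        B, H, W, C = x.shape
        w = self.window
        # Local attention inside non-overlapping w x w windows
        xw = x.reshape(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
        xw = xw.reshape(-1, w * w, C)
        attn_out, _ = self.attn(xw, xw, xw)
        attn_out = attn_out.reshape(B, H // w, W // w, w, w, C)
        attn_out = attn_out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        # Frequency branch: crude DC removal to emphasise higher spatial frequencies
        f = torch.fft.fft2(x.to(torch.float32), dim=(1, 2))
        f[:, :1, :1] = 0
        freq_out = torch.fft.ifft2(f, dim=(1, 2)).real
        return self.norm(x + attn_out + self.freq_gate * freq_out)

tokens = torch.randn(2, 8, 8, 64)                      # batch of 8x8 token grids
print(FreqAssistedLocalBlock()(tokens).shape)          # torch.Size([2, 8, 8, 64])
```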

Machine learning analysis of cortical activity in visual associative learning tasks with differing stimulus complexity.

Physiology International
Associative learning tests are cognitive assessments that evaluate the ability of individuals to learn and remember relationships between pairs of stimuli. The Rutgers Acquired Equivalence Test (RAET) is an associative learning test that utilizes ima...
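
A typical pipeline for this kind of analysis computes per-trial band-power features and runs a cross-validated classifier across conditions; the sketch below does that on simulated signals and is not the study's actual pipeline or data.

```python
# Minimal sketch: band-power features per trial, then a cross-validated
# classifier separating two stimulus-complexity conditions (simulated).
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
fs, n_trials, n_channels, n_samples = 250, 120, 8, 500
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(trial):
    """Average power per (channel, band) from a Welch spectrum."""
    freqs, psd = welch(trial, fs=fs, nperseg=250, axis=-1)
    return np.concatenate([
        psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
        for lo, hi in bands.values()
    ])

# Simulated data: condition 1 gets slightly stronger alpha on some channels
X_raw = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)
t = np.arange(n_samples) / fs
X_raw[y == 1, :4] += 0.8 * np.sin(2 * np.pi * 10 * t)

X = np.array([band_power(trial) for trial in X_raw])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(f"decoding accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```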

Event-driven figure-ground organisation model for the humanoid robot iCub.

Nature Communications
Figure-ground organisation is a perceptual grouping mechanism for detecting objects and boundaries, essential for an agent interacting with the environment. Current figure-ground segmentation methods rely on classical computer vision or deep learning...
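
Two event-driven ingredients can be sketched directly: accumulate asynchronous events into an exponentially decaying time surface and use its local contrast as a crude boundary cue. The simulation below is illustrative only and does not reproduce the paper's model or the iCub software stack.

```python
# Minimal sketch: build a decaying time surface from simulated (x, y, t)
# events, then use its gradient magnitude as a crude boundary cue.
import numpy as np

H, W, tau = 64, 64, 0.05                      # sensor size, decay constant (s)
rng = np.random.default_rng(6)

# Simulated events: a square "figure" emits more events than the background
n_events = 5000
xs = rng.integers(0, W, n_events)
ys = rng.integers(0, H, n_events)
ts = np.sort(rng.uniform(0, 1.0, n_events))
on_figure = (xs > 20) & (xs < 44) & (ys > 20) & (ys < 44)
keep = on_figure | (rng.random(n_events) < 0.2)     # thin out background events
xs, ys, ts = xs[keep], ys[keep], ts[keep]

# Time surface: each pixel stores exp(-(t_now - t_last_event) / tau)
last_t = np.full((H, W), -np.inf)
for x, y, t in zip(xs, ys, ts):
    last_t[y, x] = t
surface = np.exp(-(ts[-1] - last_t) / tau)

# Crude boundary cue: gradient magnitude of the time surface
gy, gx = np.gradient(surface)
boundary = np.hypot(gx, gy)
edge = boundary[19:23, 20:44].mean()          # strip crossing the figure's top edge
background = boundary[2:6, 2:26].mean()
print(f"boundary cue at figure edge vs. background: {edge:.3f} vs. {background:.3f}")
```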