AIMC Topic: Fixation, Ocular

Showing 31 to 40 of 85 articles

Visual prototypes in the ventral stream are attuned to complexity and gaze behavior.

Nature communications
Early theories of efficient coding suggested the visual system could compress the world by learning to represent features where information was concentrated, such as contours. This view was validated by the discovery that neurons in posterior visual ...

The relevance of signal timing in human-robot collaborative manipulation.

Science robotics
To achieve seamless human-robot collaboration, it is crucial that robots express their intentions without perturbing or interrupting the task that a human partner is performing at that moment. Although it has not received much attention so far, t...

Using Eye Gaze to Enhance Generalization of Imitation Networks to Unseen Environments.

IEEE transactions on neural networks and learning systems
Vision-based autonomous driving through imitation learning mimics the behavior of human drivers by mapping driver view images to driving actions. This article shows that performance can be enhanced via the use of eye gaze. Previous research has shown...
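
A minimal sketch of the general idea, assuming a PyTorch setting: the gaze signal enters the imitation policy as an extra input channel stacked onto the driver-view image. The network sizes and the two-dimensional action head are illustrative assumptions, not the article's architecture.

```python
# Illustrative sketch (not the article's model): conditioning an imitation
# policy on a gaze heatmap by stacking it as an extra input channel.
import torch
import torch.nn as nn

class GazeConditionedPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 gaze-heatmap channel
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # e.g. steering angle and speed

    def forward(self, image, gaze_heatmap):
        x = torch.cat([image, gaze_heatmap], dim=1)
        return self.head(self.encoder(x))

policy = GazeConditionedPolicy()
image = torch.rand(1, 3, 128, 128)   # driver-view frame
gaze = torch.rand(1, 1, 128, 128)    # recorded or predicted gaze map
action = policy(image, gaze)         # shape (1, 2)
```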

FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments.

IEEE transactions on visualization and computer graphics
Human visual attention in immersive virtual reality (VR) is key for many important applications, such as content design, gaze-contingent rendering, or gaze-based interaction. However, prior works typically focused on free-viewing conditions that have...
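
A hedged sketch of fixation forecasting framed as a sequence-to-point problem, again assuming PyTorch: a GRU consumes a short history of gaze and head samples and regresses the next fixation location. The feature dimensions and window length are placeholder choices, not FixationNet's design.

```python
# Hedged sketch: forecast the next fixation from a short history of gaze and
# head-orientation samples with a GRU. Sizes are illustrative only.
import torch
import torch.nn as nn

class FixationForecaster(nn.Module):
    def __init__(self, feat_dim=5, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)   # predicted (x, y) fixation

    def forward(self, history):           # history: (batch, T, feat_dim)
        _, h = self.rnn(history)
        return self.out(h[-1])

model = FixationForecaster()
history = torch.rand(8, 40, 5)            # 40 past gaze + head samples
next_fixation = model(history)            # (8, 2)
```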

Towards Robust Robot Control in Cartesian Space Using an Infrastructureless Head- and Eye-Gaze Interface.

Sensors (Basel, Switzerland)
This paper presents a lightweight, infrastructureless head-worn interface for robust, real-time robot control in Cartesian space using head- and eye-gaze. The interface weighs just 162 g in total. It combines a state-of-the-art visual s...
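
One way such an interface can produce Cartesian set-points is to intersect the combined head- and eye-gaze ray with a known work surface. The plain NumPy sketch below shows only that geometric step; frames, calibration, and the paper's actual pipeline are not modeled here.

```python
# Illustrative sketch (not the paper's pipeline): turn a head position and an
# eye-gaze direction into a Cartesian target by intersecting the gaze ray
# with a known table plane z = 0 in the robot frame.
import numpy as np

def gaze_to_cartesian(head_pos, gaze_dir, plane_z=0.0):
    """head_pos: 3-vector (m); gaze_dir: 3-vector in the robot frame."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    if abs(gaze_dir[2]) < 1e-6:
        raise ValueError("gaze ray is parallel to the table plane")
    t = (plane_z - head_pos[2]) / gaze_dir[2]   # ray parameter at intersection
    return head_pos + t * gaze_dir

target = gaze_to_cartesian(np.array([0.0, -0.4, 0.5]),
                           np.array([0.1, 0.6, -0.8]))
print(target)   # Cartesian set-point for the robot end-effector
```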

Saliency Prediction on Omnidirectional Image With Generative Adversarial Imitation Learning.

IEEE transactions on image processing: a publication of the IEEE Signal Processing Society
When watching omnidirectional images (ODIs), subjects can access different viewports by moving their heads. Therefore, it is necessary to predict subjects' head fixations on ODIs. Inspired by generative adversarial imitation learning (GAIL), this pap...
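
For readers unfamiliar with GAIL, the core training signal can be summarized in a few lines: a discriminator learns to separate human viewport trajectories from generated ones, and its confusion becomes the reward for the trajectory-generating policy. The PyTorch sketch below is illustrative; the state-action encoding and the networks are assumptions, not the paper's model.

```python
# Hedged sketch of the GAIL idea applied to head fixations: a discriminator
# scores state-action pairs, and its output drives the imitation reward.
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))

def discriminator_loss(expert_sa, policy_sa):
    """Each input: (batch, 4) state-action pairs, e.g. (lat, lon, dlat, dlon)."""
    bce = nn.BCEWithLogitsLoss()
    real = bce(disc(expert_sa), torch.ones(len(expert_sa), 1))
    fake = bce(disc(policy_sa), torch.zeros(len(policy_sa), 1))
    return real + fake

def imitation_reward(policy_sa):
    # Higher when the discriminator believes the trajectory is human-like.
    return -torch.log(1 - torch.sigmoid(disc(policy_sa)) + 1e-8)
```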

Making eye contact with a robot: Psychophysiological responses to eye contact with a human and with a humanoid robot.

Biological psychology
Previous research has shown that eye contact, in human-human interaction, elicits increased affective and attention related psychophysiological responses. In the present study, we investigated whether eye contact with a humanoid robot would elicit th...

Gravitational Laws of Focus of Attention.

IEEE transactions on pattern analysis and machine intelligence
The understanding of the mechanisms behind focus of attention in a visual scene is a problem of great interest in visual perception and computer vision. In this paper, we describe a model of the scanpath as a dynamic process which can be interpreted as a...
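
A toy numerical sketch of the "gravitational" intuition: salient locations act as masses that attract the gaze point, and the scanpath emerges by integrating a damped second-order system. The specific force law, damping term, and constants below are illustrative assumptions, not the paper's equations.

```python
# Toy sketch of a gravitational scanpath model: salient points act as masses
# that attract the gaze point; the trajectory is a damped dynamical system.
import numpy as np

def simulate_scanpath(masses, positions, x0, steps=500, dt=0.01,
                      damping=2.0, eps=1e-3):
    x, v = np.array(x0, float), np.zeros(2)
    path = [x.copy()]
    for _ in range(steps):
        diff = positions - x                       # vectors toward each mass
        dist = np.linalg.norm(diff, axis=1) + eps
        accel = (masses[:, None] * diff / dist[:, None] ** 3).sum(0)
        accel -= damping * v                       # dissipation keeps gaze bounded
        v += dt * accel
        x += dt * v
        path.append(x.copy())
    return np.array(path)

path = simulate_scanpath(np.array([1.0, 0.5]),
                         np.array([[0.2, 0.8], [0.7, 0.3]]),
                         x0=[0.5, 0.5])
```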

Eye Gaze Based 3D Triangulation for Robotic Bionic Eyes.

Sensors (Basel, Switzerland)
Three-dimensional (3D) triangulation based on active binocular vision has a growing number of applications in computer vision and robotics. An active binocular vision system with non-fixed cameras needs to calibrate the stereo extrinsic parameters ...
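
Once both gaze rays are expressed in a common frame, the underlying geometric step is standard two-ray triangulation; a minimal NumPy version using the midpoint of the closest points on two skew rays is sketched below. Calibration of the non-fixed cameras, which the paper addresses, is outside this sketch.

```python
# Minimal sketch of triangulating a 3D fixation point from two gaze rays
# (one per eye/camera) as the midpoint of the closest points on two skew
# lines. Frames and calibration are assumed given; this is not the paper's method.
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """o*: ray origins, d*: ray directions, all in a common frame."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = o2 - o1
    d1d2 = d1 @ d2
    denom = 1.0 - d1d2 ** 2
    if denom < 1e-9:                      # rays are (nearly) parallel
        return o1 + ((b @ d1) / 2.0) * d1
    t1 = (b @ d1 - (b @ d2) * d1d2) / denom
    t2 = ((b @ d1) * d1d2 - b @ d2) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

point = triangulate_midpoint(np.array([-0.03, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
                             np.array([0.03, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0]))
```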

Computational discrimination between natural images based on gaze during mental imagery.

Scientific reports
When retrieving an image from memory, humans usually move their eyes spontaneously as if the image were in front of them. Such eye movements correlate strongly with the spatial layout of the recalled image content and function as memory cues facilitatin...
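
A hedged sketch of how such discrimination can be set up computationally: each episode is reduced to a coarse spatial histogram of fixations, and a recall-phase gaze pattern is matched against the encoding-phase patterns of the candidate images. The featurization and similarity measure are illustrative choices, not the paper's classifier.

```python
# Hedged sketch: decide which previously seen image a recall-phase gaze
# pattern most resembles, using coarse fixation histograms as features.
import numpy as np

def fixation_histogram(fixations, grid=(4, 4)):
    """fixations: (N, 2) array of normalized (x, y) gaze positions in [0, 1]."""
    hist, _, _ = np.histogram2d(fixations[:, 0], fixations[:, 1],
                                bins=grid, range=[[0, 1], [0, 1]])
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

def decode_image(recall_fix, encoding_patterns):
    """encoding_patterns: dict image_id -> histogram from the viewing phase."""
    query = fixation_histogram(recall_fix)
    scores = {img: float(query @ pat) for img, pat in encoding_patterns.items()}
    return max(scores, key=scores.get)
```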