Event-driven figure-ground organisation model for the humanoid robot iCub.

Journal: Nature Communications
PMID:

Abstract

Figure-ground organisation is a perceptual grouping mechanism for detecting objects and boundaries, essential for an agent interacting with the environment. Current figure-ground segmentation methods rely on classical computer vision or deep learning, requiring extensive computational resources, especially during training. Inspired by the primate visual system, we developed a bio-inspired perception system for the neuromorphic robot iCub. The model uses a hierarchical, biologically plausible architecture and event-driven vision to distinguish foreground objects from the background. Unlike classical approaches, event-driven cameras reduce data redundancy and computation. The system has been assessed qualitatively and quantitatively in simulation and with event-driven cameras on iCub across a variety of scenarios. It successfully segments objects in diverse real-world settings, achieving results comparable to its frame-based counterpart on simple stimuli and on the Berkeley Segmentation Dataset. The model can enhance hybrid systems, complementing conventional deep learning models by processing only the relevant data within Regions of Interest (ROIs), thereby enabling low-latency autonomous robotic applications.
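
As an illustration of the data-reduction idea mentioned above, the sketch below shows how a sparse stream of camera events could be accumulated and cropped to a Region of Interest so that downstream models only process active pixels. It is not taken from the paper; the sensor resolution, function names, and the bounding-box heuristic are assumptions made purely for illustration.

    # Minimal sketch (not the authors' implementation): accumulate events
    # into a binary image and crop to a padded bounding-box ROI.
    import numpy as np

    HEIGHT, WIDTH = 240, 304  # assumed event-camera resolution

    def events_to_roi(events, pad=5):
        """events: array with rows (x, y, timestamp, polarity).
        Returns the binary event image, a padded bounding box, and the cropped ROI."""
        img = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
        xs = events[:, 0].astype(int)
        ys = events[:, 1].astype(int)
        img[ys, xs] = 1  # mark every pixel that fired at least one event

        # Only active pixels carry information: crop to their (padded) bounding box.
        x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, WIDTH - 1)
        y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, HEIGHT - 1)
        roi = img[y0:y1 + 1, x0:x1 + 1]
        return img, (x0, y0, x1, y1), roi

    # Usage with a synthetic burst of events clustered around a small object.
    rng = np.random.default_rng(0)
    events = np.column_stack([
        rng.integers(100, 140, 500),  # x coordinates
        rng.integers(60, 90, 500),    # y coordinates
        np.sort(rng.random(500)),     # timestamps
        rng.integers(0, 2, 500),      # polarities
    ])
    _, bbox, roi = events_to_roi(events)
    print("ROI", bbox, "covers", roi.size, "pixels vs", HEIGHT * WIDTH, "for the full frame")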

Authors

  • Giulia D'Angelo
    Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, 16163, Genoa, Italy.
  • Simone Voto
    Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, 16163, Genoa, Italy.
  • Massimiliano Iacono
    Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, 16163, Genoa, Italy.
  • Arren Glover
    Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, 16163, Genoa, Italy.
  • Ernst Niebur
    Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA.
  • Chiara Bartolozzi
    Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, via San Quirico 19D, 16163, Genoa, Italy.