NeuroBayesSLAM: Neurobiologically inspired Bayesian integration of multisensory information for robot navigation.

Journal: Neural Networks: The Official Journal of the International Neural Network Society
PMID:

Abstract

Spatial navigation depends on the combination of multiple sensory cues from idiothetic and allothetic sources. The computational mechanisms by which mammalian brains integrate different sensory modalities under uncertainty are enlightening for robot navigation. We propose a Bayesian attractor network model that integrates visual and vestibular inputs, inspired by the spatial memory systems of mammalian brains. In the model, the pose of the robot is encoded separately by two sub-networks, namely a head direction network for angle representation and a grid cell network for position representation, using neural codes similar to those of head direction cells and grid cells observed in mammalian brains. The neural codes in each sub-network are updated in a Bayesian manner by a population of integrator cells for vestibular cue integration, as well as a population of calibration cells for visual cue calibration. Conflicts between the vestibular and visual cues are resolved by competitive dynamics between the two populations. The model, implemented in a monocular visual simultaneous localization and mapping (SLAM) system termed NeuroBayesSLAM, successfully builds semi-metric topological maps and self-localizes in outdoor and indoor environments of different characteristics, achieving performance comparable to previous neurobiologically inspired navigation systems but with much lower computational complexity. The proposed multisensory integration method constitutes a concise yet robust and biologically plausible approach to robot navigation in large environments. The model provides a viable Bayesian mechanism for multisensory integration that may pertain to neural subsystems beyond spatial cognition.
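
To make the integration scheme concrete, the following is a minimal sketch (not the authors' implementation) of how a vestibular prediction and a visual correction can be fused on a one-dimensional ring attractor representing head direction. The inverse-variance weighting stands in for the competitive dynamics between the integrator and calibration populations; all names and parameter values (N, kappa, sigma_vest, sigma_vis, the cue angles) are illustrative assumptions.

import numpy as np

N = 100                                               # number of head direction cells
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred directions (rad)

def bump(center, kappa=10.0):
    # Von Mises activity bump centered on `center` (radians).
    a = np.exp(kappa * np.cos(theta - center))
    return a / a.sum()

def decode(activity):
    # Population-vector decoding of the bump position.
    return np.angle(np.sum(activity * np.exp(1j * theta)))

state = bump(0.0)                                     # current network state

# Vestibular cue: "integrator cells" shift the bump by the
# angular velocity integrated over one time step.
omega, dt = 0.5, 0.1                                  # rad/s, s
sigma_vest = 0.05                                     # assumed vestibular noise (rad)
pred = decode(state) + omega * dt

# Visual cue: "calibration cells" pull the bump toward a
# heading estimate obtained from view matching.
visual_angle = 0.12                                   # assumed visual estimate (rad)
sigma_vis = 0.02                                      # assumed visual noise (rad)

# Bayesian (inverse-variance) weighting of the two cues; the more
# reliable cue wins more of the competition.
w_vest = sigma_vis**2 / (sigma_vest**2 + sigma_vis**2)
w_vis = 1.0 - w_vest
fused = np.angle(w_vest * np.exp(1j * pred) + w_vis * np.exp(1j * visual_angle))

state = bump(fused)                                   # re-instantiate the bump at the fused angle
print(f"predicted={pred:.3f} rad, visual={visual_angle:.3f} rad, fused={fused:.3f} rad")

With the assumed noise levels, the fused estimate lands close to the (more reliable) visual angle; in the paper's model an analogous update also runs on the grid cell sub-network for position.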

Authors

  • Taiping Zeng
    Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, China; State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China. Electronic address: zengtaiping@fudan.edu.cn.
  • Fengzhen Tang
    Shenyang Institute of Automation, Chinese Academy of Sciences, No.114, Nanta Street, Shenyang, Liaoning Province, 110016, China; School of Computer Science, The University of Birmingham, Edgbaston, Birmingham B15 2TT, UK. Electronic address: tangfengzhen87@hotmail.com.
  • Daxiong Ji
    Ocean College, Zhejiang University, Zhoushan, 316021, Zhejiang, China. Electronic address: jidaxiong@zju.edu.cn.
  • Bailu Si
State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China. Electronic address: sibailu@sia.ac.cn.