Sample-efficient and occlusion-robust reinforcement learning for robotic manipulation via multimodal fusion dualization and representation normalization.

Journal: Neural Networks: The Official Journal of the International Neural Network Society
PMID:

Abstract

Recent advances in visual reinforcement learning (visual RL), which learns from high-dimensional image observations, have narrowed the gap between state-based and image-based training. However, visual RL continues to face significant challenges in robotic manipulation tasks involving occlusions, such as lifting obscured objects. Although high-resolution tactile sensors have shown promise in addressing these occlusion issues through visuotactile manipulation, their high cost and complexity limit widespread adoption. In this paper, we propose a novel RL approach that introduces multimodal fusion dualization and representation normalization to enhance sample efficiency and robustness in robotic manipulation tasks involving occlusions, without relying on tactile feedback. Our multimodal fusion dualization technique separates the fusion process into two distinct modules, each optimized individually for the actor and the critic, resulting in tailored representations for each network. Additionally, representation normalization techniques, including LayerNorm and SimplexNorm, are incorporated into the representation learning process to stabilize training and prevent issues such as gradient explosion. We demonstrate that our method not only effectively tackles challenging robotic manipulation tasks involving occlusions but also outperforms state-of-the-art visual RL and state-based RL methods in both sample efficiency and task performance. Notably, this is achieved without relying on tactile sensors or prior knowledge, such as predefined low-dimensional coordinate states or pre-trained representations, making our approach both cost-effective and scalable for real-world robotic applications.
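The abstract describes two components: dualized fusion (separate, independently optimized fusion modules for the actor and the critic) and representation normalization (LayerNorm plus SimplexNorm). The sketch below is a minimal PyTorch illustration of that structure, not the paper's implementation: the module shapes, the `FusionModule` name, and in particular the `SimplexNorm` definition (projecting features onto the probability simplex via softmax) are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class SimplexNorm(nn.Module):
    """Hypothetical sketch: map features onto the probability simplex
    (non-negative, summing to 1) via softmax; the paper's exact
    SimplexNorm definition may differ."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(x, dim=-1)


class FusionModule(nn.Module):
    """One fusion module: concatenate modality features, project,
    then normalize (LayerNorm followed by SimplexNorm)."""

    def __init__(self, img_dim: int, prop_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(img_dim + prop_dim, out_dim)
        self.layer_norm = nn.LayerNorm(out_dim)
        self.simplex_norm = SimplexNorm()

    def forward(self, img_feat: torch.Tensor, prop_feat: torch.Tensor) -> torch.Tensor:
        z = self.proj(torch.cat([img_feat, prop_feat], dim=-1))
        return self.simplex_norm(self.layer_norm(z))


# Fusion dualization: two distinct modules, so the actor and the critic
# each learn a representation tailored to their own objective.
actor_fusion = FusionModule(img_dim=64, prop_dim=8, out_dim=32)
critic_fusion = FusionModule(img_dim=64, prop_dim=8, out_dim=32)

img_feat = torch.randn(4, 64)    # e.g., CNN image features (batch of 4)
prop_feat = torch.randn(4, 8)    # e.g., proprioceptive features

z_actor = actor_fusion(img_feat, prop_feat)    # fed to the policy network
z_critic = critic_fusion(img_feat, prop_feat)  # fed to the value network
```

In training, gradients from the policy loss would update only `actor_fusion` and gradients from the value loss only `critic_fusion`, which is what "each optimized individually" suggests; the simplex constraint bounds the representation's scale, consistent with the stated goal of preventing gradient explosion.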

Authors

  • Samyeul Noh
    ETRI, Daejeon, 34129, Republic of Korea; School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea. Electronic address: samuel@etri.re.kr.
  • Wooju Lee
    School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea. Electronic address: dnwn24@kaist.ac.kr.
  • Hyun Myung
    Urban Robotics Laboratory (URL), Dept. Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon 305-338, Korea. hmyung@kaist.ac.kr.