Real-world humanoid locomotion with reinforcement learning.

Journal: Science Robotics

Abstract

Humanoid robots that can autonomously operate in diverse environments have the potential to help address labor shortages in factories, assist the elderly at home, and colonize new planets. Although classical controllers for humanoid robots have shown impressive results in a number of settings, they are challenging to generalize and adapt to new environments. Here, we present a fully learning-based approach for real-world humanoid locomotion. Our controller is a causal transformer that takes the history of proprioceptive observations and actions as input and predicts the next action. We hypothesized that the observation-action history contains useful information about the world that a powerful transformer model can use to adapt its behavior in context, without updating its weights. We trained our model with large-scale model-free reinforcement learning on an ensemble of randomized environments in simulation and deployed it to the real world zero-shot. Our controller could walk over various outdoor terrains, was robust to external disturbances, and could adapt in context.
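The core idea in the abstract is a causal transformer policy: at each control step, the history of proprioceptive observations and past actions is tokenized, attention is masked so each step sees only the past, and the next action is read off the latest position. The sketch below illustrates that interface in NumPy with a single attention head and random weights; the dimensions, weight names (`W_in`, `W_out`), and one-token-per-timestep encoding are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM, EMBED_DIM = 12, 4, 16  # assumed sizes, for illustration only

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x):
    """Single-head self-attention where each step attends only to itself
    and earlier steps (future positions are masked out)."""
    T, d = x.shape
    scores = (x @ x.T) / np.sqrt(d)
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf  # hide the future
    return softmax(scores) @ x

# Random weights standing in for a policy trained with RL in simulation.
W_in = rng.standard_normal((OBS_DIM + ACT_DIM, EMBED_DIM)) * 0.1
W_out = rng.standard_normal((EMBED_DIM, ACT_DIM)) * 0.1

def next_action(obs_hist, act_hist):
    """obs_hist: (T, OBS_DIM) proprioceptive observations;
    act_hist: (T, ACT_DIM) previously taken actions.
    Returns the predicted next action, shape (ACT_DIM,)."""
    tokens = np.concatenate([obs_hist, act_hist], axis=-1) @ W_in  # (T, EMBED_DIM)
    h = causal_self_attention(tokens)
    return h[-1] @ W_out  # decode the action from the most recent step

obs = rng.standard_normal((8, OBS_DIM))
acts = rng.standard_normal((8, ACT_DIM))
action = next_action(obs, acts)
print(action.shape)  # (4,)
```

Because the policy conditions on the whole observation-action history rather than a single state, it can in principle infer properties of the current environment (terrain, disturbances) from how past actions played out, which is the in-context adaptation the abstract hypothesizes.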

Authors

  • Ilija Radosavovic
    University of California, Berkeley, CA, USA.
  • Tete Xiao
    University of California, Berkeley, CA, USA.
  • Bike Zhang
    University of California, Berkeley, CA, USA.
  • Trevor Darrell
  • Jitendra Malik
    Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA; malik@eecs.berkeley.edu.
  • Koushil Sreenath
    University of California, Berkeley, CA, USA.