Mastering diverse control tasks through world models.

Journal: Nature
PMID:

Abstract

Developing a general algorithm that learns to solve tasks across a wide range of applications has been a fundamental challenge in artificial intelligence. Although current reinforcement-learning algorithms can be readily applied to tasks similar to those they were developed for, configuring them for new application domains requires substantial human expertise and experimentation. Here we present the third generation of Dreamer, a general algorithm that outperforms specialized methods across over 150 diverse tasks, using a single configuration. Dreamer learns a model of the environment and improves its behaviour by imagining future scenarios. Robustness techniques based on normalization, balancing and transformations enable stable learning across domains. Applied out of the box, Dreamer is, to our knowledge, the first algorithm to collect diamonds in Minecraft from scratch without human data or curricula. This achievement has been posed as a substantial challenge in artificial intelligence, requiring the exploration of farsighted strategies from pixels and sparse rewards in an open world. Our work allows challenging control problems to be solved without extensive experimentation, making reinforcement learning broadly applicable.
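One of the transformations the abstract alludes to is the symlog squashing used in the Dreamer line of work to handle rewards and values of widely varying magnitudes across domains. The sketch below shows the transform and its inverse only; it is a minimal illustration with hypothetical helper names, not the paper's implementation:

```python
import math

def symlog(x: float) -> float:
    """Symmetric logarithm: sign(x) * log(1 + |x|).

    Approximately linear near zero but compresses large magnitudes,
    so one network configuration can cope with rewards ranging from
    fractions to thousands across different domains.
    """
    return math.copysign(math.log1p(abs(x)), x)

def symexp(x: float) -> float:
    """Inverse of symlog: sign(x) * (exp(|x|) - 1)."""
    return math.copysign(math.expm1(abs(x)), x)
```

Predicting targets in symlog space and decoding with symexp means no per-domain reward scaling has to be tuned by hand, which is one ingredient of running with a single configuration everywhere.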

Authors

  • Danijar Hafner
    Google Brain, Mountain View, CA, USA.
  • Jurgis Pasukonis
    Google DeepMind, San Francisco, CA, USA.
  • Jimmy Ba
    University of Toronto, Toronto, Ontario, Canada.
  • Timothy Lillicrap
    Google DeepMind, 5 New Street Square, London EC4A 3TW, UK.