Deep Deterministic Policy Gradient-Based Autonomous Driving for Mobile Robots in Sparse Reward Environments

Journal: Sensors (Basel, Switzerland)
Published Date:

Abstract

In this paper, we propose a deep deterministic policy gradient (DDPG)-based path-planning method for mobile robots that applies the hindsight experience replay (HER) technique to overcome the performance degradation caused by the sparse reward problem in autonomous mobile robot driving. The mobile robot used in our analysis was a Robot Operating System (ROS)-based TurtleBot3, and the experimental environment was a Gazebo-based virtual simulation. A fully connected neural network served as the DDPG network with an actor-critic architecture, and exploration noise was added to the actor network. The robot recognized an unknown environment by measuring distances with a laser sensor and learned the optimal policy for reaching its destination. The HER technique improved learning performance by generating three new episodes from each failed episode. The proposed method demonstrated that the HER technique can mitigate the sparse reward problem; this was further corroborated by the successful autonomous driving results obtained after applying the proposed method to two reward systems, as well as by results from real-world experiments.
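To make the HER idea described above concrete, the following is a minimal sketch (not the authors' implementation) of goal relabeling on a failed episode. It assumes hypothetical names: transitions stored as (state, action, next_state) dictionaries with a "position" entry, a helper her_relabel, and k = 3 substitute goals per transition, matching the three new episodes mentioned in the abstract.

    import random

    def her_relabel(episode, k=3, threshold=0.05):
        """Relabel a failed episode by treating achieved positions as goals.

        episode: list of (state, action, next_state) tuples, where each state
        is a dict containing a "position" sequence of floats.
        Returns transitions of the form (state, action, reward, next_state, goal).
        """
        new_transitions = []
        for t, (state, action, next_state) in enumerate(episode):
            # Sample k positions achieved later in the episode as substitute goals,
            # so the otherwise failed trajectory still yields positive reward signals.
            future = episode[t:]
            for _ in range(k):
                _, _, future_state = random.choice(future)
                goal = future_state["position"]
                achieved = next_state["position"]
                dist = sum((a - g) ** 2 for a, g in zip(achieved, goal)) ** 0.5
                # Sparse reward with respect to the substitute goal.
                reward = 0.0 if dist < threshold else -1.0
                new_transitions.append((state, action, reward, next_state, goal))
        return new_transitions

The relabeled transitions would then be appended to the DDPG replay buffer alongside the original experience; the exact relabeling strategy and reward thresholds in the paper may differ.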

Authors

  • Minjae Park
    Department of Electronic Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea.
  • Seok Young Lee
    Department of Electronic Engineering, Soonchunhyang University, Asan 31538, Republic of Korea.
  • Jin Seok Hong
    Department of Electronic Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea.
  • Nam Kyu Kwon
    Department of Electronic Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea.