Dynamic event-triggered controller design for nonlinear systems: Reinforcement learning strategy.

Journal: Neural networks : the official journal of the International Neural Network Society
Abstract

This investigation addresses the optimal control problem for discrete-time nonstrict-feedback nonlinear systems by invoking a reinforcement learning-based backstepping technique and neural networks. The dynamic event-triggered control strategy introduced in this paper reduces the communication frequency between the actuator and the controller. Based on the reinforcement learning strategy, actor-critic neural networks are employed to implement the n-th-order backstepping framework. A neural network weight-updating algorithm is then developed to minimize the computational burden and avoid the local-optimum problem. Furthermore, a novel dynamic event-triggered strategy is introduced that markedly outperforms the previously studied static event-triggered strategy. Moreover, by Lyapunov stability theory, all signals in the closed-loop system are rigorously proven to be semiglobally uniformly ultimately bounded. Finally, the practicality of the proposed control algorithms is demonstrated through numerical simulation examples.
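To illustrate the difference between static and dynamic event-triggering that the abstract highlights, the sketch below simulates a simple scalar discrete-time plant under both rules. This is a generic illustration, not the paper's controller: the plant, gains, thresholds, and the internal dynamic variable `eta` (in the style of standard dynamic triggering, where the static condition is relaxed by a nonnegative auxiliary state) are all assumed for the example.

```python
# Hypothetical scalar plant x_{k+1} = a*x_k + k_gain*x_hat_k, where x_hat is
# the last state transmitted to the controller. All parameters are assumed
# for illustration and do not come from the paper.
a, k_gain = 0.9, -0.5
sigma, eps = 0.1, 0.01          # static-trigger threshold parameters
theta, rho = 5.0, 0.95          # dynamic-trigger parameters (assumed)

def simulate(dynamic, steps=200, x0=1.0):
    x, xh = x0, x0               # true state and last transmitted state
    eta = 0.5                    # nonnegative internal dynamic variable
    events = 0
    for _ in range(steps):
        err = abs(xh - x)
        margin = sigma * abs(x) + eps - err   # >= 0 while error is small
        # Static rule fires as soon as the margin is violated; the dynamic
        # rule tolerates a violation while eta + theta*margin stays >= 0.
        fire = (eta + theta * margin < 0) if dynamic else (margin < 0)
        if fire:
            xh, events = x, events + 1        # transmit the current state
            margin = sigma * abs(x) + eps     # error is reset to zero
        eta = max(rho * eta + margin, 0.0)    # dynamic-variable update
        x = a * x + k_gain * xh               # closed-loop plant step
    return events, abs(x)

static_events, _ = simulate(dynamic=False)
dynamic_events, _ = simulate(dynamic=True)
print("static triggers:", static_events, "dynamic triggers:", dynamic_events)
```

Because the dynamic rule only fires once the accumulated slack `eta` is exhausted, it typically transmits less often than the static rule on the same plant while keeping the state bounded, which is the qualitative advantage the abstract claims.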

Authors

  • Zichen Wang
    Department of Pharmacology and Systems Therapeutics, Department of Genetics and Genomic Sciences, BD2K-LINCS Data Coordination and Integration Center (DCIC), Mount Sinai's Knowledge Management Center for Illuminating the Druggable Genome (KMC-IDG), Icahn School of Medicine at Mount Sinai, New York, NY, USA.
  • Xin Wang
    Key Laboratory of Bio-based Material Science & Technology (Northeast Forestry University), Ministry of Education, Harbin 150040, China.
  • Ning Pang
    College of Westa, Southwest University, Chongqing, 400715, China.