Event-Triggered Optimal Bipartite Consensus Control for Constrained Multiagent Systems via Internal Reinforce Q-Learning

Journal: IEEE Transactions on Cybernetics

Abstract

In this article, the event-triggered optimal bipartite consensus control problem is investigated for second-order discrete-time multiagent systems (MASs) with control input saturation and unknown system models. First, an instant reward signal built on nonquadratic functions that handle the control input saturation is defined, based on which a novel internal reinforce reward function is constructed to help agents learn more intrinsic information from the local environment. Then, a novel event-triggered internal reinforce Q-learning (IrQL) algorithm is introduced. In contrast to conventional time-triggered Q-learning methods, the proposed event-triggered IrQL algorithm not only fully exploits the environment but also saves data computation and transmission resources. Based on functional analysis techniques and Lyapunov stability theory, the internal reinforce reward function is proved to be bounded, and the tracking error dynamics of the MASs are shown to be asymptotically stable under the proposed event-triggered control policies. Then, data-driven reinforce-critic-actor neural networks are constructed to implement the event-triggered IrQL algorithm online, with a proof of convergence. Finally, simulation examples demonstrate the validity of the proposed method and its improved performance over existing approaches.
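
The abstract does not reproduce the nonquadratic functions used to handle input saturation. A common construction in the saturated-input optimal control literature, and a plausible reading of the setup here, penalizes the control through an inverse-hyperbolic-tangent integrand; the symbols e_i, Q_i, R_i, and the bound \lambda below are assumed for illustration, not taken from the paper:

    r_i(k) = e_i(k)^{\top} Q_i \, e_i(k) + W\big(u_i(k)\big),
    \qquad
    W(u) = 2 \int_{0}^{u} \lambda \tanh^{-1}(v/\lambda)^{\top} R_i \, \mathrm{d}v,
    \quad R_i = R_i^{\top} \succ 0.

Because the integrand diverges as the control approaches the saturation bound \lambda, minimizing a cost of this form keeps the learned policy strictly inside the constraint.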
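The event-triggered mechanism can likewise be sketched in code. The toy loop below is a minimal illustration under assumed forms only (the scalar dynamics, trigger rule, and gains are hypothetical, not the paper's design): the control is recomputed and broadcast only when the gap between the current state and the last broadcast state crosses a threshold, and the stage cost uses the closed form of the nonquadratic penalty shown above.

    import math

    def saturate(u, lam=1.0):
        """Clamp the control to the assumed saturation bound |u| <= lam."""
        return max(-lam, min(lam, u))

    def nonquadratic_penalty(u, lam=1.0, r=1.0):
        """Closed form of 2*r*integral_0^u lam*atanh(v/lam) dv (assumed cost):
        nonnegative, and it blows up as |u| approaches lam, so minimizing it
        keeps the control strictly inside the saturation bound."""
        z = max(-0.999999, min(0.999999, u / lam))  # keep atanh/log finite
        return 2.0 * r * lam * lam * (z * math.atanh(z)
                                      + 0.5 * math.log(1.0 - z * z))

    # Toy event-triggered loop: the control is recomputed (an "event" fires)
    # only when the gap from the last broadcast state exceeds a threshold.
    x, x_hat, u_hat, events, total_cost = 1.0, 1.0, 0.0, 0, 0.0
    for k in range(200):
        if abs(x - x_hat) > 0.2 * abs(x) + 1e-3:  # hypothetical trigger rule
            u_hat = saturate(-0.8 * x)            # policy evaluated only at events
            x_hat, events = x, events + 1
        total_cost += x * x + nonquadratic_penalty(u_hat)
        x = 0.9 * x + 0.1 * u_hat                 # assumed scalar dynamics
    print(f"events fired: {events} of 200 steps; final state {x:.4f}")

Between events the held control u_hat acts as a zero-order hold, which is what saves the transmission and computation resources the abstract refers to; the learning update in the paper's IrQL algorithm would similarly occur only at event instants.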

Authors

  • Meng Wang
    State Key Laboratory of Urban Water Resource and Environment, School of Environment, Harbin Institute of Technology, Harbin 150001, China.
  • Xueqian Gui
  • Huaicheng Yan
  • Cong Bi
