Two-timescale recurrent neural networks for distributed minimax optimization.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

In this paper, we present two-timescale neurodynamic approaches to distributed minimax optimization. We propose four multilayer recurrent neural networks for solving four types of generally nonlinear convex-concave minimax problems subject to linear equality and nonlinear inequality constraints. We derive sufficient conditions that guarantee the stability and optimality of the neural networks, and we demonstrate their viability and efficiency in two specific paradigms: Nash-equilibrium seeking in a zero-sum game and distributed constrained nonlinear optimization.
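To illustrate the two-timescale idea at the heart of the abstract, the following is a minimal sketch (not the paper's networks, and without the equality/inequality constraint handling): continuous-time gradient descent-ascent dynamics for a simple convex-concave saddle function f(x, y) = x^2 + 2xy - y^2, discretized by forward Euler. The minimizing variable x evolves on a slow timescale (scaled by a hypothetical factor `eps`) while the maximizing variable y evolves on a fast one; all function and parameter names here are illustrative assumptions.

```python
# Sketch of two-timescale gradient descent-ascent dynamics for the
# convex-concave saddle function f(x, y) = x^2 + 2xy - y^2, whose
# unique saddle point is (0, 0). This is an illustrative toy, not the
# constrained multilayer networks proposed in the paper.

def grad_x(x, y):
    """Partial derivative df/dx of f(x, y) = x^2 + 2xy - y^2."""
    return 2.0 * x + 2.0 * y

def grad_y(x, y):
    """Partial derivative df/dy of f(x, y) = x^2 + 2xy - y^2."""
    return 2.0 * x - 2.0 * y

def run_dynamics(x0=1.0, y0=-1.0, eps=0.1, dt=0.01, steps=20000):
    """Forward-Euler integration of the two-timescale dynamics.

    x descends f on a slow timescale (rate scaled by eps),
    y ascends f on a fast timescale.
    """
    x, y = x0, y0
    for _ in range(steps):
        x -= dt * eps * grad_x(x, y)   # slow descent in x
        y += dt * grad_y(x, y)         # fast ascent in y
    return x, y

x_star, y_star = run_dynamics()
print(x_star, y_star)  # both approach 0, the saddle point
```

The timescale separation (eps << 1) lets the fast maximizing variable track its best response while the slow minimizing variable drifts toward the saddle point, the mechanism the two-timescale neurodynamic approach exploits for minimax problems.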

Authors

  • Zicong Xia
    School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China.
  • Yang Liu
    Department of Computer Science, Hong Kong Baptist University, Hong Kong, China.
  • Jiasen Wang
    National Clinical Research Center for Otolaryngologic Diseases, College of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China.
  • Jun Wang
    Department of Speech, Language, and Hearing Sciences and the Department of Neurology, The University of Texas at Austin, Austin, TX 78712, USA.