Distributed nonconvex optimization subject to globally coupled constraints via collaborative neurodynamic optimization.

Journal: Neural networks : the official journal of the International Neural Network Society

Abstract

In this paper, a recurrent neural network is proposed for distributed nonconvex optimization subject to globally coupled (in)equality constraints and local bound constraints. Two distributed optimization models are established, namely a resource allocation problem and a consensus-constrained optimization problem, in which the objective functions are not necessarily convex or the constraints do not guarantee a convex feasible set. To handle the nonconvexity, an augmented Lagrangian function is designed, based on which a recurrent neural network is developed for solving the optimization models in a distributed manner, and convergence to a locally optimal solution is proven. To search for globally optimal solutions, a collaborative neurodynamic optimization method is established by employing multiple such recurrent neural networks together with a meta-heuristic rule. A numerical example, a simulation of an electricity market, and a distributed cooperative control problem are provided to verify and demonstrate the characteristics of the main results.
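The abstract does not specify the network dynamics or the meta-heuristic rule. As an illustration only, a common pattern in the collaborative neurodynamic optimization literature combines several neurodynamic local searches with particle swarm optimization as the meta-heuristic that reinitializes their starting states. The sketch below assumes a projected gradient flow (Euler-discretized) as a stand-in for the paper's recurrent neural network, and the Rastrigin function as a stand-in nonconvex objective with box constraints; none of these choices are taken from the paper itself.

```python
import numpy as np

def objective(x):
    """Rastrigin function: a standard nonconvex test objective (global minimum 0 at the origin)."""
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def gradient(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def local_search(x0, lb, ub, step=0.002, iters=1000):
    """Euler discretization of a projected gradient flow; a stand-in for the RNN dynamics."""
    x = x0.copy()
    for _ in range(iters):
        x = np.clip(x - step * gradient(x), lb, ub)  # projection enforces the bound constraints
    return x

def collaborative_neurodynamic_optimization(n_nets=8, dim=2, lb=-5.12, ub=5.12,
                                            rounds=15, seed=0):
    rng = np.random.default_rng(seed)
    states = rng.uniform(lb, ub, size=(n_nets, dim))  # initial states of the n_nets "networks"
    vel = np.zeros_like(states)
    # Each network runs a local search; a PSO-style rule then resets the starting states.
    pbest = np.array([local_search(s, lb, ub) for s in states])
    pbest_val = np.array([objective(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_val)].copy()
    gbest_val = pbest_val.min()
    for _ in range(rounds):
        r1, r2 = rng.random(states.shape), rng.random(states.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - states) + 1.5 * r2 * (gbest - states)
        states = np.clip(states + vel, lb, ub)
        sols = np.array([local_search(s, lb, ub) for s in states])
        vals = np.array([objective(s) for s in sols])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = sols[improved], vals[improved]
        if vals.min() < gbest_val:
            gbest, gbest_val = sols[np.argmin(vals)].copy(), vals.min()
    return gbest, gbest_val

best_x, best_val = collaborative_neurodynamic_optimization()
print(best_x, best_val)
```

The key design point is the division of labor: each local search only converges to a nearby local optimum, while the swarm-level update exchanges the best solutions found so far and relaunches the local dynamics from more promising initial states.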

Authors

  • Zicong Xia
    School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China.
  • Yang Liu
    Department of Computer Science, Hong Kong Baptist University, Hong Kong, China.
  • Cheng Hu
    College of Mathematics and System Sciences, Xinjiang University, Urumqi, 830046, Xinjiang, PR China.
  • Haijun Jiang
    College of Mathematics and System Sciences, Xinjiang University, Urumqi, 830046, Xinjiang, PR China. Electronic address: jianghaijunxju@163.com.