A second-order accelerated neurodynamic approach for distributed convex optimization.

Journal: Neural networks : the official journal of the International Neural Network Society

Abstract

Based on the theory of inertial systems, a second-order accelerated neurodynamic approach is designed to solve distributed convex optimization problems with inequality and set constraints. Most existing approaches to distributed convex optimization are first-order, and it is generally difficult to analyze the convergence rate of their state solutions. Owing to the control design for acceleration, second-order neurodynamic approaches can often achieve faster convergence rates. Moreover, existing second-order approaches are mostly designed for unconstrained distributed convex optimization problems and are not suitable for constrained ones. It is shown that the state solution of the designed neurodynamic approach converges to the optimal solution of the considered distributed convex optimization problem. Furthermore, an error function that characterizes the performance of the designed neurodynamic approach exhibits superquadratic convergence. Several numerical examples illustrate the effectiveness of the presented second-order accelerated neurodynamic approach.
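To give a concrete feel for the kind of dynamics the abstract describes, below is a minimal illustrative sketch, not the paper's actual method: a generic second-order (inertial) primal-dual protocol for unconstrained distributed consensus optimization, discretized with forward Euler. All names (`simulate`, `gamma`, the ring graph, the quadratic costs) are assumptions made for this example; the paper additionally handles inequality and set constraints, which this sketch omits.

```python
import numpy as np

# Hypothetical sketch (NOT the paper's exact dynamics): each agent i keeps a
# local copy x[i] of the decision variable and a quadratic local cost
# f_i(x) = 0.5 * (x - b[i])**2, so the consensus optimum of sum_i f_i is
# the mean of b. The second-order (inertial) dynamics are
#   x'' = -gamma * x' - grad f(x) - L x - L z,   z' = L x,
# where L is the graph Laplacian and z is a dual variable enforcing consensus.

def simulate(b, adjacency, gamma=4.0, dt=0.01, steps=20000):
    n = len(b)
    x = np.zeros(n)   # agent states (local copies of the decision variable)
    v = np.zeros(n)   # agent velocities -- this is what makes it second order
    z = np.zeros(n)   # dual variables enforcing exact consensus
    L = np.diag(adjacency.sum(axis=1)) - adjacency  # graph Laplacian
    for _ in range(steps):
        grad = x - b                                # local gradients of f_i
        a = -gamma * v - grad - L @ x - L @ z       # inertial acceleration
        x = x + dt * v
        v = v + dt * a
        z = z + dt * (L @ x)
    return x

# Ring graph of 4 agents; the consensus optimum is mean(b) = 2.5
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
b = np.array([1.0, 2.0, 3.0, 4.0])
x_final = simulate(b, A)
```

The velocity state `v` is the hallmark of a second-order approach: the damping coefficient `gamma` is a design knob (here chosen large enough to keep every Laplacian mode stable), which is the kind of "control design for the acceleration" that first-order gradient flows do not offer.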

Authors

  • Xinrui Jiang
    College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China.
  • Sitian Qin
    Department of Mathematics, Harbin Institute of Technology at Weihai, Weihai 264209, PR China. Electronic address: qinsitian@163.com.
  • Xiaoping Xue
    Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China. Electronic address: xiaopingxue@263.net.
  • Xinzhi Liu
    Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1. Electronic address: xzliu@uwaterloo.ca.