A smooth gradient approximation neural network for general constrained nonsmooth nonconvex optimization problems.

Journal: Neural Networks: The Official Journal of the International Neural Network Society
PMID:

Abstract

Nonsmooth nonconvex optimization problems are pivotal in engineering practice because many real-world systems and models are inherently nonsmooth and nonconvex. The nonsmoothness and nonconvexity of the objective and constraint functions pose significant challenges for the design and convergence analysis of optimization algorithms. This paper presents a smooth gradient approximation neural network for such optimization problems, in which a smooth approximation technique with a time-varying control parameter handles nonsmooth, nonregular objective functions. In addition, a hard comparator function ensures that the state solution of the proposed neural network remains within the nonconvex inequality constraint sets. Any accumulation point of the state solution of the proposed neural network is proved to be a stationary point of the nonconvex optimization problem under consideration. Furthermore, the neural network can find optimal solutions for some generalized convex optimization problems. Compared with related neural networks, the proposed network has weaker convergence conditions and a simpler algorithm structure. Simulation results and an application to condition-number optimization verify the practical applicability of the presented algorithm.
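The paper's algorithm itself is not reproduced here, but the core idea named in the abstract, a smooth gradient approximation with a time-varying control parameter, can be sketched on a toy one-dimensional problem: replace the nonsmooth term |x - 1| with the smooth surrogate sqrt((x - 1)^2 + eps(t)^2), let eps(t) decay over time, and integrate the resulting gradient flow. The objective, the decay schedule, and all parameter values below are illustrative assumptions, not the authors' construction.

```python
import math

def smooth_grad(x, eps):
    """Gradient of the smoothed objective
    f_eps(x) = sqrt((x - 1)^2 + eps^2) + 0.5 * x^2,
    a smooth surrogate for the nonsmooth f(x) = |x - 1| + 0.5 * x^2."""
    return (x - 1.0) / math.sqrt((x - 1.0) ** 2 + eps ** 2) + x

def gradient_flow(x0=0.0, dt=0.01, steps=5000):
    """Forward-Euler integration of the flow dx/dt = -grad f_eps(t)(x),
    with a time-varying smoothing parameter eps(t) that decays toward
    a small floor (an illustrative schedule, not the paper's control law)."""
    x, t = x0, 0.0
    for _ in range(steps):
        eps = max(1e-4, math.exp(-0.3 * t))
        x -= dt * smooth_grad(x, eps)
        t += dt
    return x

x_star = gradient_flow()
# The minimizer of |x - 1| + 0.5*x^2 is x = 1 (zero lies in the
# subdifferential there), and the flow settles close to it.
```

As eps(t) shrinks, the smooth gradient approaches a subgradient of the original objective, which is why the state of such a flow can accumulate at a stationary point of the nonsmooth problem.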

Authors

  • Na Liu
  • Wenwen Jia
    Department of Mathematics, Harbin Institute of Technology, Weihai, PR China. Electronic address: lejww123@163.com.
  • Sitian Qin
    Department of Mathematics, Harbin Institute of Technology at Weihai, Weihai 264209, PR China. Electronic address: qinsitian@163.com.