Neural network for nonsmooth pseudoconvex optimization with general convex constraints.

Journal: Neural Networks: The Official Journal of the International Neural Network Society

Abstract

In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information about the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state of the proposed network is proved to converge to the feasible region in finite time and subsequently to the optimal solution set of the related optimization problem. In particular, convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and favorable characteristics of the proposed network. In addition, some preliminary theoretical analysis and an application of the proposed network to a wider class of dynamic portfolio optimization problems are included.
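The abstract describes the neurodynamic approach only at a high level, so the sketch below is a minimal, generic illustration of the idea of such networks (a smoothed gradient of a nonsmooth objective combined with a penalty on constraint violation, integrated by forward Euler), not the authors' actual network or regularization function. The toy objective, constraints, smoothing of |t|, and all parameter values (mu, sigma, h, T) are assumptions chosen only for illustration.

```python
import numpy as np

# Hypothetical toy instance (not from the paper): minimize the nonsmooth convex
# (hence pseudoconvex) objective f(x) = |x1 - 2| + |x2 - 2| subject to the
# convex constraints  g1(x) = x1 + x2 - 3 <= 0,  g2(x) = -x1 <= 0,  g3(x) = -x2 <= 0.

mu = 1e-3      # smoothing parameter for |t| (assumed Huber-type smoothing)
sigma = 5.0    # penalty weight on constraint violation (assumed large enough)
h = 1e-3       # forward-Euler step size
T = 20000      # number of Euler steps

target = np.array([2.0, 2.0])

def grad_smoothed_abs(t, mu):
    """Gradient of the Huber-type smoothing of |t|: t^2/(2*mu) if |t| <= mu, else |t| - mu/2."""
    return np.clip(t / mu, -1.0, 1.0)

def grad_objective(x):
    # Gradient of the smoothed objective f_mu(x) = sum_i smoothed |x_i - 2|
    return grad_smoothed_abs(x - target, mu)

def penalty_subgradient(x):
    """One subgradient of the exact penalty p(x) = sum_i max(0, g_i(x))."""
    g = np.array([x[0] + x[1] - 3.0, -x[0], -x[1]])
    grads = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    active = (g > 0).astype(float)           # indicator of violated constraints
    return grads.T @ active

x = np.array([5.0, -3.0])                    # infeasible initial state
for _ in range(T):
    # Forward-Euler discretization of dx/dt = -grad f_mu(x) - sigma * subgrad p(x)
    x = x - h * (grad_objective(x) + sigma * penalty_subgradient(x))

print("approximate solution:", x)            # expected near (2, 1), a minimizer on x1 + x2 = 3
```

In penalty-based dynamics of this kind, a sufficiently large penalty weight typically drives the state into the feasible region in finite time, after which the trajectory follows a (smoothed) gradient flow of the objective within that region; this mirrors the two-phase convergence behavior summarized in the abstract, although the guarantees stated there rely on the authors' specific regularization construction rather than this simple penalty.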

Authors

  • Wei Bian
    Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China; Institute of Advanced Study in Mathematics, Harbin Institute of Technology, Harbin 150001, China. Electronic address: bianweilvse520@163.com.
  • Litao Ma
    Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China; School of Science, Hebei University of Engineering, Handan 056038, China. Electronic address: ltma1821@163.com.
  • Sitian Qin
Department of Mathematics, Harbin Institute of Technology at Weihai, Weihai 264209, China. Electronic address: qinsitian@163.com.
  • Xiaoping Xue
    Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China. Electronic address: xiaopingxue@263.net.