A non-penalty recurrent neural network for solving a class of constrained optimization problems.

Journal: Neural Networks: the official journal of the International Neural Network Society

Abstract

In this paper, we present a methodology for analyzing the convergence of a class of differential inclusion-based neural networks for solving nonsmooth optimization problems. For a general differential inclusion, we show that if its right-hand-side set-valued map satisfies certain conditions, then the solution trajectory of the differential inclusion converges to the optimal solution set of the corresponding optimization problem. Based on this methodology, we introduce a new recurrent neural network for solving nonsmooth optimization problems. The objective function need not be convex on R^n, nor does the new neural network model require any penalty parameter. We compare the new method with several penalty-based and non-penalty-based models. Moreover, for differentiable cases, we present a circuit diagram of the new neural network.
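The abstract does not state the model equations, so the following is only an illustrative sketch of the general idea, not the paper's actual network: the simplest member of this family of differential-inclusion dynamics is a projected subgradient flow, dx/dt ∈ -∂f(x) combined with projection onto the feasible set, here simulated by forward-Euler discretization on a toy nonsmooth problem (minimize |x - 3| subject to 0 ≤ x ≤ 2, whose optimum is x = 2). The function names and step sizes are ad hoc choices for this sketch.

```python
import numpy as np

def subgrad(x):
    # One subgradient of the nonsmooth objective f(x) = |x - 3|
    # (f is not differentiable at x = 3; any value in [-1, 1] is valid there)
    return float(np.sign(x - 3.0))

def project(x, lo=0.0, hi=2.0):
    # Euclidean projection onto the feasible interval [lo, hi]
    return min(max(x, lo), hi)

def solve(x0=0.5, dt=0.01, steps=2000):
    # Forward-Euler discretization of the projected subgradient flow:
    # x(t+dt) = P_C( x(t) - dt * g ),  g in the subdifferential of f at x(t)
    x = x0
    for _ in range(steps):
        x = project(x - dt * subgrad(x))
    return x

x_star = solve()  # trajectory settles at the constrained optimum x = 2
```

No penalty parameter appears here because feasibility is enforced by the projection rather than by penalizing constraint violation; the paper's contribution concerns more general dynamics and convergence conditions than this toy flow.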

Authors

  • Alireza Hosseini
Department of Mathematics, Statistics and Computer Sciences, University of Tehran, P.O. Box 14115-175, Tehran, Iran; School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran. Electronic address: a.r_hosseini@khayam.ut.ac.ir.