Batch gradient based smoothing L1/2 regularization for training pi-sigma higher-order networks.
Journal:
Scientific Reports
Published Date:
Jul 8, 2025
Abstract
A Pi-Sigma neural network (PSNN) is a neural network architecture that combines the structure of conventional neural networks with ideas from polynomial approximation. Training a PSNN means adjusting the weights and the coefficients of the polynomial functions to reduce the error between the desired and actual outputs. The PSNN generalizes the conventional feedforward neural network and is especially useful for function approximation tasks. Pruning superfluous connections from large networks is a popular and practical way to determine an appropriate size for a neural network. L1/2 regularization is known to be beneficial for sparse modeling; however, its nonsmoothness can give rise to an oscillation phenomenon during training. This study proposes a smoothing L1/2 regularization method for a PSNN that makes the models sparser and helps them learn more quickly. The new smoothing L1/2 regularizer eliminates the oscillation and, moreover, enables us to establish weak and strong convergence results for the PSNN. To guarantee convergence, we also link the learning rate parameter to the penalty parameter. Simulation results are presented which demonstrate that the smoothing L1/2 regularization performs significantly better than the original L1/2 regularization, thereby supporting the theoretical conclusions.
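The abstract does not give the network equations or the exact smoothing function, so the following is a minimal sketch under stated assumptions: a single-output pi-sigma network (K linear summing units feeding one product unit and a sigmoid), squared error trained by batch gradient descent, and a piecewise-polynomial smoothing of |w| of the kind used in the smoothing-L1/2 literature. The constants A and LAM, the learning rate, the toy data, and all function names are illustrative, not the paper's.

```python
import numpy as np

A, LAM = 0.05, 1e-3   # smoothing half-width and penalty weight (assumed values)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smooth_abs(w):
    # Smoothed |w|: exact outside [-A, A]; inside, a quartic that matches
    # |w| in value and slope at w = +/-A and equals 3A/8 > 0 at w = 0.
    poly = -w**4 / (8 * A**3) + 3 * w**2 / (4 * A) + 3 * A / 8
    return np.where(np.abs(w) >= A, np.abs(w), poly)

def smooth_abs_grad(w):
    poly = -w**3 / (2 * A**3) + 3 * w / (2 * A)
    return np.where(np.abs(w) >= A, np.sign(w), poly)

def penalty(W):
    # Smoothing L1/2 penalty: sum_i f(w_i)^(1/2), with f the smoothed |.|
    return LAM * np.sum(smooth_abs(W) ** 0.5)

def penalty_grad(W):
    # f >= 3A/8 everywhere, so f^(-1/2) never blows up at w = 0.
    return LAM * 0.5 * smooth_abs(W) ** (-0.5) * smooth_abs_grad(W)

def forward(W, X):
    # Pi-sigma forward pass: K summing units, one product unit, sigmoid out.
    H = X @ W.T                      # (batch, K) summing-unit outputs
    P = np.prod(H, axis=1)           # product unit
    return H, P, sigmoid(P)

def batch_step(W, X, T, lr=0.05):
    # One batch-gradient step on squared error plus the smoothed penalty.
    H, P, Y = forward(W, X)
    delta = (Y - T) * Y * (1.0 - Y)  # dE/dP per sample (sigmoid output)
    grad = np.zeros_like(W)
    for j in range(W.shape[0]):      # product rule: hold the other units fixed
        others = np.prod(np.delete(H, j, axis=1), axis=1)
        grad[j] = (delta * others) @ X
    grad /= len(X)
    return W - lr * (grad + penalty_grad(W))

# Toy usage: K = 2 summing units on 3 inputs, labels from a sign product.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
T = (X[:, 0] * X[:, 1] > 0).astype(float)
W = 0.1 * rng.normal(size=(2, 3))
for _ in range(500):
    W = batch_step(W, X, T)
_, _, Y = forward(W, X)
print("mse:", np.mean((Y - T) ** 2), "penalty:", penalty(W))
```

Because the smoothed f(w) stays at least 3A/8 at w = 0, the penalty gradient remains finite and continuous through zero; with the raw |w|^(1/2) penalty the (sub)gradient grows without bound near zero and flips sign across it, which is the oscillation the smoothing is meant to remove.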