Tensor neural networks for high-dimensional Fokker-Planck equations.

Journal: Neural Networks: the official journal of the International Neural Network Society

Abstract

We solve high-dimensional steady-state Fokker-Planck equations on the whole space using tensor neural networks. Each tensor network is either a linear combination of tensor products of one-dimensional feedforward networks or a linear combination of selected radial basis functions. Tensor feedforward networks allow us to exploit auto-differentiation (in the physical variables) efficiently in major Python packages, while radial basis functions avoid auto-differentiation entirely, which is rather expensive in high dimensions. We then train the tensor networks with physics-informed neural networks and stochastic gradient descent methods. One essential step is to determine a proper bounded domain, or numerical support, for the Fokker-Planck equation. To train the tensor radial basis function networks more reliably, we impose constraints on the parameters, which leads to relatively high accuracy. We demonstrate numerically that tensor neural networks in physics-informed machine learning are efficient for steady-state Fokker-Planck equations in two to ten dimensions.
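To make the architecture concrete, the sketch below builds a rank-R tensor feedforward network u(x) = sum_r c_r prod_d phi_{r,d}(x_d), where each phi_{r,d} is a small one-dimensional MLP, together with a physics-informed residual for a steady-state Fokker-Planck equation with gradient drift -grad V and unit diffusion. This is a minimal PyTorch sketch under our own assumptions, not the authors' code: the names TensorFNN, fp_residual, and grad_V are hypothetical, and the paper's domain-truncation step and parameter constraints for the radial basis function variant are omitted.

    # Hypothetical sketch (not the authors' implementation) of a rank-R
    # tensor feedforward network and a PINN residual for a steady-state
    # Fokker-Planck equation with drift -grad V and unit diffusion.
    import torch
    import torch.nn as nn

    class TensorFNN(nn.Module):
        def __init__(self, dim: int, rank: int, width: int = 16):
            super().__init__()
            # One scalar-input, scalar-output MLP per (rank, dimension) pair.
            self.nets = nn.ModuleList(
                nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))
                for _ in range(rank * dim)
            )
            self.dim, self.rank = dim, rank
            self.coef = nn.Parameter(torch.ones(rank) / rank)  # coefficients c_r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, dim); each 1D factor sees only its own coordinate.
            out = torch.zeros_like(x[:, :1])
            for r in range(self.rank):
                prod = torch.ones_like(x[:, :1])
                for d in range(self.dim):
                    prod = prod * self.nets[r * self.dim + d](x[:, d : d + 1])
                out = out + self.coef[r] * prod
            return out.squeeze(-1)

    def fp_residual(model, x, grad_V):
        # Residual of div(grad u + u * grad V) = 0, assembled by autograd.
        x = x.requires_grad_(True)
        u = model(x)
        grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        flux = grad_u + u.unsqueeze(-1) * grad_V(x)
        div = torch.zeros_like(u)
        for d in range(x.shape[1]):
            div = div + torch.autograd.grad(
                flux[:, d].sum(), x, create_graph=True
            )[0][:, d]
        return div

    # Example usage with an assumed quadratic potential V(x) = |x|^2 / 2,
    # so grad V(x) = x; collocation points sampled on a chosen bounded domain.
    model = TensorFNN(dim=4, rank=5)
    x = torch.randn(128, 4)
    loss = fp_residual(model, x, grad_V=lambda x: x).pow(2).mean()
    loss.backward()  # gradients for an SGD/Adam update

Because each factor network takes a single coordinate as input, the mixed derivatives needed by the Fokker-Planck operator reduce to products of one-dimensional derivatives, which is what makes auto-differentiation in this architecture cheap relative to a generic D-dimensional network.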

Authors

  • Taorui Wang
    Department of Mathematical Sciences, Worcester Polytechnic Institute, Worcester, MA, USA. Electronic address: twang13@wpi.edu.
  • Zheyuan Hu
    National University of Singapore, 21 Lower Kent Ridge Road, 119077, Singapore. Electronic address: e0792494@u.nus.edu.
  • Kenji Kawaguchi
    MIT, Cambridge, MA 02139, USA. Electronic address: kawaguch@mit.edu.
  • Zhongqiang Zhang
    Department of Mathematical Sciences, Worcester Polytechnic Institute, Worcester, MA, USA. Electronic address: zzhang7@wpi.edu.
  • George Em Karniadakis
    Division of Applied Mathematics, Brown University, Providence, RI 02912, USA.