Desynchronous learning in a physics-driven learning network.

Journal: The Journal of Chemical Physics
Published Date:

Abstract

In a biological neural network, each synapse updates individually using local information, allowing for entirely decentralized learning. In contrast, elements in an artificial neural network are typically updated simultaneously using a central processor. Here, we investigate the feasibility and effect of desynchronous learning in a recently introduced decentralized, physics-driven learning network. We show that desynchronizing the learning process does not degrade performance for a variety of tasks in an idealized simulation. In experiments, desynchronization actually improves performance by allowing the system to better explore the discretized state space of solutions. We draw an analogy between desynchronization and mini-batching in stochastic gradient descent and show that they have similar effects on the learning process. Desynchronizing the learning process establishes physics-driven learning networks as truly fully distributed learning machines, promoting better performance and scalability in deployment.
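The abstract's analogy between desynchronized updates and mini-batching can be illustrated with a toy sketch (this is not the authors' coupled-learning rule, just a minimal stand-in): on a quadratic fitting task, compare synchronous gradient descent, where every parameter updates each step, against a desynchronized variant where each parameter updates independently with some probability per step. The task, learning rate, and update probability below are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: recover w_true from linear measurements y = X @ w_true.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true

def train(update_prob=1.0, steps=1000, lr=0.05):
    """Gradient descent on mean-squared error where each weight updates
    independently with probability `update_prob` per step.
    update_prob=1.0 recovers fully synchronous updates."""
    w = np.zeros(10)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        # Desynchronization: a random subset of elements updates each
        # step, loosely analogous to mini-batching the update process.
        mask = rng.random(10) < update_prob
        w[mask] -= lr * grad[mask]
    return np.mean((X @ w - y) ** 2)

sync_loss = train(update_prob=1.0, steps=1000)
# Fewer elements update per step, so allow more steps overall.
desync_loss = train(update_prob=0.3, steps=3000)
print(sync_loss, desync_loss)
```

In this idealized convex setting both schedules reach low loss, mirroring the paper's simulation result that desynchronization need not degrade learning; the abstract's experimental finding, that desynchronization helps explore a discretized solution space, has no counterpart in this smooth toy problem.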

Authors

  • J F Wycoff
    Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.
  • S Dillavou
    Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.
  • M Stern
    Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.
  • A J Liu
    Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.
  • D J Durian
    Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.