Learn to synchronize, synchronize to learn.

Journal: Chaos (Woodbury, N.Y.)

Abstract

In recent years, the artificial intelligence community has shown sustained interest in research investigating the dynamical aspects of both training procedures and machine learning models. Among recurrent neural networks, the Reservoir Computing (RC) paradigm is of particular interest, as it combines conceptual simplicity with a fast training scheme. Yet, the guiding principles under which RC operates are only partially understood. In this work, we analyze the role played by Generalized Synchronization (GS) when training an RC to solve a generic task. In particular, we show how GS allows the reservoir to correctly encode the system generating the input signal into its dynamics. We also discuss necessary and sufficient conditions for such learning to be feasible in this approach. Moreover, we explore the role that ergodicity plays in this process, showing how its presence allows the learning outcome to apply to multiple input trajectories. Finally, we show that satisfaction of the GS condition can be assessed by means of the mutual false nearest neighbors index, which makes the theoretical derivations readily usable by practitioners.
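The abstract's two computational ingredients, the fast RC training scheme and the mutual false nearest neighbors (MFNN) index, can be illustrated with a short sketch. The snippet below is not the authors' implementation: the echo state network, the sine driver, and all sizes, scalings, and regularization values are illustrative assumptions, and the MFNN computation follows the standard construction of Rulkov et al. (1995) rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative driver: a scalar input signal u(t). Here a simple sine; in the
# paper's setting the input would be generated by a dynamical system.
T = 2000
u = np.sin(0.1 * np.arange(T))

# Echo state network, a common RC instance. Sizes and scalings are assumptions.
N = 200                                          # reservoir size
W_in = rng.uniform(-0.5, 0.5, size=N)            # fixed random input weights
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius to 0.9

# Drive the reservoir: under GS, after a transient the state x(t) becomes a
# function of the drive's state, i.e., it encodes the input-generating system.
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# "Fast training scheme": only the linear readout is fitted, here by ridge
# regression on a one-step-ahead prediction task.
washout = 200                                    # discard the transient
Xw, yw = X[washout:-1], u[washout + 1:]
lam = 1e-6
W_out = np.linalg.solve(Xw.T @ Xw + lam * np.eye(N), Xw.T @ yw)
print("train MSE:", np.mean((Xw @ W_out - yw) ** 2))

def mfnn_index(drive, response, n_points=300):
    """Mutual-false-nearest-neighbors-style index (after Rulkov et al., 1995).

    Values near 1 indicate that nearby drive states map to nearby response
    states, consistent with GS; large values indicate otherwise.
    Brute-force neighbor search, for illustration only.
    """
    sample = rng.choice(len(drive), size=n_points, replace=False)
    ratios = []
    for n in sample:
        dd = np.linalg.norm(drive - drive[n], axis=-1)        # drive distances
        dr = np.linalg.norm(response - response[n], axis=-1)  # response distances
        dd[n] = dr[n] = np.inf                                # exclude the point itself
        nnd, nnr = np.argmin(dd), np.argmin(dr)
        ratios.append((dr[nnd] / dd[nnd]) * (dd[nnr] / dr[nnr]))
    return float(np.mean(ratios))

# Delay-embed the scalar drive so that nearest neighbors are meaningful.
m, tau = 3, 5
U = np.column_stack([u[i * tau: T - (m - 1 - i) * tau] for i in range(m)])
print("MFNN index:", mfnn_index(U[washout:], X[(m - 1) * tau:][washout:]))
```

In this sketch, an MFNN index close to 1 in the final printout is consistent with GS between the drive and the reservoir, whereas values much larger than 1 would suggest that nearby drive states are mapped to distant reservoir states.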

Authors

  • Pietro Verzelli
    Faculty of Informatics, Università della Svizzera Italiana, Lugano 6900, Switzerland.
  • Cesare Alippi
    Department of Electronics, Information, and Bioengineering, Politecnico di Milano, 20133 Milan, Italy.
  • Lorenzo Livi
    Department of Computer Science, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter EX4 4QF, United Kingdom.