On robust learning of memory attractors with noisy deep associative memory networks.
Journal:
Neural Networks: the official journal of the International Neural Network Society
Published Date:
Apr 21, 2025
Abstract
Developing computational mechanisms for memory systems is a long-standing focus in machine learning and neuroscience. Recent studies have shown that overparameterized autoencoders (OAEs) implement associative memory (AM) by encoding training data as attractors. However, learning memory attractors requires that the norms of all eigenvalues of the input-output Jacobian matrix be strictly less than one. Motivated by the observed strong negative correlation between attractor robustness and the largest singular value of the Jacobian matrix, we develop noisy overparameterized autoencoders (NOAEs), which learn robust attractors by injecting random noise into the inputs during training. Theoretical analysis shows that the training objective of the NOAE approximately minimizes an upper bound on the weighted sum of the reconstruction error and the square of the largest singular value. Extensive experiments on numerical and image-based datasets show that NOAEs not only increase the rate at which training samples become attractors but also improve attractor robustness. Code is available at https://github.com/RaoXuan-1998/neural-netowrk-journal-NOAE.