Capacity-resolution trade-off in the optimal learning of multiple low-dimensional manifolds by attractor neural networks

Presenter
Rémi Monasson - Centre National de la Recherche Scientifique (CNRS)
November 18, 2019
Abstract
Recurrent neural networks (RNNs) are powerful tools for explaining how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ~N^2 pairwise interactions in an RNN of N neurons so as to embed L manifolds of dimension D ≪ N. We show that the capacity, i.e. the maximal ratio L/N, decreases as |log ε|^(-D), where ε is the error on the position encoded by the neural activity along each manifold. Hence, RNNs are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
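To make the abstract's setting concrete, here is a minimal numerical sketch. It does not implement the optimal learning rule of the talk; it uses the classical Hebbian ring network (a single D = 1 manifold) to illustrate what embedding a manifold as a continuous attractor means, then prints the claimed |log ε|^(-D) capacity scaling with the unspecified prefactor set to 1.

```python
import numpy as np

# A single ring manifold (D = 1) embedded as a continuous attractor via
# classical Hebbian couplings -- an illustration, not the talk's method.
N = 200                                    # number of neurons
theta = 2 * np.pi * np.arange(N) / N       # preferred positions on the ring
J = np.cos(theta[:, None] - theta[None, :]) / N   # ~N^2 pairwise couplings
g = 3.0                                    # gain; g > 2 sustains an activity bump

rng = np.random.default_rng(0)
r = rng.normal(scale=0.1, size=N)          # noisy, high-dimensional initial state
for _ in range(100):                       # relax under recurrent dynamics
    r = np.tanh(g * (J @ r))

phi = np.angle(np.exp(1j * theta) @ r)     # decode position along the manifold
print(f"activity bump settles at {phi:.3f} rad (any ring position is a fixed point)")

# The talk's scaling claim: capacity L/N ~ |log eps|^(-D),
# shown here with prefactor 1 since the abstract gives none.
for D in (1, 2, 3):
    for eps in (1e-1, 1e-3):
        alpha = abs(np.log(eps)) ** (-D)
        print(f"D={D}, eps={eps:.0e}: |log eps|^-D = {alpha:.4f}")
```

Starting from noise, the network converges to a localized bump of activity whose position along the ring is a continuous family of fixed points; the scaling loop shows how demanding finer positional resolution ε (and higher manifold dimension D) shrinks the number of manifolds per neuron that can be stored.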