Capacity-Resolution Trade-Off in the Optimal Learning of Multiple Low-Dimensional Manifolds by Attractor Neural Networks


Bibliographic Details
Published in: Physical Review Letters, Vol. 124, No. 4, p. 048302
Main Authors: Battista, Aldo; Monasson, Rémi
Format: Journal Article
Language: English
Published: American Physical Society, United States, 31.01.2020
ISSN: 0031-9007, 1079-7114
Description
Summary: Recurrent neural networks (RNNs) are powerful tools to explain how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ∼N^{2} pairwise interactions in an RNN with N neurons to embed L manifolds of dimension D≪N. We show that the capacity, i.e., the maximal ratio L/N, decreases as |log ε|^{-D}, where ε is the error on the position encoded by the neural activity along each manifold. Hence, RNNs are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
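The capacity-resolution trade-off stated in the summary can be written compactly as follows (symbols as defined there; the abstract gives only the scaling, so the prefactor is left unspecified):

```latex
% Capacity-resolution trade-off, as stated in the abstract:
% the maximal number of storable manifolds per neuron scales as
\alpha_c \;=\; \frac{L}{N} \;\sim\; \left|\log \varepsilon\right|^{-D},
\qquad D \ll N,
% where \varepsilon is the error on the position encoded along each
% manifold and D is the manifold dimension. Higher spatial resolution
% (smaller \varepsilon) thus reduces the capacity only logarithmically.
```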
DOI: 10.1103/PhysRevLett.124.048302