Capacity-Resolution Trade-Off in the Optimal Learning of Multiple Low-Dimensional Manifolds by Attractor Neural Networks



Detailed bibliography
Published in: Physical Review Letters, Volume 124, Issue 4, p. 048302
Main authors: Battista, Aldo; Monasson, Rémi
Format: Journal Article
Language: English
Published: United States: American Physical Society, 31.01.2020
ISSN: 0031-9007 (print), 1079-7114 (online)
Description
Summary: Recurrent neural networks (RNN) are powerful tools to explain how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ∼N^{2} pairwise interactions in a RNN with N neurons to embed L manifolds of dimension D≪N. We show that the capacity, i.e., the maximal ratio L/N, decreases as |logε|^{-D}, where ε is the error on the position encoded by the neural activity along each manifold. Hence, RNN are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
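The scaling law quoted in the abstract, capacity α = L/N ∝ |log ε|^{-D}, can be illustrated numerically. The sketch below is not the paper's derivation; the prefactor `c` is a purely hypothetical constant chosen for illustration, and only the functional dependence on ε and D reflects the stated result.

```python
import math

def capacity_scaling(eps, D, c=1.0):
    """Illustrative capacity alpha = L/N ~ c * |log eps|^(-D).

    eps : positional error along each manifold (0 < eps < 1)
    D   : manifold dimension
    c   : hypothetical prefactor, NOT taken from the paper
    """
    return c * abs(math.log(eps)) ** (-D)

# Finer spatial resolution (smaller eps) lowers the capacity,
# and the drop is steeper for higher-dimensional manifolds.
for D in (1, 2, 3):
    caps = [capacity_scaling(eps, D) for eps in (1e-1, 1e-2, 1e-3)]
    assert caps[0] > caps[1] > caps[2]
```

The assertions check the qualitative trade-off the abstract describes: demanding higher resolution (ε → 0) reduces the number of manifolds per neuron that can be stored, with a stronger penalty as D grows.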
DOI: 10.1103/PhysRevLett.124.048302