Complexity-calibrated benchmarks for machine learning reveal when prediction algorithms succeed and mislead

Detailed bibliography
Published in: Scientific Reports, Volume 14, Issue 1, Article 8727 (11 pages)
Main authors: Marzen, Sarah E.; Riechers, Paul M.; Crutchfield, James P.
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK (Nature Portfolio), 16.04.2024
ISSN: 2045-2322
Online access: Get full text
Description
Summary: Recurrent neural networks are used to forecast time series in finance, climate, language, and many other domains. Reservoir computers are a particularly easy-to-train form of recurrent neural network. Recently, a “next-generation” reservoir computer was introduced in which the memory trace involves only a finite number of previous symbols. We explore the inherent limitations of finite-past memory traces in this intriguing proposal. A lower bound from Fano’s inequality shows that, on highly non-Markovian processes generated by large probabilistic state machines, next-generation reservoir computers with reasonably long memory traces have an error probability that is at least ~60% higher than the minimal attainable error probability in predicting the next observation. More generally, it appears that popular recurrent neural networks fall far short of optimally predicting such complex processes. These results highlight the need for a new generation of optimized recurrent neural network architectures. Alongside this finding, we present concentration-of-measure results for randomly generated but complex processes. One conclusion is that large probabilistic state machines (specifically, large ε-machines) are key to generating challenging and structurally unbiased stimuli for ground-truthing recurrent neural network architectures.
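To make the “finite-past memory trace” idea concrete, below is a minimal Python sketch of a next-generation reservoir computer for a binary symbol sequence: the feature vector is built from the k most recent symbols plus their pairwise products, and a ridge-regression readout predicts the next symbol. The binary alphabet, window length, quadratic feature map, and all function names are illustrative assumptions, not details taken from the paper.

import numpy as np

def ngrc_features(window):
    # Feature vector for one window: bias term, the k raw symbols, and all
    # unique pairwise products (a quadratic nonlinearity over the window).
    w = np.asarray(window, dtype=float)
    quad = np.outer(w, w)[np.triu_indices(len(w))]
    return np.concatenate(([1.0], w, quad))

def fit_readout(symbols, k, ridge=1e-6):
    # Ridge-regression readout trained to map the last k symbols to the next one.
    X = np.array([ngrc_features(symbols[t - k:t]) for t in range(k, len(symbols))])
    y = np.array(symbols[k:], dtype=float)
    # Closed-form ridge solution: W = (X^T X + ridge * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

def predict_next(weights, window):
    # Threshold the readout output to obtain a 0/1 prediction for the next symbol.
    return int(ngrc_features(window) @ weights > 0.5)

# Toy usage on an i.i.d. binary sequence; the paper's benchmarks would instead
# draw sequences from large, randomly generated epsilon-machines.
rng = np.random.default_rng(0)
seq = rng.integers(0, 2, size=10_000)
weights = fit_readout(seq, k=5)
print(predict_next(weights, seq[-5:]))

Because the feature vector depends only on a fixed window of past symbols, such a model cannot capture arbitrarily long-range dependencies; that is the limitation the paper quantifies via Fano’s inequality on highly non-Markovian processes.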
Funding: USDOE Office of Science (SC); US Army Research Laboratory (USARL); US Army Research Office (ARO); USDOE; US Air Force Office of Scientific Research (AFOSR); Templeton World Charity Foundation; Foundational Questions Institute and Fetzer Franklin Fund
Grant numbers: SC0017324; FA9550-19-1-0411; TWCF0570; FQXI-RFP-CPW-2007; W911NF-21-1-0048; W911NF-18-1-0028
DOI: 10.1038/s41598-024-58814-0