Refined Kolmogorov complexity of analog, evolving and stochastic recurrent neural networks



Bibliographic Details
Published in: Information Sciences, vol. 711, p. 122104
Main authors: Cabessa, Jérémie; Strozecki, Yann
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.09.2025
ISSN: 0020-0255
Online access: Full text
Description
Abstract: Kolmogorov complexity measures the compressibility of real numbers. We provide a refined characterization of the hypercomputational power of analog, evolving, and stochastic neural networks based on the Kolmogorov complexity of their real weights, evolving weights, and real probabilities, respectively. First, we retrieve the infinite hierarchy of complexity classes of analog networks, defined in terms of the Kolmogorov complexity of their real weights. This hierarchy lies between the complexity classes P and P/poly. Next, using a natural identification between real numbers and infinite sequences of bits, we generalize this result to evolving networks, obtaining a similar hierarchy of complexity classes within the same bounds. Finally, we extend these results to stochastic networks that employ real probabilities as randomness, deriving a new infinite hierarchy of complexity classes situated between BPP and BPP/log*. Beyond providing examples of such hierarchies, we describe a generic method for constructing them based on classes of functions of increasing complexity. As a practical application, we show that the predictive capabilities of recurrent neural networks are strongly impacted by the quantization applied to their weights. Overall, these results highlight the relationship between the computational power of neural networks and the intrinsic information contained in their parameters.
DOI: 10.1016/j.ins.2025.122104
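
Illustration of the weight-quantization experiment mentioned in the abstract (a minimal sketch, not code from the paper): the Python snippet below truncates the real-valued weights of a small tanh recurrent network to b fractional bits and measures how far the final hidden state drifts from the full-precision run. The network size, input data, and rounding scheme are illustrative assumptions.

import numpy as np

def quantize(w, bits):
    # Keep `bits` fractional bits per weight: w -> round(w * 2^bits) / 2^bits.
    scale = 2.0 ** bits
    return np.round(w * scale) / scale

def run(x_seq, W_in, W_rec):
    # Plain tanh recurrent update; returns the final hidden state.
    h = np.zeros(W_rec.shape[0])
    for x in x_seq:
        h = np.tanh(W_in @ np.atleast_1d(x) + W_rec @ h)
    return h

rng = np.random.default_rng(0)
W_in = rng.normal(size=(8, 1))             # input weights (8 units, scalar input)
W_rec = rng.normal(size=(8, 8)) / np.sqrt(8)  # recurrent weights
x = rng.normal(size=20)                    # a random input sequence

full = run(x, W_in, W_rec)
for b in (16, 8, 4, 2):
    drift = np.linalg.norm(full - run(x, quantize(W_in, b), quantize(W_rec, b)))
    print(f"{b:2d} fractional bits: ||h_full - h_quant|| = {drift:.3e}")

Retaining fewer bits lowers the Kolmogorov complexity of the weights, which is the quantity the paper's complexity hierarchies are built on; the growing drift at coarser quantization is the kind of degradation of predictive capability the abstract refers to.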