A note on computing with Kolmogorov Superpositions without iterations

Bibliographic Details
Published in: Neural Networks, Vol. 144, pp. 438–442
Authors: Demb, Robert; Sprecher, David
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.12.2021
ISSN: 0893-6080, 1879-2782
Description
Abstract: We extend Kolmogorov’s Superpositions to the approximation of arbitrary continuous functions with a noniterative approach that can be used by any neural network based on these superpositions. Our approximation algorithm uses a modified dimension-reducing function that allows an increased number of summands to achieve an error bound commensurate with that of r iterations, for any r. This new variant of Kolmogorov’s Superpositions improves upon the parallelism inherent in the original by performing highly distributed parallel computations without synchronization. This makes implementations on networks of modern parallel hardware much easier and more efficient, and thus makes the approach a more practical tool.
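
As a rough illustration of the structure the abstract refers to, the Python sketch below evaluates a Sprecher-style superposition f(x_1, ..., x_n) ≈ Σ_{q=0}^{Q} Φ_q(Σ_{p=1}^{n} λ_p ψ(x_p + q·a)), in which each summand q is computed independently and only combined at the final sum, so no synchronization is needed between summands. This is a minimal sketch under stated assumptions: the names superposition, psi, outer_fns, lambdas, a, and all placeholder choices are hypothetical illustrations, and the paper’s modified dimension-reducing function and its choice of the number of summands matching the error of r iterations are not reproduced here.

# Structural sketch only (not the authors' construction): evaluate
#   f(x) ~ sum_{q=0}^{Q} Phi_q( sum_{p=1}^{n} lambda_p * psi(x_p + q*a) )
# with placeholder inner/outer functions.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def superposition(x, psi, outer_fns, lambdas, a):
    """Evaluate the superposition at one point x in [0, 1]^n.

    Each summand q depends only on its own outer function Phi_q, so the
    summands can run in parallel with no synchronization until the final sum.
    """
    def summand(q):
        t = sum(lam * psi(xp + q * a) for lam, xp in zip(lambdas, x))
        return outer_fns[q](t)

    with ThreadPoolExecutor() as pool:
        return sum(pool.map(summand, range(len(outer_fns))))

# Illustrative placeholders (hypothetical, not the functions constructed in the paper):
Q = 5
psi = lambda t: t                   # stand-in for the dimension-reducing inner function
outer_fns = [np.sin] * (Q + 1)      # stand-in for the outer functions Phi_q
lambdas = [1.0, np.sqrt(2) - 1.0]   # stand-in constants for n = 2 inputs
print(superposition([0.3, 0.7], psi, outer_fns, lambdas, a=1.0 / (Q + 1)))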
DOI:10.1016/j.neunet.2021.07.006