Deep ReLU neural networks in high-dimensional approximation

Detailed bibliography
Published in: Neural Networks, Volume 142, pp. 619–635
Main authors: Dũng, Dinh; Nguyen, Van Kien
Medium: Journal Article
Language: English
Published: Elsevier Ltd, 1 October 2021
ISSN: 0893-6080, 1879-2782
Description
Summary: We study the computation complexity of deep ReLU (Rectified Linear Unit) neural networks for the approximation of functions from the Hölder–Zygmund space of mixed smoothness defined on the d-dimensional unit cube when the dimension d may be very large. The approximation error is measured in the norm of an isotropic Sobolev space. For every function f from the Hölder–Zygmund space of mixed smoothness, we explicitly construct a deep ReLU neural network whose output approximates f with a prescribed accuracy ɛ, and we prove tight dimension-dependent upper and lower bounds on the computation complexity of the approximation, characterized as the size and depth of this deep ReLU neural network, explicitly in d and ɛ. The proof of these results relies, in particular, on approximation by sparse-grid sampling recovery based on the Faber series.
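As a small illustration of the Faber-series connection mentioned in the abstract (a minimal sketch, not the paper's actual construction): the univariate Faber basis consists of dyadic dilates and translates of the hat function, and the hat function is exactly the output of a one-hidden-layer ReLU network with three neurons. The helper names relu, hat, and faber_hat below are hypothetical, chosen for this sketch.

```python
import numpy as np

def relu(x):
    # ReLU activation: max(x, 0), applied elementwise.
    return np.maximum(x, 0.0)

def hat(x):
    # The univariate hat function on [0, 1] (value 1 at x = 1/2, 0 outside),
    # realized exactly as a one-hidden-layer ReLU network with three neurons.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def faber_hat(x, k, s):
    # Dyadic dilate/translate hat(2^k * x - s): a univariate Faber basis
    # function, supported on [s / 2^k, (s + 1) / 2^k].
    return hat(2.0 ** k * x - s)

# Quick check: the ReLU form agrees with the piecewise-linear definition.
x = np.linspace(-0.5, 1.5, 9)
assert np.allclose(hat(x), np.clip(1.0 - 2.0 * np.abs(x - 0.5), 0.0, None))
```

Roughly speaking, in the multivariate mixed-smoothness setting of the paper the basis functions are tensor products of such hats; since products of network outputs are not exactly ReLU-representable, they must themselves be approximated by deeper subnetworks, which is one way depth enters the size and depth complexity bounds.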
DOI: 10.1016/j.neunet.2021.07.027