Deep ReLU neural networks in high-dimensional approximation
| Published in: | Neural Networks, Vol. 142, pp. 619–635 |
|---|---|
| Main Authors: | Dinh Dũng, Van Kien Nguyen |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Ltd, 01.10.2021 |
| ISSN: | 0893-6080, 1879-2782 |
| Summary: | We study the computation complexity of deep ReLU (Rectified Linear Unit) neural networks for the approximation of functions from the Hölder–Zygmund space of mixed smoothness defined on the d-dimensional unit cube when the dimension d may be very large. The approximation error is measured in the norm of an isotropic Sobolev space. For every function f from the Hölder–Zygmund space of mixed smoothness, we explicitly construct a deep ReLU neural network whose output approximates f with a prescribed accuracy ɛ, and we prove tight dimension-dependent upper and lower bounds on the computation complexity of this approximation, characterized as the size and depth of the network, explicitly in d and ɛ. In particular, the proof of these results relies on approximation by sparse-grid sampling recovery based on the Faber series. |
| DOI: | 10.1016/j.neunet.2021.07.027 |
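For orientation, the mechanism named in the summary, a Faber series turned into a ReLU network, can be made concrete in one dimension. The identities below are standard facts about the dyadic Faber (hat) system, not formulas quoted from the paper: every hat function is exactly a three-term combination of the ReLU activation σ(t) = max(t, 0), so a truncated Faber series is itself a ReLU network.

```latex
% Dyadic hat function on [a, b], with a = k 2^{-j}, b = (k+1) 2^{-j},
% midpoint m = (a+b)/2: an exact three-term ReLU combination.
\[
  v_{j,k}(x) = 2^{j+1}\bigl(\sigma(x-a) - 2\,\sigma(x-m) + \sigma(x-b)\bigr),
  \qquad \sigma(t) = \max(t,0).
\]
% Truncated Faber series of f on [0,1] (piecewise-linear interpolation
% on the dyadic grid of step 2^{-L}), with second-difference coefficients:
\[
  f(x) \approx f(0)(1-x) + f(1)\,x
  + \sum_{j=0}^{L-1}\sum_{k=0}^{2^{j}-1} \lambda_{j,k}(f)\,v_{j,k}(x),
  \qquad
  \lambda_{j,k}(f) = f(m) - \tfrac12\bigl(f(a)+f(b)\bigr).
\]
```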
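A minimal runnable sketch of the same idea follows. It is a 1D illustration only: the paper's construction operates on the d-dimensional cube with sparse-grid sampling and bounds network size and depth explicitly in d and ɛ, none of which is reproduced here. The function names (`hat_via_relu`, `faber_coefficient`, `faber_relu_interpolant`) are hypothetical, not from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat_via_relu(x, a, m, b):
    # Dyadic hat with peak 1 at the midpoint m of [a, b],
    # written exactly as a three-neuron ReLU combination.
    return (relu(x - a) - 2.0 * relu(x - m) + relu(x - b)) / (m - a)

def faber_coefficient(f, a, m, b):
    # Second-difference Faber coefficient lambda_{j,k}(f).
    return f(m) - 0.5 * (f(a) + f(b))

def faber_relu_interpolant(f, x, L):
    # Truncated 1D Faber series of f; every term is built from ReLU units.
    # The boundary part f(0)(1-x) + f(1)x is itself ReLU-expressible on [0, 1].
    out = f(0.0) * (1.0 - x) + f(1.0) * x
    for j in range(L):
        h = 2.0 ** (-j)
        for k in range(2 ** j):
            a, b = k * h, (k + 1) * h
            m = 0.5 * (a + b)
            out = out + faber_coefficient(f, a, m, b) * hat_via_relu(x, a, m, b)
    return out

f = lambda t: np.sin(np.pi * t)            # smooth test function on [0, 1]
x = np.linspace(0.0, 1.0, 1001)
for L in (2, 4, 6):
    err = np.max(np.abs(f(x) - faber_relu_interpolant(f, x, L)))
    print(f"L={L}: sup-norm error ~ {err:.1e}")
```

Since each hat costs three ReLU units, the level-L truncation is a ReLU network of size O(2^L) achieving sup-norm error O(2^{-2L}) for this smooth f; the dimension-dependent size and depth bounds for the sparse-grid analogue on the d-dimensional cube, with error measured in a Sobolev norm, are the subject of the paper.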