The universal approximation theorem for complex-valued neural networks

Published in: Applied and Computational Harmonic Analysis, Volume 64, pp. 33–61
Main author: Voigtlaender, Felix
Format: Journal Article
Language: English
Publisher: Elsevier Inc., 01.05.2023
ISSN:1063-5203, 1096-603X
Description
Abstract: We generalize the classical universal approximation theorem for neural networks to the case of complex-valued neural networks. Precisely, we consider feedforward networks with a complex activation function σ : ℂ → ℂ in which each neuron performs the operation ℂ^N → ℂ, z ↦ σ(b + w^T z), with weights w ∈ ℂ^N and a bias b ∈ ℂ. We completely characterize those activation functions σ for which the associated complex networks have the universal approximation property, meaning that they can uniformly approximate any continuous function on any compact subset of ℂ^d arbitrarily well. Unlike the classical case of real networks, the set of "good activation functions"—which give rise to networks with the universal approximation property—differs significantly depending on whether one considers deep networks or shallow networks: For deep networks with at least two hidden layers, the universal approximation property holds as long as σ is neither a polynomial, a holomorphic function, nor an antiholomorphic function. Shallow networks, on the other hand, are universal if and only if the real part or the imaginary part of σ is not a polyharmonic function.
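The neuron operation described in the abstract, z ↦ σ(b + w^T z) with complex weights and bias, can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the paper; the "split" ReLU activation below is one common choice that is neither a polynomial, holomorphic, nor antiholomorphic, so by the characterization above, deep networks built from it have the universal approximation property.

```python
import numpy as np

def sigma(z):
    # Illustrative activation (an assumption, not from the paper):
    # ReLU applied separately to the real and imaginary parts.
    # It is neither a polynomial, holomorphic, nor antiholomorphic.
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

def neuron(z, w, b):
    # z, w are complex vectors in C^N, b is a complex scalar.
    # Computes sigma(b + w^T z), a single complex output.
    return sigma(b + w.T @ z)

rng = np.random.default_rng(0)
N = 3
z = rng.normal(size=N) + 1j * rng.normal(size=N)  # input in C^N
w = rng.normal(size=N) + 1j * rng.normal(size=N)  # weights in C^N
b = 0.5 - 0.2j                                    # bias in C
out = neuron(z, w, b)
print(out)
```

A full network stacks such neurons in layers; the abstract's dichotomy concerns which choices of σ make that construction dense in C(K) for compact K ⊂ ℂ^d.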
DOI:10.1016/j.acha.2022.12.002