Stability of Neural Networks for Slightly Perturbed Training Data Sets


Bibliographic Details
Published in: Communications in Statistics - Theory and Methods, Volume 33, Issue 9, pp. 2259-2270
Main Authors: Berhane, Indrias; Srinivasan, C.
Format: Journal Article
Language: English
Published: Philadelphia, PA: Taylor & Francis Group, 31.12.2004
ISSN:0361-0926, 1532-415X
Description
Summary: In learning models of artificial neural networks, the randomness comes from the distribution of the training data. We show that individual observations do not excessively affect a neural network model, provided that it has an adequate number of nodes in the hidden layer, and prove that the empirical error of a neural network with p weights converges to the expected error when …, where m is the size of the perturbed training data set.
DOI:10.1081/STA-200026629
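The stability property described in the summary can be illustrated numerically. The following is a toy sketch only, not the authors' construction: it assumes a one-hidden-layer tanh network trained by full-batch gradient descent on synthetic data, perturbs a single training observation slightly, and checks that the fitted function barely changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise (an illustrative assumption,
# not the paper's experimental setup).
m = 200
X = rng.uniform(-3, 3, size=(m, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=m)

def train_mlp(X, y, hidden=20, lr=0.05, epochs=2000, seed=1):
    """Train a one-hidden-layer tanh network by full-batch gradient descent."""
    r = np.random.default_rng(seed)
    W1 = r.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = r.normal(scale=0.5, size=hidden)
    b2 = 0.0
    n = len(y)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)       # hidden-layer activations
        pred = H @ W2 + b2
        err = pred - y                 # residuals
        # Backpropagate mean-squared-error gradients.
        gW2 = H.T @ err / n
        gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H**2)
        gW1 = X.T @ dH / n
        gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2

# Perturb a single training observation slightly.
y_pert = y.copy()
y_pert[0] += 0.05

f = train_mlp(X, y)
f_pert = train_mlp(X, y_pert)

# Predictions on a fixed grid barely change: the fit is stable
# under a small perturbation of one training point.
grid = np.linspace(-3, 3, 100).reshape(-1, 1)
max_diff = np.max(np.abs(f(grid) - f_pert(grid)))
print(max_diff)
```

Because both networks start from the same initialization and see identical data except for one observation shifted by 0.05, the maximum pointwise difference between the two fitted functions stays small, which is the qualitative content of the stability result.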