Stability of Neural Networks for Slightly Perturbed Training Data Sets


Full description

Bibliographic Details
Published in: Communications in Statistics. Theory and Methods, Vol. 33, No. 9, pp. 2259–2270
Main authors: Berhane, Indrias; Srinivasan, C.
Format: Journal Article
Language: English
Published: Philadelphia, PA: Taylor & Francis Group, 31.12.2004
ISSN: 0361-0926, 1532-415X
Online access: Full text
Description
Abstract: In learning models of artificial neural networks, the randomness comes from the distribution of the training data. We show that individual observations do not excessively affect a neural network model, provided the model has an adequate number of nodes in the hidden layer, and we prove that the empirical error of a neural network with p weights converges to the expected error when …, where m is the size of the perturbed training data.
DOI:10.1081/STA-200026629
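The abstract's central claim is that the empirical error of a network converges to its expected error as the sample size m grows. The following NumPy sketch is purely illustrative and is not the paper's construction: the one-hidden-layer network, its weights, the target function, and all dimensions are assumptions chosen for the demo. It only shows that the empirical (sample-mean) squared error of a fixed network approaches its expected error, approximated here by a very large sample, as m increases.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def network(X, W, v):
    # One-hidden-layer network: hidden weights W, output weights v.
    return relu(X @ W) @ v

# Illustrative setup (all values are assumptions, not from the paper):
d, h = 3, 10                       # input dimension, hidden nodes
W = rng.normal(size=(d, h))        # fixed hidden-layer weights
v = rng.normal(size=h) / h         # fixed output weights

def target(X):
    # Hypothetical target function the network is compared against.
    return np.sin(X).sum(axis=1)

def empirical_error(m):
    # Empirical squared error on a fresh sample of size m.
    X = rng.normal(size=(m, d))
    return np.mean((network(X, W, v) - target(X)) ** 2)

# Approximate the expected error with a very large sample.
expected = empirical_error(1_000_000)

# The gap |empirical - expected| tends to shrink as m grows.
for m in (100, 10_000, 1_000_000):
    print(m, abs(empirical_error(m) - expected))
```

The gaps printed for successive m shrink on the order of 1/sqrt(m), which is the elementary law-of-large-numbers behavior underlying the kind of convergence statement the abstract describes.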