Convergence of a Batch Gradient Algorithm with Adaptive Momentum for Neural Networks

Published in: Neural Processing Letters, Vol. 34, No. 3, pp. 221-228
Main authors: Shao, Hongmei; Xu, Dongpo; Zheng, Gaofeng
Format: Journal Article
Language: English
Publication details: Boston: Springer US, 1 December 2011
ISSN: 1370-4621 (print); 1573-773X (electronic)
Description
Summary: In this paper, a batch gradient algorithm with adaptive momentum is considered, and a convergence theorem is presented for its use in training two-layer feedforward neural networks. Simple yet sufficient conditions are offered to guarantee both weak and strong convergence. Compared with existing general requirements, we do not restrict the error function to be quadratic or uniformly convex. A numerical example is supplied to illustrate the performance of the algorithm and support the theoretical findings.
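
The abstract names the algorithm but does not state its update rule. As a rough illustration only, the Python sketch below implements batch (full-dataset) gradient descent with an adaptively scaled momentum term for a two-layer network. The function name train_two_layer and the specific adaptation rule, which shrinks the momentum coefficient with the current gradient norm so that the momentum term vanishes near a stationary point, are assumptions drawn from the broader convergence literature, not the rule analyzed in the paper itself.

import numpy as np

def train_two_layer(X, y, hidden=8, eta=0.05, mu=0.5, epochs=2000, seed=0):
    # Batch gradient training of a two-layer (one hidden layer) network
    # minimizing the squared error E(W) = 0.5 * sum((output - y)^2).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))  # input-to-hidden weights
    W2 = rng.normal(scale=0.1, size=(hidden, 1))  # hidden-to-output weights
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    y = y.reshape(-1, 1)

    for _ in range(epochs):
        # Forward pass over the entire batch (batch gradient, not stochastic).
        H = 1.0 / (1.0 + np.exp(-(X @ W1)))   # sigmoid hidden activations
        err = H @ W2 - y                      # residual of the linear output

        # Backward pass: exact gradients of E with respect to W2 and W1.
        gW2 = H.T @ err
        gW1 = X.T @ (err @ W2.T * H * (1.0 - H))

        # Adaptive momentum coefficient (assumed rule, not from the paper):
        # scale by the current gradient norm so the momentum term dies out
        # as the iterates approach a stationary point.
        gnorm = np.sqrt(np.sum(gW1 ** 2) + np.sum(gW2 ** 2))
        alpha = mu * min(1.0, gnorm)

        # Momentum update: new step = gradient step + alpha * previous step.
        dW1 = -eta * gW1 + alpha * dW1
        dW2 = -eta * gW2 + alpha * dW2
        W1 += dW1
        W2 += dW2

    return W1, W2

# Hypothetical usage on toy data:
# rng = np.random.default_rng(1)
# X = rng.normal(size=(50, 3))
# y = np.sin(X.sum(axis=1))
# W1, W2 = train_two_layer(X, y)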
DOI: 10.1007/s11063-011-9193-x