Improving the convergence of the backpropagation algorithm using learning rate adaptation methods

Bibliographic Details
Published in: Neural Computation, Vol. 11, No. 7, p. 1769
Main Authors: Magoulas, G. D.; Vrahatis, M. N.; Androulakis, G. S.
Format: Journal Article
Language: English
Published: United States, 01.10.1999
ISSN: 0899-7667

Description
Summary: This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms against several popular training methods.

DOI: 10.1162/089976699300016223