Convergence of a Batch Gradient Algorithm with Adaptive Momentum for Neural Networks

In this paper, a batch gradient algorithm with adaptive momentum is considered, and a convergence theorem is presented for its use in training two-layer feedforward neural networks. Simple yet necessary sufficient conditions are offered to guarantee both weak and strong convergence. Compared wit...
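To make the training scheme concrete, the sketch below shows batch (full-dataset) gradient descent with a momentum term whose coefficient is adapted each epoch, applied to a two-layer sigmoid network. This is a minimal illustration only: the adaptation rule used here (scaling the momentum coefficient by the current gradient norm) and the names `train`, `eta`, `alpha`, and `hidden` are assumptions for the sketch, not the authors' exact update or their convergence conditions, which are stated in the full text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=8, eta=0.1, alpha=0.5, epochs=500, seed=0):
    """Batch gradient descent with an adaptively scaled momentum term
    for a two-layer (one hidden layer) sigmoid network (illustrative)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))   # input-to-hidden weights
    W2 = rng.normal(scale=0.1, size=(hidden, 1))   # hidden-to-output weights
    dW1_prev = np.zeros_like(W1)
    dW2_prev = np.zeros_like(W2)

    for _ in range(epochs):
        # forward pass over the whole batch
        H = sigmoid(X @ W1)
        out = sigmoid(H @ W2)

        # batch gradients of the mean squared error
        err = out - y.reshape(-1, 1)
        delta2 = err * out * (1.0 - out)
        gW2 = H.T @ delta2 / n
        delta1 = (delta2 @ W2.T) * H * (1.0 - H)
        gW1 = X.T @ delta1 / n

        # assumed adaptive rule: damp the momentum contribution as the
        # gradient norm shrinks, so the momentum term cannot dominate
        gnorm = np.sqrt(np.sum(gW1 ** 2) + np.sum(gW2 ** 2))
        mu = alpha * min(1.0, gnorm)

        dW1 = -eta * gW1 + mu * dW1_prev
        dW2 = -eta * gW2 + mu * dW2_prev
        W1 += dW1
        W2 += dW2
        dW1_prev, dW2_prev = dW1, dW2

    return W1, W2
```

Usage would look like `W1, W2 = train(X, y)` for an input matrix `X` of shape `(n, d)` and targets `y` in `[0, 1]`.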

Bibliographic Details
Published in: Neural Processing Letters, Vol. 34, No. 3, pp. 221-228
Main Authors: Shao, Hongmei, Xu, Dongpo, Zheng, Gaofeng
Format: Journal Article
Language: English
Published: Boston: Springer US, 01.12.2011
ISSN: 1370-4621, 1573-773X