Parallel nonlinear optimization techniques for training neural networks

In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed using self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a s...
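A minimal sketch of a serial self-scaling BFGS (SSQN-style) update applied to a tiny one-hidden-layer network is shown below, only to make the idea referenced in the abstract concrete. It assumes the standard Oren-Luenberger scaling factor; it is not the paper's parallel algorithm, and the toy data, network shape, and line-search settings are illustrative assumptions.

```python
# Sketch of a self-scaling BFGS (SSQN) loop on a toy regression network.
# All problem data and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a single-hidden-layer tanh network (no bias terms, for brevity).
X = rng.normal(size=(64, 2))
t = np.sin(X[:, :1]) + 0.5 * X[:, 1:]
n_params = 2 * 8 + 8 * 1                      # W1 (2x8) and W2 (8x1) flattened

def unpack(w):
    return w[:16].reshape(2, 8), w[16:].reshape(8, 1)

def loss_and_grad(w):
    W1, W2 = unpack(w)
    h = np.tanh(X @ W1)
    y = h @ W2
    err = y - t
    loss = 0.5 * np.mean(err ** 2)
    # Backpropagation through the two weight matrices.
    dY = err / len(X)
    gW2 = h.T @ dY
    gW1 = X.T @ ((dY @ W2.T) * (1.0 - h ** 2))
    return loss, np.concatenate([gW1.ravel(), gW2.ravel()])

def backtracking(w, f, g, d, alpha=1.0, c=1e-4, shrink=0.5):
    # Simple Armijo backtracking line search along direction d.
    while True:
        f_new, _ = loss_and_grad(w + alpha * d)
        if f_new <= f + c * alpha * (g @ d) or alpha < 1e-10:
            return alpha
        alpha *= shrink

w = 0.1 * rng.normal(size=n_params)
H = np.eye(n_params)                          # inverse Hessian approximation
f, g = loss_and_grad(w)

for k in range(200):
    d = -H @ g                                # quasi-Newton search direction
    alpha = backtracking(w, f, g, d)
    s = alpha * d
    f_new, g_new = loss_and_grad(w + s)
    y = g_new - g
    sy = s @ y
    if sy > 1e-12:                            # curvature condition holds
        gamma = sy / (y @ (H @ y))            # Oren-Luenberger self-scaling factor
        rho = 1.0 / sy
        V = np.eye(n_params) - rho * np.outer(s, y)
        # Self-scaling BFGS update: scale the carried-over H, then add the rank-one term.
        H = gamma * (V @ H @ V.T) + rho * np.outer(s, s)
    w, f, g = w + s, f_new, g_new

print(f"final loss: {f:.6f}")
```

The self-scaling factor gamma rescales the inverse Hessian approximation at every iteration, which tends to keep its eigenvalues well conditioned early in training; the parallelization discussed in the paper would distribute the search-direction and update computations, which this serial sketch does not attempt.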

Bibliographic Details
Published in: IEEE Transactions on Neural Networks, Vol. 14, No. 6, pp. 1460-1468
Main Authors: Phua, P.K.H.; Ming, Daohua
Format: Journal Article
Language: English
Published: IEEE, United States, 01.11.2003
ISSN: 1045-9227