Back-propagation learning algorithm and parallel computers: The CLEPSYDRA mapping scheme

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 31, No. 1, pp. 67-85
Main Author: d'Acierno, Antonio
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.03.2000
ISSN: 0925-2312, 1872-8286
Description
Summary: This paper deals with the parallel implementation of the back-propagation of errors learning algorithm. To obtain the partitioning of the neural network on the processor network, the author describes a new mapping scheme that uses a mixture of synapse parallelism, neuron parallelism and training-example parallelism (if any). The proposed mapping scheme makes it possible to describe the back-propagation algorithm as a collection of SIMD processes, so that both SIMD and MIMD machines can be used. The main feature of the resulting parallel algorithm is the absence of point-to-point communication; in fact, for each training pattern, only an all-to-one broadcast with an associative operator (combination) and a one-to-all broadcast are needed, both of which can be realized in log P time. A performance model is proposed and tested on a ring-connected MIMD parallel computer. Simulation results on MIMD and SIMD parallel machines are also shown and discussed.
DOI: 10.1016/S0925-2312(99)00151-4
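
The communication pattern described in the summary above (per-pattern gradient combination via an all-to-one operation with an associative operator, followed by a one-to-all broadcast of the result) can be illustrated with a minimal NumPy sketch. This is not the author's implementation: the worker count P, the single-layer network shape, and the helper name local_gradient below are illustrative assumptions.

```python
# Minimal sketch of the per-pattern communication pattern: each of P workers
# computes a partial gradient, the partials are combined with an associative
# operator (here: summation), and the updated weights are broadcast back to
# every worker.  Shapes, worker count and helper names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
P = 4                       # assumed number of processors
n_in, n_out = 8, 3          # assumed single-layer network, for brevity
W = rng.standard_normal((n_out, n_in))
eta = 0.1

def local_gradient(W, x, t):
    """Squared-error gradient for one worker's training pattern."""
    y = np.tanh(W @ x)                  # forward pass
    delta = (y - t) * (1.0 - y ** 2)    # output-layer error term
    return np.outer(delta, x)           # dE/dW for this pattern

# Each worker holds its own training pattern (training-example parallelism).
patterns = [(rng.standard_normal(n_in), rng.standard_normal(n_out))
            for _ in range(P)]

# All-to-one combination with an associative operator; on a real machine a
# tree reduction realizes this in log P steps, a plain sum is equivalent here.
grad = sum(local_gradient(W, x, t) for x, t in patterns)

# Weight update, then one-to-all broadcast so every worker holds the same W.
W -= eta * grad
workers_W = [W.copy() for _ in range(P)]
print(workers_W[0].shape, np.allclose(workers_W[0], workers_W[-1]))
```

Because the combination operator is associative, the reduction can be arranged as a binary tree over the P processors, which is what gives the log P cost cited in the summary and avoids any point-to-point exchanges between individual workers.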