Back-propagation learning algorithm and parallel computers: The CLEPSYDRA mapping scheme

Published in: Neurocomputing (Amsterdam), Vol. 31, No. 1, pp. 67–85
Author: d'Acierno, Antonio
Format: Journal Article
Language: English
Published: Elsevier B.V., 1 March 2000
Keywords: Back-propagation; Mapping scheme; MIMD parallel computers; SIMD parallel computers
ISSN: 0925-2312, 1872-8286
Online access: Full text
Abstract This paper deals with the parallel implementation of the back-propagation-of-errors learning algorithm. To partition the neural network over the processor network, the author describes a new mapping scheme that uses a mixture of synapse parallelism, neuron parallelism and, where available, training-example parallelism. The proposed mapping scheme makes it possible to describe the back-propagation algorithm as a collection of SIMD processes, so that both SIMD and MIMD machines can be used. The main feature of the resulting parallel algorithm is the absence of point-to-point communication; for each training pattern, only an all-to-one broadcast with an associative operator (combination) and a one-to-all broadcast are needed, both of which can be realized in log P time. A performance model is proposed and tested on a ring-connected MIMD parallel computer. Simulation results on MIMD and SIMD parallel machines are also shown and discussed.
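The abstract's central claim is that each training pattern requires only two collective operations: an all-to-one reduction under an associative combine, followed by a one-to-all broadcast, each completable in log P communication steps. The following is a minimal sketch, not the paper's code; the function names and the round-counting simulation are illustrative assumptions showing how tree-structured collectives over P processors finish in ceil(log2 P) rounds:

```python
def tree_reduce(values, combine):
    """All-to-one reduction: partial results held by P processors are
    pairwise combined; the number of active partials halves each round,
    so ceil(log2(P)) rounds suffice. `combine` must be associative."""
    vals = list(values)
    P = len(vals)
    stride = 1
    steps = 0
    while stride < P:
        # In one round, every processor at an even multiple of the stride
        # absorbs its neighbour's partial result (all pairs in parallel).
        for i in range(0, P - stride, 2 * stride):
            vals[i] = combine(vals[i], vals[i + stride])
        stride *= 2
        steps += 1
    return vals[0], steps

def tree_broadcast(value, P):
    """One-to-all broadcast: the set of informed processors doubles each
    round, again giving ceil(log2(P)) rounds."""
    informed = 1
    steps = 0
    while informed < P:
        informed = min(P, informed * 2)
        steps += 1
    return [value] * P, steps

# Example: P = 8 processors each hold a partial weight-gradient sum
# (the associative "combination" here is ordinary addition).
partials = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
total, reduce_steps = tree_reduce(partials, lambda a, b: a + b)
copies, bcast_steps = tree_broadcast(total, len(partials))
print(total, reduce_steps, bcast_steps)  # 36.0 3 3
```

With P = 8, both collectives take log2(8) = 3 rounds, matching the log P bound the abstract states; on a real machine these two steps correspond to MPI-style reduce and broadcast collectives rather than point-to-point messages.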
Author: d'Acierno, Antonio
Email: dacierno.a@irsip.na.cnr.it
Organization: I.R.S.I.P.-C.N.R., Via P. Castellino 111, 80131 Napoli, Italy
ContentType Journal Article
Copyright 2000 Elsevier Science B.V.
DOI 10.1016/S0925-2312(99)00151-4
Discipline Computer Science
EISSN 1872-8286
EndPage 85
ISICitedReferencesCount 5
ISSN 0925-2312
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords MIMD parallel computers
SIMD parallel computers
Mapping scheme
Back-propagation
Language English
License https://www.elsevier.com/tdm/userlicense/1.0
PageCount 19
PublicationDate 2000-03-01
PublicationTitle Neurocomputing (Amsterdam)
PublicationYear 2000
Publisher Elsevier B.V
References
D.E. Rumelhart, J.L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press, Cambridge, MA, 1986. doi:10.7551/mitpress/5236.001.0001
D.A. Pomerleau, G.L. Gusciora, D.S. Touretzky, H.T. Kung, Neural network simulation at warp speed: how we got 17 million connections per second, Proceedings of the ICNN, San Diego, CA, 1988, vol. 2, pp. 581–586. doi:10.1109/ICNN.1988.23925
Kumar, Shekhar, Amin, A scalable parallel formulation of the backpropagation algorithm for hypercubes and related architectures, IEEE Trans. Parallel Distrib. Systems 5 (10) (1994) 1073–1090. doi:10.1109/71.313123
Zhang, Mckenna, Mesirov, Waltz, The back-propagation algorithm on grid and hypercube architectures, Parallel Comput. 14 (1990) 317–327. doi:10.1016/0167-8191(90)90084-M
S.Y. Kung, J.N. Hwang, Parallel architectures for artificial neural nets, Proceedings of the ICNN, San Diego, CA, 1988, vol. 2, pp. 165–172. doi:10.1109/ICNN.1988.23922
Kung, Hwang, A unified systolic architecture for artificial neural networks, J. Parallel Distrib. Comput. 6 (1989) 358–387. doi:10.1016/0743-7315(89)90065-8
Nordstrom, Svensson, Using and designing massively parallel computers for artificial neural networks, J. Parallel Distrib. Comput. 14 (3) (1992) 260–285. doi:10.1016/0743-7315(92)90068-X
Rumelhart, Hinton, Williams, Learning representation by back-propagation of errors, Nature 323 (1986) 533–536. doi:10.1038/323533a0
Witbrock, Zagha, An implementation of back-propagation learning on GF11, a large SIMD parallel computer, Parallel Comput. 14 (1990) 329–346. doi:10.1016/0167-8191(90)90085-N
U. De Carlini, U. Villano, Transputers and Parallel Architectures, Ellis Horwood, Chichester, England, 1991.
A. Petrowski, L. Personnaz, G. Dreyfus, C. Girault, Parallel implementations of neural network simulations, in: Hypercube and Distributed Computers, Elsevier Science Publishers B.V., North-Holland, 1989, pp. 205–218.
Fujimoto, Fukuda, Akabane, Massively parallel architectures for large scale neural networks simulations, IEEE Trans. Neural Networks 3 (6) (1992) 876–888. doi:10.1109/72.165590
Ahmed, Priyalal, Algorithmic mapping of feedforward neural networks onto multiple bus systems, IEEE Trans. Parallel Distrib. Systems 8 (2) (1997) 130–136. doi:10.1109/71.577255
M. Besch, H.W. Pohl, Flexible data parallel training of neural networks using MIMD-computers, in: Third Euromicro Workshop on Parallel and Distributed Processing, Sanremo, Italy, January 1995. doi:10.1109/EMPDP.1995.389157
J. Bourrely, Parallelization of a neural learning algorithm on a hypercube, in: F. Andre, J.P. Verjus (Eds.), Hypercube and Distributed Computers, Elsevier Science Publishers B.V., North-Holland, 1989, pp. 219–229.
Kerckhoffs, Wedman, Frietman, Speeding up back-propagation training on a hypercube computer, Neurocomputing 4 (1992) 43–63. doi:10.1016/0925-2312(92)90043-O
Singer, Implementations of artificial neural networks on the connection machine, Parallel Comput. 14 (1990) 305–315. doi:10.1016/0167-8191(90)90083-L
G. Fox, S. Otto, A. Hey, Matrix algorithm on a hypercube: matrix multiplication, Parallel Comput. 4 (1987) 17–31. doi:10.1016/0167-8191(87)90060-3
Klapuri, Hamalainen, Saarinen, Kaski, Mapping artificial neural networks to a tree shape neurocomputer, Microprocessors Microsystems 20 (5) (1996) 267–276.
StartPage 67
Title Back-propagation learning algorithm and parallel computers: The CLEPSYDRA mapping scheme
URI https://dx.doi.org/10.1016/S0925-2312(99)00151-4
https://www.proquest.com/docview/27201492
Volume 31