Decomposition Techniques for Multilayer Perceptron Training

Detailed bibliography
Published in: IEEE Transactions on Neural Networks and Learning Systems, Volume 27, Issue 11, pp. 2146-2159
Main authors: Grippo, Luigi; Manno, Andrea; Sciandrone, Marco
Format: Journal Article
Language: English
Published: United States: IEEE, 1 November 2016
ISSN: 2162-237X, 2162-2388
Description
Summary: In this paper, we consider the learning problem of multilayer perceptrons (MLPs), formulated as the minimization of a smooth error function. As is well known, MLP training can be a difficult nonlinear, nonconvex optimization problem. Typical difficulties include extensive flat regions and steep-sided valleys in the error surface, as well as a possibly large number of training data and free network parameters. We define a wide class of batch learning algorithms for MLPs based on block decomposition techniques applied to the minimization of the error function. The learning problem is decomposed into a sequence of smaller, structured minimization problems in order to advantageously exploit the structure of the objective function. Theoretical convergence results are established, and a specific algorithm is constructed and evaluated through extensive numerical experiments. Comparisons with state-of-the-art learning algorithms show the effectiveness of the proposed techniques.
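To make the decomposition idea concrete, below is a minimal sketch of one possible two-block scheme for a one-hidden-layer MLP with squared error. It illustrates the general principle only and is not the specific algorithm proposed in the paper: with the hidden weights fixed and a linear output unit, the error is quadratic in the output weights, so that block can be minimized exactly by linear least squares, while the hidden-weight block is updated inexactly by a gradient step. The toy data, network size, step size, and all names are assumptions made for illustration.

import numpy as np

# Illustrative two-block decomposition for training a one-hidden-layer MLP
# with squared error. A sketch of the general idea only, NOT the algorithm
# from the paper; data, sizes, and the step size are arbitrary assumptions.
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 4))        # toy inputs
y = np.sin(X @ rng.normal(size=4))   # toy regression targets, shape (200,)

n_hidden = 20
W = rng.normal(scale=0.5, size=(4, n_hidden))  # hidden-layer weights (block 1)
v = rng.normal(scale=0.5, size=n_hidden)       # output-layer weights (block 2)

def hidden(X, W):
    return np.tanh(X @ W)  # hidden activations, shape (n_samples, n_hidden)

for it in range(100):
    H = hidden(X, W)

    # Block 2: with W fixed and a linear output unit, the error is quadratic
    # in v, so this subproblem is solved exactly by linear least squares.
    v, *_ = np.linalg.lstsq(H, y, rcond=None)

    # Block 1: with v fixed, update W inexactly with one gradient step
    # (one of many admissible inner solvers for this block).
    r = H @ v - y                                          # residuals
    grad_W = X.T @ (np.outer(r, v) * (1.0 - H ** 2)) / len(y)
    W -= 0.1 * grad_W

mse = np.mean((hidden(X, W) @ v - y) ** 2)
print(f"final mean squared error: {mse:.4f}")

The exact least-squares solve for the output block shows one way a decomposition can "advantageously exploit the structure of the objective function": that subproblem inherits the linearity of the output layer and admits a closed-form solution at every outer iteration, while only the smaller nonconvex block requires iterative updates.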
DOI: 10.1109/TNNLS.2015.2475621