A New Fast Training Algorithm for Autoencoder Neural Networks based on Extreme Learning Machine



Bibliographic Details
Published in: 2022 IEEE International Conference on Automation/XXV Congress of the Chilean Association of Automatic Control (ICA-ACCA), pp. 1-7
Main Authors: Vasquez-Coronel, Jose A., Mora, Marco, Vilches, Karina, Silva-Pavez, Fabian, Torres-Gonzalez, Italo, Barria-Valdevenito, Pedro
Format: Conference Proceeding
Language: English
Published: IEEE, 24.10.2022
Description
Summary: Autoencoders are neural networks characterized by having the same inputs and outputs. This kind of neural network aims to estimate a nonlinear transformation whose parameters represent the input patterns of the network. Extreme Learning Machine Autoencoders (ELM-AE) have random weights and biases in the hidden layer and compute the output parameters by solving an overdetermined linear system using the Moore-Penrose pseudoinverse. ELM-AE training is based on the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). In this paper, we propose to improve the convergence speed obtained by FISTA by using two algorithms of the shrinkage-thresholding class, namely Greedy FISTA and Linearly-Convergent FISTA. Six frequently used public machine learning datasets were considered: MNIST, NORB, CIFAR10, UMist, Caltech256, and Stanford Cars. Experiments were carried out varying the number of neurons in the hidden layer of the autoencoders, for all three algorithms, on all databases. The experimental results showed that Greedy FISTA and Linearly-Convergent FISTA converged faster, increasing the speed of ELM-Autoencoder training while maintaining a comparable generalization error across the three shrinkage-thresholding algorithms.
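The summary describes two ingredients: a random hidden layer whose output weights are obtained from a Moore-Penrose pseudoinverse solve, and FISTA-based training of those weights. A minimal NumPy sketch of both is below; the function names, the tanh activation, and the regularization constant are illustrative assumptions, not the paper's implementation, and the FISTA shown is the plain variant rather than the Greedy or Linearly-Convergent versions the paper evaluates.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_ae_pinv(X, n_hidden):
    """ELM-AE closed form: random hidden layer, output weights via pseudoinverse."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                # random biases (fixed)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ X                     # least-squares solve of H @ beta ≈ X
    return W, b, beta

def fista_l1(H, X, lam=1e-3, n_iter=200):
    """Plain FISTA for min_B 0.5*||H B - X||_F^2 + lam*||B||_1 (illustrative)."""
    L = np.linalg.norm(H, 2) ** 2                    # Lipschitz constant of the gradient
    B = np.zeros((H.shape[1], X.shape[1]))
    Y, t = B.copy(), 1.0
    for _ in range(n_iter):
        G = Y - (H.T @ (H @ Y - X)) / L              # gradient step on the smooth term
        B_new = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft-thresholding
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = B_new + ((t - 1.0) / t_new) * (B_new - B)  # momentum extrapolation
        B, t = B_new, t_new
    return B
```

The pseudoinverse route gives the unregularized least-squares solution in one shot, while the iterative shrinkage-thresholding route handles the sparse (l1-regularized) formulation; the accelerated FISTA variants the paper compares change only the momentum/restart schedule of the loop above.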
DOI:10.1109/ICA-ACCA56767.2022.10006276