A New Framework to Train Autoencoders Through Non-Smooth Regularization


Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Vol. 67, No. 7, pp. 1860-1874
Main Authors: Amini, Sajjad; Ghaemmaghami, Shahrokh
Format: Journal Article
Language: English
Published: IEEE, 1 April 2019
ISSN: 1053-587X, 1941-0476
Online Access: Full text
Description
Abstract: Deep structures consisting of many layers of nonlinearities have a high potential for expressing complex relations if properly initialized. Autoencoders play a complementary role in training a deep structure by initializing each layer in a greedy unsupervised manner. Due to the high capacity presented by autoencoders, these structures need to be regularized. While mathematical regularizers (based on weight decay, sparsity, etc.) and structural ones (by way of, e.g., denoising and dropout) have been well studied in the literature, few papers have addressed the problem of training autoencoders with non-smooth regularization. In this paper, we address this problem: we propose an efficient algorithm and mathematically prove that it converges, provided the regularizer is proximable. As one of the major applications of the proposed method, we focus on sparse autoencoders and show that the new training method leads to better disentangling of the factors of variation.
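The requirement that the regularizer be "proximable" means its proximal operator admits a cheap, typically closed-form, evaluation; for the l1 norm this operator is soft-thresholding. The paper's actual algorithm is not reproduced in this record, but the following minimal NumPy sketch of proximal gradient descent on a tiny tied-weight autoencoder (the toy model, variable names, and hyperparameters are all illustrative assumptions, not the authors' method) shows the general pattern of alternating a smooth gradient step on the reconstruction loss with the prox of the non-smooth regularizer:

import numpy as np

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1: shrinks entries toward zero,
    # setting those with magnitude below t exactly to zero.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 20))       # toy data: 256 samples, 20 features
W = 0.1 * rng.standard_normal((20, 10))  # encoder weights; decoder uses W.T (tied)
lr, lam = 0.05, 1e-3                     # step size and L1 strength (assumed values)

for _ in range(200):
    H = sigmoid(X @ W)                   # encode
    R = H @ W.T - X                      # reconstruction residual
    dH = (R @ W) * H * (1.0 - H)         # backprop through the sigmoid encoder
    grad = (X.T @ dH + R.T @ H) / X.shape[0]
    # Proximal gradient step: smooth move on 0.5 * ||H W^T - X||_F^2,
    # then the prox of the non-smooth regularizer lam * ||W||_1.
    W = soft_threshold(W - lr * grad, lr * lam)

print("fraction of exactly-zero weights:", np.mean(W == 0.0))

Because the prox is applied after every gradient step, many weights become exactly zero rather than merely small, which is the practical difference between non-smooth penalties of this kind and smooth ones such as weight decay.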
DOI: 10.1109/TSP.2019.2899294