A New Framework to Train Autoencoders Through Non-Smooth Regularization

Published in: IEEE Transactions on Signal Processing, vol. 67, no. 7, pp. 1860–1874
Main authors: Amini, Sajjad; Ghaemmaghami, Shahrokh
Format: Journal Article
Language: English
Published: IEEE, 1 April 2019
ISSN: 1053-587X, 1941-0476
Description
Summary: Deep structures consisting of many layers of nonlinearities have a high potential for expressing complex relations if properly initialized. Autoencoders play a complementary role in training a deep structure by initializing each layer in a greedy, unsupervised manner. Due to the high capacity presented by autoencoders, these structures need to be regularized. While mathematical regularizers (based on weight decay, sparsity, etc.) and structural ones (by way of, e.g., denoising and dropout) have been well studied in the literature, few works have addressed the problem of training autoencoders with non-smooth regularization, which is the focus of this paper. We propose an efficient algorithm and mathematically prove that it is convergent, provided the regularizer is proximable. As one of the major applications of the proposed method, we focus on sparse autoencoders and show that the new training method leads to better disentangling of the factors of variation.
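The abstract's key requirement is that the regularizer be proximable, i.e., have a cheap closed-form proximal operator. As a minimal illustration (not the authors' exact algorithm), the sketch below shows the classic case the sparse-autoencoder application suggests: an ℓ1 penalty, whose proximal operator is soft-thresholding, applied after a gradient step on the smooth reconstruction loss. The function names and step sizes here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1: shrinks each entry toward zero
    # and zeroes out entries with magnitude below t (promotes sparsity).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def proximal_gradient_step(w, grad, lr, lam):
    # One proximal-gradient iteration: take a gradient step on the smooth
    # part of the objective (e.g., reconstruction error), then apply the
    # prox of the non-smooth regularizer, here lam * ||w||_1.
    return soft_threshold(w - lr * grad, lr * lam)

# Illustrative use: a weight with zero gradient is still shrunk by the prox.
w = np.array([1.0, -0.05, 3.0])
w_next = proximal_gradient_step(w, grad=np.zeros(3), lr=0.1, lam=1.0)
```

Non-smooth penalties other than ℓ1 (e.g., group-sparsity norms) fit the same template as long as their proximal operator is available in closed form.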
DOI:10.1109/TSP.2019.2899294