A New Framework to Train Autoencoders Through Non-Smooth Regularization

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Vol. 67, No. 7, pp. 1860–1874
Main Authors: Amini, Sajjad; Ghaemmaghami, Shahrokh
Format: Journal Article
Language: English
Published: IEEE, 01.04.2019
ISSN: 1053-587X, 1941-0476
Description
Summary: Deep structures consisting of many layers of nonlinearities have a high potential for expressing complex relations if properly initialized. Autoencoders play a complementary role in training a deep structure by initializing each layer in a greedy, unsupervised manner. Due to the high capacity presented by autoencoders, these structures need to be regularized. While mathematical regularizers (based on weight decay, sparsity, etc.) and structural ones (by way of, e.g., denoising and dropout) have been well studied in the literature, few papers have addressed the problem of training autoencoders with non-smooth regularization. In this paper, we address this problem: we propose an efficient algorithm and mathematically prove its convergence, provided the regularizer is proximable. As a major application of the proposed method, we focus on sparse autoencoders and show that the new training method leads to better disentangling of the factors of variation.
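
For intuition, the convergence condition in the abstract requires the regularizer to be proximable, i.e., to admit an efficiently computable proximal operator. The sketch below is an illustration of that idea, not the authors' algorithm from the paper: for the l1 regularizer used in sparse models, the proximal operator is soft thresholding, and one forward-backward step updates the weights by a gradient step on the smooth reconstruction loss followed by the prox of the non-smooth term. All names here (soft_threshold, prox_gradient_step, W, grad, step, lam) are hypothetical.

    # Illustrative sketch only (not the paper's algorithm): one proximal
    # gradient step on a weight matrix W with a non-smooth l1 regularizer,
    # whose proximal operator is the soft-thresholding function.
    import numpy as np

    def soft_threshold(x, t):
        """Proximal operator of t * ||x||_1 (soft thresholding)."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def prox_gradient_step(W, grad, step, lam):
        """Forward-backward step: gradient descent on the smooth loss,
        then the prox of the non-smooth regularizer lam * ||W||_1."""
        return soft_threshold(W - step * grad, step * lam)

    # Example: one step shrinks many weights exactly to zero (sparsity).
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 4))
    grad = rng.standard_normal((4, 4))  # stand-in for dL/dW
    W_new = prox_gradient_step(W, grad, step=0.1, lam=0.5)
    print(np.count_nonzero(W_new), "nonzeros out of", W_new.size)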
DOI: 10.1109/TSP.2019.2899294