Stacked Wasserstein Autoencoder



Detailed bibliography
Published in: Neurocomputing (Amsterdam), Volume 363, pp. 195-204
Main authors: Xu, Wenju; Keshmiri, Shawn; Wang, Guanghui
Format: Journal Article
Language: English
Published: Elsevier B.V., 21 October 2019
ISSN:0925-2312, 1872-8286
Description
Summary:
• A novel stacked Wasserstein autoencoder (SWAE) is proposed to approximate high-dimensional data distributions.
• The transport cost is minimized at two stages to approximate the data space while learning the encoded latent distribution.
• Experiments show that the SWAE model learns semantically meaningful latent variables of the observed data.
• The proposed SWAE model enables interpolation of the latent representation and semantic manipulation.
Approximating distributions over complicated manifolds, such as natural images, is conceptually attractive. The deep latent variable model, trained using variational autoencoders and generative adversarial networks, is now a key technique for representation learning. However, it is difficult to unify these two models for exact latent-variable inference and to parallelize both reconstruction and sampling, partly because of the regularization imposed on the latent variables to match a simple explicit prior distribution. These approaches tend to oversimplify and can only characterize a few modes of the true distribution. Building on the recently proposed Wasserstein autoencoder (WAE), which casts a new regularization as an optimal transport problem, the paper proposes a stacked Wasserstein autoencoder (SWAE) to learn a deep latent variable model. SWAE is a hierarchical model that relaxes the optimal transport constraints at two stages. At the first stage, the SWAE flexibly learns a representation distribution, i.e., the encoded prior; at the second stage, the encoded representation distribution is approximated with a latent variable model under a regularization that encourages the latent distribution to match the explicit prior. This model allows us to generate natural textural outputs as well as perform manipulations in the latent space to induce changes in the output space. Both quantitative and qualitative results demonstrate the superior performance of SWAE compared with state-of-the-art approaches in terms of faithful reconstruction and generation quality.
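The following is a minimal sketch of the two-stage idea described in the summary, not the authors' implementation: stage one learns a flexible representation of the data (the encoded prior), and stage two models that representation with a latent code pushed toward a simple explicit prior. A WAE-MMD-style penalty is assumed for the prior-matching term; the network sizes, kernel bandwidth, and loss weights are illustrative guesses.

```python
# Hypothetical two-stage stacked autoencoder sketch (PyTorch), illustrating
# the SWAE idea under the stated assumptions; not the paper's actual code.
import torch
import torch.nn as nn

def rbf_mmd(x, y, sigma=1.0):
    """Simple MMD estimate with an RBF kernel (illustrative penalty choice)."""
    def k(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class Stage(nn.Module):
    """One encoder/decoder pair; the stacked model uses two of these."""
    def __init__(self, in_dim, code_dim, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))
    def forward(self, x):
        c = self.enc(x)
        return c, self.dec(c)

# Stage 1 approximates the data space; stage 2 approximates the encoded
# representation while matching the latent code to an explicit prior.
x_dim, rep_dim, z_dim, lam = 784, 128, 32, 10.0   # illustrative sizes/weight
stage1, stage2 = Stage(x_dim, rep_dim), Stage(rep_dim, z_dim)
opt = torch.optim.Adam(list(stage1.parameters()) + list(stage2.parameters()),
                       lr=1e-3)

x = torch.rand(64, x_dim)                 # stand-in minibatch of flattened images
rep, x_rec = stage1(x)                    # first transport: reconstruct the data
z, rep_rec = stage2(rep.detach())         # second transport: reconstruct the codes
prior = torch.randn_like(z)               # samples from the explicit prior
loss = ((x_rec - x) ** 2).mean() \
     + ((rep_rec - rep.detach()) ** 2).mean() \
     + lam * rbf_mmd(z, prior)            # encourage the latent to match the prior
opt.zero_grad(); loss.backward(); opt.step()
```

In this reading, the prior-matching penalty is applied only at the second, lower-dimensional stage, which is what lets the first stage remain a flexible, unconstrained representation of the data.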
DOI:10.1016/j.neucom.2019.06.096