Information preservation with Wasserstein autoencoders: generation consistency and adversarial robustness

Bibliographic Details
Published in: Statistics and Computing, Vol. 35, No. 5
Main Authors: Chakrabarty, Anish; Basu, Arkaprabha; Das, Swagatam
Format: Journal Article
Language: English
Published: Dordrecht: Springer Nature B.V., 01.10.2025
ISSN: 0960-3174, 1573-1375
Description
Summary: Amongst the numerous variants the Variational Autoencoder (VAE) has inspired, the Wasserstein Autoencoder (WAE) stands out for its heightened generative quality and intriguing theoretical properties. WAEs consist of an encoding and a decoding network that together form a bottleneck, with the prime objective of generating new samples resembling those the model was trained on. In the process, they aim to achieve a target latent representation of the encoded data. Our work offers a comprehensive theoretical understanding of the machinery behind WAEs. From a statistical viewpoint, we pose the problem as concurrent density estimation tasks based on neural network-induced transformations. This allows us to establish deterministic upper bounds on the realized errors WAEs commit, supported by simulations on real and synthetic data sets. We also analyze the propagation of these stochastic errors in the presence of adversaries. As a result, both the large-sample properties of the reconstructed distribution and the resilience of WAE models are explored.
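
To make the abstract's description concrete: a WAE trains the encoder-decoder pair on a reconstruction cost plus a penalty that pushes the distribution of encoded codes toward the target latent law, and the MMD variant (WAE-MMD) is one standard instantiation of that penalty. The following is a minimal PyTorch sketch under assumptions of our own; the network sizes, the inverse multiquadratic (IMQ) kernel scale, the penalty weight lam = 10.0, and the standard Gaussian prior are illustrative choices, not settings taken from this paper.

    import torch
    import torch.nn as nn

    # Hypothetical dimensions, for illustration only.
    input_dim, hidden_dim, latent_dim = 784, 256, 8

    encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU(),
                            nn.Linear(hidden_dim, latent_dim))
    decoder = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                            nn.Linear(hidden_dim, input_dim))

    def imq_kernel(a, b, c=2.0 * latent_dim):
        # Inverse multiquadratic kernel k(a, b) = c / (c + ||a - b||^2);
        # the scale c is an assumed value.
        return c / (c + torch.cdist(a, b) ** 2)

    def mmd(z_q, z_p):
        # Unbiased estimate of MMD^2 between encoded codes z_q (the
        # aggregate encoded distribution) and prior draws z_p.
        n = z_q.size(0)
        off_diag = 1.0 - torch.eye(n, device=z_q.device)
        k_qq = (imq_kernel(z_q, z_q) * off_diag).sum() / (n * (n - 1))
        k_pp = (imq_kernel(z_p, z_p) * off_diag).sum() / (n * (n - 1))
        k_qp = imq_kernel(z_q, z_p).mean()
        return k_qq + k_pp - 2.0 * k_qp

    # One illustrative optimization step on a random stand-in batch.
    opt = torch.optim.Adam(list(encoder.parameters()) +
                           list(decoder.parameters()), lr=1e-3)
    x = torch.rand(64, input_dim)      # placeholder data batch
    z = encoder(x)                     # encoded codes
    x_hat = decoder(z)                 # reconstructions
    z_prior = torch.randn_like(z)      # target latent law: N(0, I), assumed
    lam = 10.0                         # penalty weight, an assumed value
    loss = ((x_hat - x) ** 2).mean() + lam * mmd(z, z_prior)
    opt.zero_grad()
    loss.backward()
    opt.step()

The IMQ kernel is a common choice here because its heavy tails tend to give a more stable MMD estimate than a Gaussian kernel in higher-dimensional latent spaces; any characteristic kernel would serve the same role in this sketch.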
DOI: 10.1007/s11222-025-10657-z