Information preservation with Wasserstein autoencoders: generation consistency and adversarial robustness

Bibliographic Details
Published in: Statistics and Computing, Volume 35, Issue 5
Main authors: Chakrabarty, Anish; Basu, Arkaprabha; Das, Swagatam
Format: Journal Article
Language: English
Published: Dordrecht: Springer Nature B.V., 01.10.2025
ISSN: 0960-3174, 1573-1375
Description
Summary: Amongst the numerous variants the Variational Autoencoder (VAE) has inspired, the Wasserstein Autoencoder (WAE) stands out for its heightened generative quality and intriguing theoretical properties. WAEs consist of an encoding and a decoding network, forming a bottleneck, with the prime objective of generating new samples resembling those they were trained on. In the process, they aim to achieve a target latent representation of the encoded data. Our work offers a comprehensive theoretical understanding of the machinery behind WAEs. From a statistical viewpoint, we pose the problem as concurrent density estimation tasks based on neural network-induced transformations. This allows us to establish deterministic upper bounds on the realized errors WAEs commit, supported by simulations on real and synthetic data sets. We also analyze the propagation of these stochastic errors in the presence of adversaries. As a result, we explore both the large-sample properties of the reconstructed distribution and the resilience of WAE models.
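
The abstract describes WAEs as an encoder-decoder bottleneck trained to reconstruct its inputs while pushing the encoded data toward a target latent distribution. As a rough illustration of that objective, and not the authors' implementation, the following Python sketch shows the MMD-penalized WAE variant of Tolstikhin et al. (2018); the network sizes, input dimension, kernel, and penalty weight lambda_mmd are all assumptions chosen for brevity.

    import torch
    import torch.nn as nn

    latent_dim = 8  # assumed bottleneck dimension

    # Encoder/decoder pair forming the bottleneck the abstract describes;
    # input size 784 (e.g. flattened 28x28 images) is an assumption.
    encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
    decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784))

    def imq_kernel(x, y, c=2.0 * latent_dim):
        # Inverse multiquadric kernel, a common choice for WAE-MMD.
        return c / (c + torch.cdist(x, y) ** 2)

    def mmd(z_q, z_p):
        # MMD^2 estimate between encoded latents z_q and prior draws z_p,
        # measuring how far the encoded data sit from the target latent law.
        n = z_q.size(0)
        off_diag = 1.0 - torch.eye(n, device=z_q.device)
        k_qq = (imq_kernel(z_q, z_q) * off_diag).sum() / (n * (n - 1))
        k_pp = (imq_kernel(z_p, z_p) * off_diag).sum() / (n * (n - 1))
        k_qp = imq_kernel(z_q, z_p).mean()
        return k_qq + k_pp - 2.0 * k_qp

    lambda_mmd = 10.0  # assumed weight on the latent-matching penalty
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
    )

    # One illustrative training step on a stand-in batch.
    x = torch.rand(64, 784)        # placeholder data; a real loader goes here
    z = encoder(x)                 # deterministic encoding
    x_hat = decoder(z)             # reconstruction
    z_prior = torch.randn_like(z)  # samples from the target latent prior N(0, I)

    loss = ((x - x_hat) ** 2).mean() + lambda_mmd * mmd(z, z_prior)
    opt.zero_grad()
    loss.backward()
    opt.step()

The paper's theoretical results concern the errors such a procedure incurs, both in reconstructing the data distribution and in matching the latent target, including their propagation under adversarial perturbations.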
DOI: 10.1007/s11222-025-10657-z