Variations in Variational Autoencoders - A Comparative Evaluation

Detailed bibliography
Published in: IEEE Access, Volume 8, p. 1
Main authors: Wei, Ruoqi; Garcia, Cesar; El-Sayed, Ahmed; Peterson, Viyaleta; Mahmood, Ausif
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020
ISSN: 2169-3536
Description
Summary: Variational Auto-Encoders (VAEs) are deep latent space generative models that have been immensely successful in many applications, such as image generation, image captioning, protein design, mutation prediction, and language modeling, among others. The fundamental idea in VAEs is to learn the distribution of data in such a way that new, meaningful data can be generated from the encoded distribution. This concept has led to tremendous research and variations in the design of VAEs in the last few years, creating a field of its own, referred to as unsupervised representation learning. This paper provides a much-needed comprehensive evaluation of the variations of VAEs based on their end goals and resulting architectures. It further provides intuition as well as mathematical formulation and quantitative results for each popular variation, presents a concise comparison of these variations, and concludes with challenges and future opportunities for research in VAEs.
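
For orientation on the objective the abstract alludes to (learning a data distribution from which new samples can be drawn), the standard VAE training criterion is the evidence lower bound (ELBO); this is the textbook formulation, given here as background rather than quoted from the record itself. With encoder $q_\phi(z \mid x)$, decoder $p_\theta(x \mid z)$, and prior $p(z)$:

$$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)$$

Maximizing the right-hand side trades off reconstruction quality against keeping the encoded distribution close to the prior, which is what makes sampling new data from $p(z)$ meaningful.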
DOI: 10.1109/ACCESS.2020.3018151