Quaternion Vector Quantized Variational Autoencoder

Bibliographic Details
Published in: IEEE Signal Processing Letters, Vol. 32, pp. 151-155
Main Authors: Luo, Hui; Liu, Xin; Sun, Jian; Zhang, Yang
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2025
ISSN: 1070-9908, 1558-2361
Description
Summary: Vector quantized variational autoencoders, as variants of variational autoencoders, effectively capture discrete representations by quantizing continuous latent spaces and are widely used in generative tasks. However, these models still face limitations in handling complex image reconstruction, particularly in preserving high-quality details. Moreover, quaternion neural networks have shown unique advantages in handling multi-dimensional data, suggesting that integrating quaternion approaches could improve the performance of these autoencoders. To this end, we propose QVQ-VAE, a lightweight network in the quaternion domain that introduces a quaternion-based quantization layer and training strategy to improve reconstruction precision. By fully leveraging quaternion operations, QVQ-VAE reduces the number of model parameters, thereby lowering computational resource demands. Extensive evaluations on face and general object reconstruction tasks show that QVQ-VAE consistently outperforms existing methods while using significantly fewer parameters.
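
The abstract does not spell out the quantization rule, so the following is a minimal PyTorch sketch of a quaternion-structured quantization layer under the standard VQ-VAE formulation, not the paper's actual layer. The class name QuaternionVQ and the parameters num_codes, quat_dim, and beta are hypothetical; the quaternion structure enters only through grouping latent components into 4-tuples, and the Hamilton-product operations a quaternion network would apply in its encoder and decoder are not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QuaternionVQ(nn.Module):
        """Vector quantization over latents grouped as quaternions (4 reals each)."""

        def __init__(self, num_codes=512, quat_dim=16, beta=0.25):
            super().__init__()
            # Each codebook entry holds quat_dim quaternions = 4 * quat_dim reals.
            self.codebook = nn.Embedding(num_codes, 4 * quat_dim)
            self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
            self.beta = beta  # commitment-loss weight, as in the original VQ-VAE

        def forward(self, z):
            # z: (batch, 4 * quat_dim) continuous encoder output.
            # Euclidean distance over the flattened components equals the
            # summed squared quaternion norms of the per-quaternion differences.
            distances = torch.cdist(z, self.codebook.weight)  # (batch, num_codes)
            indices = distances.argmin(dim=1)                 # nearest code index
            z_q = self.codebook(indices)                      # quantized latents
            # Codebook loss pulls codes toward encoder outputs; the commitment
            # loss keeps encoder outputs near their assigned codes.
            loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
            # Straight-through estimator: gradients flow from z_q back to z.
            z_q = z + (z_q - z).detach()
            return z_q, indices, loss

The parameter savings the abstract reports are typical of quaternion networks in general: Hamilton-product layers share one set of weights across the four quaternion components, so a quaternion dense or convolutional layer needs roughly a quarter of the parameters of its real-valued counterpart.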
DOI: 10.1109/LSP.2024.3504374