Learning Product Codebooks Using Vector-Quantized Autoencoders for Image Retrieval


Bibliographic Details
Published in: 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 1-5
Main Authors: Wu, Hanwei; Flierl, Markus
Format: Conference paper
Language: English
Published: IEEE, 01.11.2019
Description
Summary: Vector-Quantized Variational Autoencoders (VQ-VAE) [1] provide an unsupervised model for learning discrete representations by combining vector quantization and autoencoders. In this paper, we study the use of VQ-VAE for representation learning in downstream tasks, such as image retrieval. First, we describe the VQ-VAE in the context of an information-theoretic framework. Then, we show that the regularization effect on the learned representation is determined by the size of the embedded codebook before training. As a result, we introduce a hyperparameter to balance the strength of the vector quantizer and the reconstruction error. By tuning the hyperparameter, the embedded bottleneck quantizer is used as a regularizer that forces the output of the encoder to share a constrained coding space. With that, the learned latent features better preserve the similarity relations of the data space. Finally, we incorporate the product quantizer into the bottleneck stage of the VQ-VAE and use it as an end-to-end unsupervised learning model for image retrieval tasks. The product quantizer has the advantage of generating large and structured codebooks. Fast retrieval can be achieved by using lookup tables that store the distance between any pair of sub-codewords. State-of-the-art retrieval results are achieved by the proposed codebooks.
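The lookup-table retrieval described in the summary can be illustrated with a minimal product-quantization sketch. This is not the paper's learned model: the sub-codebooks here are random toy data (in the paper they are trained end-to-end inside the VQ-VAE bottleneck), and all sizes (`M`, `K`, `d`) are illustrative assumptions. It only shows the mechanism: each database vector is stored as `M` small sub-codeword indices, and query-to-item distances are sums of `M` precomputed table entries instead of full-vector arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: M sub-quantizers, K sub-codewords each, d-dim vectors.
M, K, d = 4, 16, 32
sub_d = d // M

# Toy sub-codebooks; in the paper these are learned inside the VQ-VAE bottleneck.
codebooks = rng.normal(size=(M, K, sub_d))

def encode(x):
    """Map a d-dim vector to M sub-codeword indices (nearest per sub-space)."""
    codes = np.empty(M, dtype=np.int64)
    for m in range(M):
        sub = x[m * sub_d:(m + 1) * sub_d]
        codes[m] = np.argmin(np.sum((codebooks[m] - sub) ** 2, axis=1))
    return codes

def distance_tables(query):
    """For each sub-space, squared distance from the query to every sub-codeword."""
    tables = np.empty((M, K))
    for m in range(M):
        sub = query[m * sub_d:(m + 1) * sub_d]
        tables[m] = np.sum((codebooks[m] - sub) ** 2, axis=1)
    return tables

# Database items are stored only as their M sub-codeword indices.
database = rng.normal(size=(1000, d))
codes = np.stack([encode(x) for x in database])

# Retrieval: distance to each item is a sum of M table lookups.
query = rng.normal(size=d)
tables = distance_tables(query)
approx = tables[np.arange(M), codes].sum(axis=1)  # shape (1000,)
nearest = int(np.argmin(approx))
```

The summed lookups equal the exact squared distance between the query and each item's quantized reconstruction, which is why the structured product codebook (K^M effective codewords) can be searched with only M small tables per query.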
DOI:10.1109/GlobalSIP45357.2019.8969272