Multiscale Augmented Normalizing Flows for Image Compression

Bibliographic Details
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (1998), pp. 3030 - 3034
Main Authors: Windsheimer, Marc; Brand, Fabian; Kaup, Andre
Format: Conference Proceeding
Language: English
Published: IEEE, 14.04.2024
ISSN:2379-190X
Online Access: Full Text
Description
Summary: Most learning-based image compression methods lack efficiency for high image quality due to their non-invertible design. The decoding function of the frequently applied compressive autoencoder architecture is only an approximated inverse of the encoding transform. This issue can be resolved by using invertible latent variable models, which allow a perfect reconstruction if no quantization is performed. Furthermore, many traditional image and video coders apply dynamic block partitioning to vary the compression of certain image regions depending on their content. Inspired by this approach, hierarchical latent spaces have been applied to learning-based compression networks. In this paper, we present a novel concept, which adapts the hierarchical latent space for augmented normalizing flows, an invertible latent variable model. Our best-performing model achieves significant rate savings of more than 7% over comparable single-scale models.
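The abstract's key claim is that invertible latent variable models permit perfect reconstruction when no quantization is applied. The following is a minimal illustrative sketch of why that holds (it is not the authors' multiscale ANF model): an additive coupling step, the basic building block of normalizing flows, has an exact closed-form inverse regardless of the conditioning network used. All function names here are illustrative.

```python
import numpy as np

def split(x):
    # Split the vector into two halves; the flow transforms only one half.
    half = x.shape[-1] // 2
    return x[..., :half], x[..., half:]

def coupling_forward(x, shift_fn):
    # Transform xb conditioned on xa; xa passes through unchanged.
    xa, xb = split(x)
    yb = xb + shift_fn(xa)
    return np.concatenate([xa, yb], axis=-1)

def coupling_inverse(y, shift_fn):
    # Exact inverse: ya equals xa, so the same shift can be subtracted.
    ya, yb = split(y)
    xb = yb - shift_fn(ya)
    return np.concatenate([ya, xb], axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
shift_fn = lambda a: np.tanh(a)  # stand-in for a learned network
y = coupling_forward(x, shift_fn)
x_rec = coupling_inverse(y, shift_fn)
print(np.allclose(x, x_rec))  # reconstruction is exact up to floating point
```

An autoencoder decoder only approximates the encoder's inverse, so reconstruction error remains even at high rates; in a flow, the lossy step is confined to quantization of the latent, which is the property the paper's multiscale design builds on.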
DOI:10.1109/ICASSP48485.2024.10446147