Lossless Coding of Multimodal Image Pairs Based on Image-To-Image Translation


Bibliographic Details
Published in: European Workshop on Visual Information Processing, pp. 1-6
Main authors: Parracho, Joao O., Thomaz, Lucas A., Tavora, Luis M. N., Assuncao, Pedro A. A., Faria, Sergio M. M.
Format: Conference paper
Language: English
Published: IEEE, 11 Sep 2022
ISSN: 2471-8963
Online access: Full text
Description
Abstract: Multimodal image coding often uses standard encoding algorithms, which do not exploit multimodality characteristics. This paper proposes a new cross-modality prediction approach for lossless coding of multimodal images, based on a Generative Adversarial Network (GAN). The GAN is added to the prediction loop of the Versatile Video Coding (VVC) lossless encoder to perform cross-modality translation of an image to its counterpart modality. The synthesized image is then used as a reference for inter prediction, followed by further optimization that includes rescaling and brightness adjustment. A publicly available dataset of Positron Emission Tomography (PET) and Computed Tomography (CT) image pairs is used to assess the performance of the proposed multimodal lossless image coding framework. In comparison with single-modality coding using the VVC standard, average coding gains of 6.83% are achieved for the inter-coded PET images.
DOI:10.1109/EUVIP53989.2022.9922726
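
To make the prediction idea in the abstract concrete, below is a minimal, hypothetical Python/NumPy sketch. It is not the authors' implementation: the GAN generator callable, the least-squares scale/offset fit standing in for the rescaling and brightness adjustment, and the assumed sample range are all illustrative assumptions.

```python
# Conceptual sketch only (not the paper's code): shows how a GAN-synthesized
# cross-modality image could serve as a prediction reference for lossless coding.
# The generator callable, the global scale/offset fit, and max_val are assumptions.
import numpy as np

def adjust_reference(synth, target, max_val=255):
    """Fit a global scale a and brightness offset b so that a*synth + b
    approximates the target in the least-squares sense (a simple stand-in
    for the rescaling and brightness adjustment described in the abstract)."""
    a, b = np.polyfit(synth.ravel().astype(np.float64),
                      target.ravel().astype(np.float64), deg=1)
    return np.clip(a * synth + b, 0, max_val)        # max_val: assumed bit depth

def cross_modality_residual(ct_image, pet_image, generator, max_val=255):
    """Translate CT -> synthetic PET with a trained image-to-image GAN
    generator (assumed callable), adjust it, and return the residual that
    an inter-style lossless coder would entropy-code instead of raw samples."""
    synth_pet = generator(ct_image)                   # hypothetical GAN translation
    reference = adjust_reference(synth_pet, pet_image, max_val)
    residual = pet_image.astype(np.int32) - np.rint(reference).astype(np.int32)
    return residual                                   # smaller residuals -> fewer bits
```

In the actual framework the adjusted synthetic image feeds VVC's inter-prediction loop rather than a simple per-sample subtraction, so this sketch only conveys the intuition behind using the counterpart modality as a reference.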