Lossless Coding of Multimodal Image Pairs Based on Image-To-Image Translation

Bibliographic Details
Published in: European Workshop on Visual Information Processing, pp. 1-6
Main Authors: Parracho, João O., Thomaz, Lucas A., Távora, Luís M. N., Assunção, Pedro A. A., Faria, Sérgio M. M.
Format: Conference Proceeding
Language: English
Published: IEEE, 11.09.2022
Description
Summary: Multimodal image coding often uses standard encoding algorithms, which do not exploit multimodality characteristics. This paper proposes a new cross-modality prediction approach for lossless coding of multimodal images, based on a Generative Adversarial Network (GAN). The GAN is added to the prediction loop of the Versatile Video Coding (VVC) lossless encoder to perform cross-modality translation of an image into its counterpart modality. The synthesized image is then used as a reference for inter prediction, followed by further optimization that includes rescaling and brightness adjustment. A publicly available dataset of Positron Emission Tomography (PET) and Computed Tomography (CT) image pairs is used to assess the performance of the proposed multimodal lossless image coding framework. Compared with single-modality coding using the VVC standard, average coding gains of 6.83% are achieved for the inter-coded PET images.
ISSN: 2471-8963
DOI: 10.1109/EUVIP53989.2022.9922726
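
To make the described prediction pipeline concrete, below is a minimal PyTorch sketch of the reference-generation step outlined in the summary: a GAN generator translates a CT image into a synthetic PET image, which is then rescaled and brightness-adjusted before being handed to the encoder as an inter-prediction reference. The TinyTranslator architecture, the build_reference helper, and the target_mean heuristic are illustrative assumptions rather than the authors' actual network or adjustment rules, and the integration with the VVC lossless encoder's prediction loop is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTranslator(nn.Module):
    # Stand-in encoder-decoder generator; pix2pix-style translators are a common
    # choice for this kind of cross-modality mapping, but the paper's exact
    # architecture is not specified in the abstract.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def build_reference(ct, generator, target_size, target_mean):
    # Translate CT -> synthetic PET, rescale to the coded picture size, and shift
    # the brightness so the mean matches the expected PET level; the resulting
    # picture would then be supplied to the encoder as an inter-prediction reference.
    with torch.no_grad():
        synth = generator(ct)                                    # cross-modality translation
        synth = F.interpolate(synth, size=target_size,
                              mode="bilinear", align_corners=False)  # rescaling step
        synth = synth + (target_mean - synth.mean())             # brightness adjustment
        return synth.clamp(0.0, 1.0)                             # keep a valid pixel range

if __name__ == "__main__":
    gen = TinyTranslator().eval()
    ct_slice = torch.rand(1, 1, 128, 128)   # dummy CT slice, intensities in [0, 1]
    ref = build_reference(ct_slice, gen, target_size=(128, 128), target_mean=0.4)
    print(ref.shape)                        # reference picture for the inter-prediction stage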