Lossless Coding of Multimodal Image Pairs Based on Image-To-Image Translation

Published in: European Workshop on Visual Information Processing, pp. 1-6
Main authors: Parracho, Joao O.; Thomaz, Lucas A.; Tavora, Luis M. N.; Assuncao, Pedro A. A.; Faria, Sergio M. M.
Format: Conference paper
Language: English
Published: IEEE, 11.09.2022
ISSN: 2471-8963
Description
Summary: Multimodal image coding often uses standard encoding algorithms, which do not exploit multimodality characteristics. This paper proposes a new cross-modality prediction approach for lossless coding of multimodal images, based on a Generative Adversarial Network (GAN). The GAN is added to the prediction loop of the Versatile Video Coding (VVC) lossless encoder to perform cross-modality translation of an image into its counterpart modality. The synthesized image is then used as a reference for inter prediction, followed by further optimization that includes rescaling and brightness adjustment. A publicly available dataset of Positron Emission Tomography (PET) and Computed Tomography (CT) image pairs is used to assess the performance of the proposed multimodal lossless image coding framework. In comparison with single-modality coding using the VVC standard, average coding gains of 6.83% are achieved for the inter-coded PET images.
DOI: 10.1109/EUVIP53989.2022.9922726
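
The following is a minimal Python sketch, not the authors' implementation, illustrating the cross-modality prediction idea described in the summary: a GAN generator translates the CT image into a synthetic PET image, the synthesized image is rescaled and brightness-adjusted, and it then serves as the prediction reference so that only the residual has to be coded losslessly. The names translate_ct_to_pet, rescale_and_adjust and cross_modality_residual are hypothetical; the actual method integrates the GAN into the VVC lossless encoder's prediction loop rather than operating on whole images with NumPy.

import numpy as np


def translate_ct_to_pet(ct):
    # Hypothetical placeholder for the GAN generator G: CT -> synthetic PET.
    # A real system would run a trained image-to-image translation network here.
    return ct.astype(np.float32)


def rescale_and_adjust(pred, target):
    # Global rescaling and brightness (mean) adjustment of the synthesized
    # reference; in a codec, such parameters would be derived from decoded data
    # or signalled in the bitstream so the decoder can rebuild the reference.
    pred = pred.astype(np.float32)
    scale = (target.std() + 1e-6) / (pred.std() + 1e-6)
    adjusted = (pred - pred.mean()) * scale + target.mean()
    return np.clip(np.rint(adjusted), 0, 255).astype(np.int16)


def cross_modality_residual(pet, ct):
    # Build the inter-prediction reference from the CT image and return the
    # residual that an entropy coder would compress losslessly, plus the reference.
    reference = rescale_and_adjust(translate_ct_to_pet(ct), pet)
    residual = pet.astype(np.int16) - reference
    return residual, reference


def reconstruct(residual, reference):
    # Adding the residual back to the reference recovers the PET image exactly.
    return (reference + residual).astype(np.uint8)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # toy CT slice
    pet = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # toy PET slice
    residual, reference = cross_modality_residual(pet, ct)
    assert np.array_equal(reconstruct(residual, reference), pet)  # lossless

The toy example only checks that prediction plus residual reproduces the PET image exactly; the reported 6.83% gain comes from the residual being cheaper to entropy-code than the original samples when the GAN-based reference is a good predictor.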