Multi-view coding for image-based rendering using 3-D scene geometry

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 13, No. 11, pp. 1092-1106
Main Authors: Magnor, M., Ramanathan, P., Girod, B.
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2003
ISSN: 1051-8215, 1558-2205
Description
Summary: To store and transmit the large amount of image data needed for image-based rendering (IBR), efficient coding schemes are required. This paper presents two approaches that exploit three-dimensional scene geometry for multi-view compression. In texture-based coding, images are converted to view-dependent texture maps for compression. In model-aided predictive coding, scene geometry is used for disparity compensation and occlusion detection between images. While both coding strategies attain compression ratios exceeding 2000:1, their coding performance depends on the accuracy of the available geometry model. Experiments with real-world as well as synthetic image sets show that texture-based coding is more sensitive to geometry inaccuracies than predictive coding. A rate-distortion-theoretical analysis of both schemes supports these findings. For reconstructed approximate geometry models, model-aided predictive coding performs best, while texture-based coding yields superior results when scene geometry is exactly known.
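As an illustrative aside (not the authors' implementation), the sketch below shows what geometry-based disparity compensation amounts to: a pixel of a reference view is back-projected with its depth from the geometry model, reprojected into a second view, and used to predict that view; the prediction residual and the unfilled (occluded) pixels are what remain to be coded. All function names, the camera convention, and the per-pixel depth map are assumptions made for this sketch.

```python
import numpy as np

def backproject(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with known depth into world coordinates.
    Convention: x_cam = R @ X_world + t, K is the 3x3 intrinsic matrix."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    X_cam = depth * ray                             # 3-D point in camera frame
    return R.T @ (X_cam - t)                        # transform to world frame

def project(X_world, K, R, t):
    """Project a 3-D world point into a camera; returns pixel coordinates."""
    X_cam = R @ X_world + t
    uvw = K @ X_cam
    return uvw[:2] / uvw[2]

def predict_view(ref_img, depth_map, K_ref, R_ref, t_ref, K_tgt, R_tgt, t_tgt):
    """Forward-warp the reference image into the target view using the
    geometry model. Pixels that receive no sample stay zero and can be
    flagged as occlusions."""
    h, w = depth_map.shape
    pred = np.zeros((h, w) + ref_img.shape[2:], dtype=ref_img.dtype)
    for v in range(h):
        for u in range(w):
            X = backproject(u, v, depth_map[v, u], K_ref, R_ref, t_ref)
            ut, vt = np.round(project(X, K_tgt, R_tgt, t_tgt)).astype(int)
            if 0 <= ut < w and 0 <= vt < h:
                pred[vt, ut] = ref_img[v, u]
    return pred

# Toy usage: with identical cameras the warp reproduces the reference image,
# so the prediction residual is zero; with displaced cameras the residual
# (plus the occlusion holes) is what a predictive coder would encode.
K = np.array([[500.0, 0, 64], [0, 500.0, 64], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
ref = np.random.rand(128, 128).astype(np.float32)
depth = np.full((128, 128), 2.0)
pred = predict_view(ref, depth, K, R, t, K, R, t)
residual = ref - pred
```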
DOI: 10.1109/TCSVT.2003.817630