New Lookup Tables and Searching Algorithms for Fast H.264/AVC CAVLC Decoding

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 20, No. 7, pp. 1007-1017
Main Authors: Lee, Jun-Young; Lee, Jae-Jin; Park, SeongMo
Format: Journal Article
Language: English
Published: New York, NY: IEEE, 01.07.2010
ISSN: 1051-8215, 1558-2205
Description
Summary: In this paper, new codeword structures, tables, and searching methods for fast and efficient coeff_token, total_zeros, and run_before decoding are developed. This achievement is based mainly on the fact that context-adaptive variable length coding (CAVLC) decoding can be modeled as a finite state machine. In order to quantitatively evaluate the proposed method in terms of decoding speed and complexity, we define the iteration bound (1/τ̃) and the complexity ratio (CR). Using these metrics, we show that the new algorithms reduce τ̃ to about one third and the complexity ratio to 0.95. This means that the proposed techniques reduce the decoding time to about one third and the memory access count by 90% compared to conventional methods, without implementation overhead. A multiple-symbol parallel decoding method for the run_before syntax element is also proposed, based on bit positioning, with a critical-path latency of only one multiplexer for the post-combination process. The proposed methods make it possible to implement fast and efficient CAVLC decoding without loss of video quality in any environment.
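
As a rough illustration of the table-driven idea the abstract describes, the following C sketch shows how a prefix-free VLC symbol can be resolved with a single lookup on peeked bits instead of a bit-serial finite-state-machine walk. This is a sketch only: the code table below is hypothetical and is not the paper's coeff_token, total_zeros, or run_before tables, nor the tables from the H.264/AVC specification.

/* Illustrative sketch: generic table-driven VLC decoding.
 * The prefix code here (1, 01, 001, 0001) is hypothetical and only
 * demonstrates the lookup-table principle; it is NOT a CAVLC table. */
#include <stdint.h>
#include <stdio.h>

#define PEEK_BITS 4  /* longest codeword length covered by the table */

typedef struct {
    uint8_t symbol;  /* decoded value */
    uint8_t length;  /* number of bits actually consumed */
} VlcEntry;

/* Every PEEK_BITS-bit pattern that begins with a codeword maps to that
 * codeword, so one peek resolves both the symbol and its length. */
static VlcEntry lut[1 << PEEK_BITS];

static void build_lut(void)
{
    for (unsigned idx = 0; idx < (1u << PEEK_BITS); idx++) {
        if (idx & 0x8)      lut[idx] = (VlcEntry){0, 1}; /* 1xxx */
        else if (idx & 0x4) lut[idx] = (VlcEntry){1, 2}; /* 01xx */
        else if (idx & 0x2) lut[idx] = (VlcEntry){2, 3}; /* 001x */
        else                lut[idx] = (VlcEntry){3, 4}; /* 0001 */
    }
}

/* Minimal MSB-first bit reader over a byte buffer. */
typedef struct {
    const uint8_t *buf;
    size_t bitpos;
} BitReader;

static unsigned peek_bits(const BitReader *br, unsigned n)
{
    unsigned v = 0;
    for (unsigned i = 0; i < n; i++) {
        size_t p = br->bitpos + i;
        v = (v << 1) | ((br->buf[p >> 3] >> (7 - (p & 7))) & 1);
    }
    return v;
}

static unsigned decode_symbol(BitReader *br)
{
    VlcEntry e = lut[peek_bits(br, PEEK_BITS)];
    br->bitpos += e.length;  /* consume only the bits of the matched code */
    return e.symbol;
}

int main(void)
{
    build_lut();
    /* 0xA4 0x40 = 1 01 001 0001 (padded), i.e. symbols 0, 1, 2, 3. */
    const uint8_t stream[] = {0xA4, 0x40};
    BitReader br = {stream, 0};
    for (int i = 0; i < 4; i++)
        printf("symbol %d: %u\n", i, decode_symbol(&br));
    return 0;
}

In the actual CAVLC syntax, the table to use is additionally selected by context (for example, the number of nonzero coefficients in neighboring blocks for coeff_token, or the number of zeros remaining for run_before); the sketch omits that context dimension and the paper's specific codeword structures.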
DOI: 10.1109/TCSVT.2010.2051278