Entropy encoding and decoding using direct level and run-length/level context-adaptive arithmetic coding/decoding modes

Bibliographic Details
Title: Entropy encoding and decoding using direct level and run-length/level context-adaptive arithmetic coding/decoding modes
Patent Number: 8,712,783
Publication Date: April 29, 2014
Appl. No: 13/306,761
Application Filed: November 29, 2011
Abstract: An encoder performs context-adaptive arithmetic encoding of transform coefficient data. For example, an encoder switches between coding of direct levels of quantized transform coefficient data and run-level coding of run lengths and levels of quantized transform coefficient data. The encoder can determine when to switch between coding modes based on a pre-determined switch point or by counting consecutive coefficients having a predominant value (e.g., zero). A decoder performs corresponding context-adaptive arithmetic decoding.
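The mode-switch logic described in the abstract can be sketched as follows. This is an illustrative simplification, not the patented implementation: the sketch emits direct levels until it counts a run of consecutive zero coefficients (the predominant value) reaching an assumed threshold, then switches to coding (run, level) pairs. The threshold value, the symbol tuples standing in for arithmetic-coded output, and all names are assumptions; a real encoder would feed these symbols to a context-adaptive arithmetic coder.

```python
# Hypothetical sketch of switching from direct-level coding to run-level
# coding once consecutive zeros suggest the rest of the block is sparse.
# Symbols are collected in a list purely to show the mode-switch decision.

ZERO_RUN_THRESHOLD = 3  # assumed switch trigger, not taken from the patent

def encode_block(coeffs):
    symbols = []   # (mode, payload) tuples standing in for coded symbols
    zero_run = 0
    for i, level in enumerate(coeffs):
        zero_run = zero_run + 1 if level == 0 else 0
        symbols.append(("direct", level))
        if zero_run >= ZERO_RUN_THRESHOLD:
            # Switch modes: code remaining coefficients as (run, level) pairs.
            symbols.append(("switch", i + 1))
            run = 0
            for lvl in coeffs[i + 1:]:
                if lvl == 0:
                    run += 1
                else:
                    symbols.append(("run_level", (run, lvl)))
                    run = 0
            if run:
                symbols.append(("eob_run", run))  # trailing zeros
            break
    return symbols
```

For a block like `[5, 0, 3, 0, 0, 0, 2, 0, 0, 1, 0]`, the first six coefficients are coded directly, the three consecutive zeros trigger the switch, and the tail becomes the pairs (0, 2) and (2, 1) plus a trailing zero run.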
Inventors: Mehrotra, Sanjeev (Kirkland, WA, US); Chen, Wei-Ge (Sammamish, WA, US)
Assignees: Microsoft Corporation (Redmond, WA, US)
Claim: 1. A computer-readable memory or storage device storing computer-executable instructions for causing a computing device that implements an encoder to perform a method of encoding audio or video data, the method comprising: performing a frequency transform on a block of plural samples to produce plural transform coefficients; quantizing the plural transform coefficients; and entropy coding the plural quantized transform coefficients, wherein the entropy coding includes: encoding one or more of the plural quantized transform coefficients using a direct level encoding mode, including performing first context-adaptive arithmetic coding of a level value of a given coefficient of the plural quantized transform coefficients, wherein the first context-adaptive arithmetic coding uses a first set of plural contexts, and wherein the first context-adaptive arithmetic coding includes selecting one of the first set of plural contexts based at least in part on level values of two previously encoded quantized transform coefficients; switching to a run-level encoding mode for remaining coefficients of the plural quantized transform coefficients; encoding the remaining quantized transform coefficients using the run-level encoding mode, including: performing second context-adaptive arithmetic coding of a non-zero level value of one of the remaining quantized transform coefficients, wherein the second context-adaptive arithmetic coding uses a second set of plural contexts different than the first set of plural contexts, and wherein the second context-adaptive arithmetic coding includes selecting one of the second set of plural contexts based at least in part on a level value of a previously encoded quantized transform coefficient; and performing third context-adaptive arithmetic coding of a run value, the run value indicating a count of consecutive zero-value remaining quantized transform coefficients adjacent the remaining quantized transform coefficient with the non-zero 
level value, wherein the third context-adaptive arithmetic coding uses a third set of plural contexts different than the first set of plural contexts and different than the second set of plural contexts.
Claim: 2. The computer-readable memory or storage device of claim 1 wherein the entropy coding further includes: selecting the first context-adaptive arithmetic coding from among multiple encoding techniques available for the direct level encoding mode, wherein the multiple encoding techniques available for the direct level encoding mode further include variable length coding of level values; and selecting the second context-adaptive arithmetic coding and the third context-adaptive arithmetic coding from among multiple encoding techniques available for the run-level encoding mode, wherein the multiple encoding techniques available for the run-level encoding mode further include variable length coding of run values and level values.
Claim: 3. The computer-readable memory or storage device of claim 1 wherein the switching from the direct level encoding mode to the run-level encoding mode happens at a pre-determined switch point.
Claim: 4. The computer-readable memory or storage device of claim 1 wherein the selected context from the first set of plural contexts changes depending on (1) whether the level value of a first of the two previously encoded quantized transform coefficients is zero or non-zero and (2) whether the level value of a second of the two previously encoded quantized transform coefficients is zero or non-zero.
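Claim 4's context selection can be illustrated with a small sketch: in the direct level mode, the context index depends only on whether each of the two previously coded level values is zero or non-zero, which yields four contexts. The function name and the mapping of contexts onto probability models are assumptions for illustration only.

```python
def direct_level_context(prev1, prev2):
    """Pick one of four contexts from the zero/non-zero status of the two
    previously coded quantized coefficient levels (illustrative sketch)."""
    return (1 if prev1 != 0 else 0) * 2 + (1 if prev2 != 0 else 0)

# Each context index would select a separate adaptive probability model in
# the arithmetic coder; e.g., a coefficient following two zeros is itself
# likely zero, so that context's model skews toward zero.
```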
Claim: 5. The computer-readable memory or storage device of claim 1 wherein the encoding the remaining ones of the plural quantized transform coefficients using the run-level encoding mode includes repeating the second context-adaptive arithmetic coding and the third context-adaptive arithmetic coding for each of one or more other pairs of non-zero level value and run value.
Claim: 6. The computer-readable memory or storage device of claim 5 wherein: for a first non-zero level value in the run-level encoding mode, the selection of one of the second set of plural contexts considers the level value of the given quantized transform coefficient from the direct level encoding mode; and for a subsequent non-zero level value in the run-level encoding mode, the selection of one of the second set of plural contexts considers the first non-zero level value.
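Claims 5 and 6 describe how, in the run-level mode, the context for each non-zero level chains on the previously coded level value: the first level after the switch conditions on the last level coded in direct mode, and each later level conditions on the preceding non-zero level. A hedged sketch of that chaining, in which the bucketing of level magnitudes into context indices is an assumption:

```python
def run_level_contexts(last_direct_level, nonzero_levels, num_contexts=4):
    """Return an assumed context index for each non-zero level coded in
    run-level mode, chaining on the previously coded level value."""
    contexts = []
    prev = last_direct_level
    for level in nonzero_levels:
        # Bucket the previous level's magnitude into a context index,
        # capping at num_contexts - 1 (the bucketing is illustrative).
        contexts.append(min(abs(prev), num_contexts - 1))
        prev = level
    return contexts
```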
Claim: 7. A computer-readable memory or storage device storing computer-executable instructions for causing a computing device that implements a decoder to perform a method of decoding audio or video data, the method comprising: entropy decoding plural quantized transform coefficients in a block, wherein the entropy decoding includes: decoding one or more of the plural quantized transform coefficients using a direct level decoding mode, including performing first context-adaptive arithmetic decoding of a level value of a given coefficient of the plural quantized transform coefficients, wherein the first context-adaptive arithmetic decoding uses a first set of plural contexts, and wherein the first context-adaptive arithmetic decoding includes selecting one of the first set of plural contexts based at least in part on level values of two previously decoded quantized transform coefficients; switching to a run-level decoding mode for remaining coefficients of the plural quantized transform coefficients in the block; decoding the remaining quantized transform coefficients using the run-level decoding mode, including: performing second context-adaptive arithmetic decoding of a non-zero level value of one of the remaining quantized transform coefficients, wherein the second context-adaptive arithmetic decoding uses a second set of plural contexts different than the first set of plural contexts, and wherein the second context-adaptive arithmetic decoding includes selecting one of the second set of plural contexts based at least in part on a level value of a previously decoded quantized transform coefficient; and performing third context-adaptive arithmetic decoding of a run value, the run value indicating a count of consecutive zero-value remaining quantized transform coefficients adjacent the remaining quantized transform coefficient with the non-zero level value, wherein the third context-adaptive arithmetic decoding uses a third set of plural
contexts different than the first set of plural contexts and different than the second set of plural contexts; and inverse quantizing the plural transform coefficients in the block; and performing an inverse frequency transform on the plural transform coefficients to produce a block of the plural samples.
Claim: 8. The computer-readable memory or storage device of claim 7 wherein the entropy decoding further includes: selecting the first context-adaptive arithmetic decoding from among multiple decoding techniques available for the direct level decoding mode, wherein the multiple decoding techniques available for the direct level decoding mode further include variable length decoding of level values; and selecting the second context-adaptive arithmetic decoding and the third context-adaptive arithmetic decoding from among multiple decoding techniques available for the run-level decoding mode, wherein the multiple decoding techniques available for the run-level decoding mode further include variable length decoding of run values and level values.
Claim: 9. The computer-readable memory or storage device of claim 7 wherein the switching from the direct level decoding mode to the run-level decoding mode happens at a pre-determined switch point.
Claim: 10. The computer-readable memory or storage device of claim 7 wherein the selected context from the first set of plural contexts changes depending on (1) whether the level value of a first of the two previously decoded quantized transform coefficients is zero or non-zero and (2) whether the level value of a second of the two previously decoded quantized transform coefficients is zero or non-zero.
Claim: 11. The computer-readable memory or storage device of claim 7 wherein the decoding the remaining ones of the plural quantized transform coefficients using the run-level decoding mode includes repeating the second context-adaptive arithmetic decoding and the third context-adaptive arithmetic decoding for each of one or more other pairs of non-zero level value and run value.
Claim: 12. The computer-readable memory or storage device of claim 11 wherein: for a first non-zero level value in the run-level decoding mode, the selection of one of the second set of plural contexts considers the level value of the given quantized transform coefficient from the direct level decoding mode; and for a subsequent non-zero level value in the run-level decoding mode, the selection of one of the second set of plural contexts considers the first non-zero level value.
Claim: 13. A computing device that implements a decoder, the computing device comprising: one or more processors; memory; and one or more storage media storing instructions for causing the computing device to perform a method of decoding audio or video data, the method comprising: entropy decoding plural quantized transform coefficients in a block, wherein the entropy decoding includes: decoding one or more of the plural quantized transform coefficients using a first decoding mode, including performing first context-adaptive arithmetic decoding of a level value of a given coefficient of the plural quantized transform coefficients, wherein the first context-adaptive arithmetic decoding uses a first set of plural contexts, and wherein the first context-adaptive arithmetic decoding includes selecting one of the first set of plural contexts based at least in part on level values of two previously decoded quantized transform coefficients; switching to a second decoding mode for remaining coefficients of the plural quantized transform coefficients in the block; and decoding the remaining quantized transform coefficients using the second decoding mode, including: performing second context-adaptive arithmetic decoding of a first level value and a second level value of a first remaining coefficient and second remaining coefficient, respectively, of the remaining quantized transform coefficients, wherein the second context-adaptive arithmetic decoding uses a second set of plural contexts different than the first set of plural contexts, and wherein: for the first level value in the second decoding mode, the selection of one of the second set of plural contexts considers the level value of the given quantized transform coefficient from the first decoding mode; and for the second level value in the second decoding mode, the selection of one of the second set of plural contexts considers the first level value; and inverse quantizing the plural transform coefficients in the 
block; and performing an inverse frequency transform on the plural transform coefficients to produce a block of the plural samples.
Claim: 14. The computing device of claim 13 wherein the first decoding mode is a direct level decoding mode, wherein the second decoding mode is a run-level decoding mode, and wherein the decoding the remaining coefficients using the second decoding mode further includes: performing third context-adaptive arithmetic decoding of a run value, the run value indicating a count of consecutive zero-value remaining quantized transform coefficients adjacent the first or second level value in the second decoding mode, wherein the third context-adaptive arithmetic decoding uses a third set of plural contexts different than the first set of plural contexts and different than the second set of plural contexts.
Claim: 15. The computing device of claim 14 wherein the entropy decoding further includes: selecting the first context-adaptive arithmetic decoding from among multiple decoding techniques available for the direct level decoding mode, wherein the multiple decoding techniques available for the direct level decoding mode further include variable length decoding of level values; and selecting the second context-adaptive arithmetic decoding and the third context-adaptive arithmetic decoding from among multiple decoding techniques available for the run-level decoding mode, wherein the multiple decoding techniques available for the run-level decoding mode further include variable length decoding of run values and level values.
Claim: 16. The computing device of claim 13 wherein the switching from the direct level decoding mode to the run-level decoding mode happens at a pre-determined switch point.
Claim: 17. The computing device of claim 13 wherein the selected context from the first set of plural contexts changes depending on (1) whether the level value of a first of the two previously decoded quantized transform coefficients is zero or non-zero and (2) whether the level value of a second of the two previously decoded quantized transform coefficients is zero or non-zero.
Claim: 18. The computing device of claim 13 wherein the computing device includes a display and a wireless communication connection, and wherein the method further comprises receiving, over the wireless communication connection, a bit stream comprising the audio or video data.
Current U.S. Class: 704/500
Patent References Cited: 4420771 December 1983 Pirsch
4698672 October 1987 Chen
4730348 March 1988 MacCrisken
4792981 December 1988 Cahill et al.
4813056 March 1989 Fedele
4901075 February 1990 Vogel
4968135 November 1990 Wallace et al.
5043919 August 1991 Callaway et al.
5089818 February 1992 Mahieux et al.
5109451 April 1992 Aono et al.
5128758 July 1992 Azadegan
5146324 September 1992 Miller et al.
5179442 January 1993 Azadegan
5227788 July 1993 Johnston
5227878 July 1993 Puri et al.
5253053 October 1993 Chu et al.
5266941 November 1993 Akeley et al.
5270832 December 1993 Balkanski et al.
5367629 November 1994 Chu et al.
5373513 December 1994 Howe et al.
5376968 December 1994 Wu et al.
5381144 January 1995 Wilson et al.
5394170 February 1995 Akeley et al.
5400075 March 1995 Savatier
5408234 April 1995 Chu
5457495 October 1995 Hartung
5461421 October 1995 Moon
5467134 November 1995 Laney
5473376 December 1995 Auyeung
5481553 January 1996 Suzuki
5493407 February 1996 Takahara
5504591 April 1996 Dujari
5508816 April 1996 Ueda et al.
5533140 July 1996 Sirat et al.
5535305 July 1996 Acero et al.
5544286 August 1996 Laney
5559557 September 1996 Kato et al.
5559831 September 1996 Keith
5568167 October 1996 Galbi et al.
5574449 November 1996 Golin
5579430 November 1996 Grill et al.
5592584 January 1997 Ferreira et al.
5627938 May 1997 Johnston
5654702 August 1997 Ran
5654706 August 1997 Jeong et al.
5661755 August 1997 Van de Kerkhof
5664057 September 1997 Crossman et al.
5675332 October 1997 Limberg
5714950 February 1998 Jeong et al.
5717821 February 1998 Tsutsui
5732156 March 1998 Watanabe et al.
5734340 March 1998 Kennedy
5748789 May 1998 Lee et al.
5793897 August 1998 Jo et al.
5801648 September 1998 Satoh et al.
5802213 September 1998 Gardos
5812971 September 1998 Herre
5819215 October 1998 Dobson et al.
5825830 October 1998 Kopf
5825979 October 1998 Tsutsui et al.
5828426 October 1998 Yu
5831559 November 1998 Agarwal et al.
5835030 November 1998 Tsutsui et al.
5835144 November 1998 Matsumura
5844508 December 1998 Murashita et al.
5850482 December 1998 Meany et al.
5883633 March 1999 Gill et al.
5884269 March 1999 Cellier et al.
5889891 March 1999 Gersho et al.
5903231 May 1999 Emelko
5946043 August 1999 Lee et al.
5969650 October 1999 Wilson
5974184 October 1999 Eifrig et al.
5974380 October 1999 Smyth et al.
5982437 November 1999 Okazaki
5983172 November 1999 Takashima et al.
5990960 November 1999 Murakami
5991451 November 1999 Keith et al.
5995670 November 1999 Zabinsky
6002439 December 1999 Murakami
6009387 December 1999 Ramaswamy et al.
6026195 February 2000 Eifrig et al.
6038536 March 2000 Haroun et al.
6041302 March 2000 Bruekers
6049630 April 2000 Wang et al.
6054943 April 2000 Lawrence
6078691 June 2000 Luttmer
6097759 August 2000 Murakami et al.
6097880 August 2000 Koyata
6100825 August 2000 Sedluk
6111914 August 2000 Bist
6140944 October 2000 Toyoyama
6148109 November 2000 Boon et al.
6154572 November 2000 Chaddha
6195465 February 2001 Zandi et al.
6205256 March 2001 Chaddha
6208274 March 2001 Taori et al.
6215910 April 2001 Chaddha
6223162 April 2001 Chen
6226407 May 2001 Zabih et al.
6233017 May 2001 Chaddha
6233359 May 2001 Ratnakar et al.
6253165 June 2001 Malvar
6259810 July 2001 Gill et al.
6272175 August 2001 Sriram et al.
6292588 September 2001 Shen
6300888 October 2001 Chen
6304928 October 2001 Mairs et al.
6337881 January 2002 Chaddha
6341165 January 2002 Gbur
6345123 February 2002 Boon
6349152 February 2002 Chaddha
6360019 March 2002 Chaddha
6373411 April 2002 Shoham
6373412 April 2002 Mitchell et al.
6377930 April 2002 Chen
6392705 May 2002 Chaddha
6404931 June 2002 Chen
6408029 June 2002 McVeigh et al.
6420980 July 2002 Ejima
6421738 July 2002 Ratan et al.
6424939 July 2002 Herre et al.
6441755 August 2002 Dietz et al.
6477280 November 2002 Malvar
6487535 November 2002 Smyth et al.
6493385 December 2002 Sekiguchi et al.
6542631 April 2003 Ishikawa
6542863 April 2003 Surucu
6567781 May 2003 Lafe
6573915 June 2003 Sivan et al.
6577681 June 2003 Kimura
6580834 June 2003 Li et al.
6587057 July 2003 Scheuermann
6606039 August 2003 Hirano
6608935 August 2003 Nagumo et al.
6636168 October 2003 Ohashi et al.
6646578 November 2003 Au
6650784 November 2003 Thyagarajan
6653952 November 2003 Hayami et al.
6678419 January 2004 Malvar
6704360 March 2004 Haskell et al.
6721700 April 2004 Yin
6728317 April 2004 Demos
6735339 May 2004 Ubale
6766293 July 2004 Herre et al.
6771777 August 2004 Gbur et al.
6795584 September 2004 Karczewicz et al.
6825847 November 2004 Molnar et al.
6829299 December 2004 Chujoh et al.
6856701 February 2005 Karczewicz et al.
6934677 August 2005 Chen et al.
6959116 October 2005 Sezer et al.
7016547 March 2006 Smirnov
7043088 May 2006 Chiu
7076104 July 2006 Keith et al.
7107212 September 2006 Van Der Vleuten et al.
7139703 November 2006 Acero et al.
7143030 November 2006 Chen et al.
7165028 January 2007 Gong
7215707 May 2007 Lee et al.
7266149 September 2007 Holcomb
7274671 September 2007 Hu
7328150 February 2008 Chen et al.
7433824 October 2008 Mehrotra et al.
7454076 November 2008 Chen et al.
7460990 December 2008 Mehrotra et al.
7502743 March 2009 Thumpudi et al.
7536305 May 2009 Chen et al.
7546240 June 2009 Mehrotra et al.
7562021 July 2009 Mehrotra et al.
7599840 October 2009 Mehrotra et al.
7630882 December 2009 Mehrotra et al.
7684981 March 2010 Thumpudi et al.
7693709 April 2010 Thumpudi et al.
7756350 July 2010 Vos et al.
7761290 July 2010 Koishida et al.
7822601 October 2010 Mehrotra et al.
7840403 November 2010 Mehrotra et al.
8090574 January 2012 Mehrotra et al.
2002/0009145 January 2002 Natarajan et al.
2002/0031185 March 2002 Webb
2002/0111780 August 2002 Sy
2002/0141422 October 2002 Hu
2003/0006917 January 2003 Ohashi et al.
2003/0033143 February 2003 Aronowitz
2003/0085822 May 2003 Scheuermann
2003/0115055 June 2003 Gong
2003/0138150 July 2003 Srinivasan
2003/0156648 August 2003 Holcomb et al.
2003/0210163 November 2003 Yang
2004/0044521 March 2004 Chen et al.
2004/0044534 March 2004 Chen et al.
2004/0049379 March 2004 Thumpudi et al.
2004/0114810 June 2004 Boliek
2004/0136457 July 2004 Funnell et al.
2004/0184537 September 2004 Geiger et al.
2004/0196903 October 2004 Kottke et al.
2005/0015249 January 2005 Mehrotra et al.
2005/0021317 January 2005 Weng et al.
2005/0052294 March 2005 Liang et al.
2005/0286634 December 2005 Duvivier
2006/0023792 February 2006 Cho et al.
2006/0078208 April 2006 Malvar
2006/0088222 April 2006 Han et al.
2006/0104348 May 2006 Chen et al.
2006/0153304 July 2006 Prakash et al.
2006/0176959 August 2006 Lu et al.
2006/0268990 November 2006 Lin et al.
2006/0285760 December 2006 Malvar
2006/0290539 December 2006 Tomic
2007/0016406 January 2007 Thumpudi et al.
2007/0016415 January 2007 Thumpudi et al.
2007/0016418 January 2007 Mehrotra et al.
2007/0116369 May 2007 Zandi et al.
2007/0126608 June 2007 Sriram
2007/0200737 August 2007 Gao et al.
2007/0242753 October 2007 Jeon et al.
2008/0043030 February 2008 Huang et al.
2008/0089421 April 2008 Je-Chang et al.
2008/0228476 September 2008 Mehrotra et al.
2008/0262855 October 2008 Mehrotra et al.
2008/0317364 December 2008 Gou et al.
2011/0035225 February 2011 Mehrotra et al.
0540350 May 1993
0910927 January 1998
0966793 September 1998
0931386 January 1999
1 142 130 April 2003
1 400 954 March 2004
1 142 129 June 2004
1809046 May 2009
2 372 918 September 2002
2 388 502 November 2003
01-091587 April 1989
01-125028 May 1989
03-108824 May 1991
5-199422 June 1993
5-292481 November 1993
06-021830 January 1994
6-217110 August 1994
07-087331 March 1995
07-273658 October 1995
7-274171 October 1995
08-116263 May 1996
08-167852 June 1996
08-190764 July 1996
08-205169 August 1996
08-237138 September 1996
10-229340 August 1998
11-041573 February 1999
2000-338998 December 2000
2001-007707 January 2001
2002-158589 May 2002
2002-198822 July 2002
2002-204170 July 2002
2002-540711 November 2002
2007-300389 November 2007
WO 88/01811 March 1988
WO 91/14340 September 1991
WO 98/00924 January 1998
WO 98/00977 January 1998
WO 02/35849 May 2002
Other References: U.S. Appl. No. 60/341,674, filed Dec. 17, 2001, Lee et al. cited by applicant
U.S. Appl. No. 60/488,710, filed Jul. 18, 2003, Srinivasan et al. cited by applicant
AAC Standard, ISO/IEC 13818-7, 1993. cited by applicant
Advanced Television Systems Committee, ATSC Standard: Digital Audio Compression (AC-3), Revision A, 140 pp. (1995). cited by applicant
Bell et al., “Text Compression,” Prentice Hall, pp. 105-107, 1990. cited by applicant
Bosi et al., “ISO/IEC MPEG-2 Advanced Audio Coding,” J. Audio Eng. Soc., vol. 45, No. 10, pp. 789-812 (1997). cited by applicant
Brandenburg, “ASPEC CODING”, AES 10th International Conference, pp. 81-90 (Sep. 1991). cited by applicant
Brandenburg et al., “ASPEC: Adaptive Spectral Entropy Coding of High Quality Music Signals,” Proc. AES, 12 pp. (Feb. 1991). cited by applicant
Brandenburg, “OCF: Coding High Quality Audio with Data Rates of 64 kbit/sec,” Proc. AES, 17 pp. (Mar. 1988). cited by applicant
Brandenburg, “High Quality Sound Coding at 2.5 Bits/Sample,” Proc. AES, 15 pp. (Mar. 1988). cited by applicant
Brandenburg et al., “Low Bit Rate Codecs for Audio Signals: Implementations in Real Time,” Proc. AES, 12 pp. (Nov. 1988). cited by applicant
Brandenburg et al., “Low Bit Rate Coding of High-quality Digital Audio: Algorithms and Evaluation of Quality,” Proc. AES, pp. 201-209 (May 1989). cited by applicant
Brandenburg, “OCF—A New Coding Algorithm for High Quality Sound Signals,” Proc. ICASSP, pp. 5.1.1-5.1.4 (May 1987). cited by applicant
Brandenburg et al, “Second Generation Perceptual Audio Coding: the Hybrid Coder,” AES Preprint, 13 pp. (Mar. 1990). cited by applicant
Chiang et al., “A Radix-2 Non-Restoring 32-b/32-b Ring Divider with Asynchronous Control Scheme,” Tamkang Journal of Science and Engineering, vol. 2, No. 1, pp. 37-43 (1999). cited by applicant
Chung et al., “A Novel Memory-efficient Huffman Decoding Algorithm and its Implementation,” Signal Processing 62, pp. 207-213 (1997). cited by applicant
Costa et al., “Efficient Run-Length Encoding of Binary Sources with Unknown Statistics”, Technical Report No. MSR-TR-2003-95, pp. 1-10, Microsoft Research, Microsoft Corporation (Dec. 2003). cited by applicant
Cui et al., “A novel VLC based on second-run-level coding and dynamic truncation,” Proc. SPIE, vol. 6077, pp. 607726-1 to 607726-9 (2006). cited by applicant
Davidson et al., “Still Image Coding Standard—JPEG,” Helsinki University of Technology, Chapter 7, 24 pages, downloaded from the World Wide Web (2004). cited by applicant
Davis, “The AC-3 Multichannel Coder,” Dolby Laboratories Inc., Audio Engineering Society, Oct. 1993. cited by applicant
De Agostino et al., “Parallel Algorithms for Optimal Compression using Dictionaries with the Prefix Property,” in Proc. Data Compression Conference '92, IEEE Computer Society Press, pp. 52-62 (1992). cited by applicant
Alberto Del Bimbo, “Progettazione e Produzione Multimediale,” Univ. degli Studi di Firenze, <http://www.dsi.unifi.it/~delbimbo/documents/ppmm/image_encoding.pdf>, 46 pages (accessed Oct. 19, 2010). cited by applicant
Duhamel et al., “A Fast Algorithm for the Implementation of Filter Banks Based on Time Domain Aliasing Cancellation,” Proc. Int'l Conf. Acous., Speech, and Sig. Process, pp. 2209-2212 (May 1991). cited by applicant
European Search Report, Application No. 10180949.9, 6 pages, Nov. 22, 2010. cited by applicant
Gailly, “comp.compression Frequently Asked Questions (part 1/3),” 64 pp., document marked Sep. 5, 1999 [Downloaded from the World Wide Web on Sep. 5, 2007]. cited by applicant
Gersho et al., “Vector Quantization and Signal Compression,” pp. 259-305, 1992. cited by applicant
Gibson et al., “Digital Compression for Multimedia,” “Chapter 7: Frequency Domain Coding,” Morgan Kaufmann Publishers, pp. 227-262, 1998. cited by applicant
Gibson et al., Digital Compression for Multimedia, “Chapter 2: Lossless Source Coding,” Morgan Kaufmann Publishers, Inc., San Francisco, pp. 17-61 (1998). cited by applicant
Gill et al., “Creating High-Quality Content with Microsoft Windows Media Encoder 7,” <http://msdn.microsoft.com/library/en-us/dnwmt/html/contcreation.asp?frame=true> 4 pp. (2000). cited by applicant
Hui et al., “Matsushita Algorithm for Coding of Moving Picture Information,” ISO/IEC-JTC1/SC29/WG11, MPEG91/217, 76 pages, Nov. 1991. cited by applicant
Ishii et al., “Parallel Variable Length Decoding with Inverse Quantization for Software MPEG-2 Decoders,” IEEE Signal Processing Systems, pp. 500-509 (1997). cited by applicant
“ISO/IEC 11172-3, Information Technology—Coding of Moving Pictures and Associated Audio for Digital Storage Media at Up to About 1.5 Mbit/s—Part 3: Audio,” 157 pp. (1993). cited by applicant
“ISO/IEC 13818-7, Information Technology—Generic Coding of Moving Pictures and Associated Audio Information—Part 7: Advanced Audio Coding (AAC),” 152 pp. (1997). cited by applicant
ISO/IEC 14496-2, “Coding of Audio-Visual Object—Part 2: Visual,” Third Edition, pp. 1-727, (Jun. 2004). cited by applicant
ITU-T Recommendation H.264, “Series H: Audiovisual and Multimedia Systems, Infrastructure of Audiovisual Services—Coding of Moving Video,” International Telecommunication Union, pp. 1-262 (May 2003). cited by applicant
ITU-T Recommendation T.800, “Series T: Terminals for Telematic Services,” International Telecommunication Union, pp. 1-194 (Aug. 2002). cited by applicant
Iwadare et al., “A 128 kb/s Hi-Fi Audio CODEC Based on Adaptive Transform Coding with Adaptive Block Size MDCT,” IEEE. J. Sel. Areas in Comm., pp. 138-144 (Jan. 1992). cited by applicant
Jeong et al., “Adaptive Huffman Coding of 2-D DCT Coefficients for Image Sequence Compression,” Signal Processing: Image Communication, vol. 7, 11 pp. (1995). cited by applicant
Johnston, “Perceptual Transform Coding of Wideband Stereo Signals,” Proc. ICASSP, pp. 1993-1996 (May 1989). cited by applicant
Johnston, “Transform Coding of Audio Signals Using Perceptual Noise Criteria,” IEEE J. Sel. Areas in Comm., pp. 314-323 (Feb. 1988). cited by applicant
Mahieux et al., “Transform Coding of Audio Signals at 64 kbits/sec,” Proc. Globecom, pp. 405.2.1-405.2.5 (Nov. 1990). cited by applicant
Malvar, “Fast Progressive Image Coding without Wavelets”, IEEE Data Compression Conference, Snowbird, Utah, 10 pp. (Mar. 2000). cited by applicant
Marpe et al., “Adaptive Codes for H.26L,” ITU Study Group 16—Video Coding Experts Group—ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q6), No. VCEG-L13, 7 pages, Jan. 8, 2001. cited by applicant
Murray and Van Ryper, “JPEG compression,” Encyclopedia of Graphics File Formats, 2nd Edition, Chapter 9, Section 6, 10 pp., downloaded from the World Wide Web (1996). cited by applicant
Najafzadeh-Azghandi, “Perceptual Coding of Narrowband Audio Signals,” Thesis, 139pp. (Apr. 2000). cited by applicant
Nelson, “The Data Compression Book,” M&T Books, pp. 123-165, 1992. cited by applicant
Princen et al., “Analysis/Synthesis Filter Bank Design Based on Time Domain Aliasing Cancellation,” IEEE Trans. ASSP, pp. 1153-1161 (Oct. 1986). cited by applicant
Reader, “History of MPEG Video Compression—Ver. 4.0,” 99 pp., document marked Dec. 16, 2003. cited by applicant
Schroder et al., “High Quality Digital Audio Encoding with 3.0 Bits/Sample using Adaptive Transform Coding,” Proc. 80th Conv. Aud. Eng. Soc., 8 pp. (Mar. 1986). cited by applicant
Shamoon et al., “A Rapidly Adaptive Lossless Compression Algorithm for High Fidelity Audio Coding,” IEEE Data Compression Conf. 1994, pp. 430-439 (Mar. 1994). cited by applicant
Sullivan et al., “The H.264/AVC Advanced Video Coding Standard: Overview and Introduction to the Fidelity Range Extensions,” 21 pp. (Aug. 2004). cited by applicant
Theile et al., “Low-Bit Rate Coding of High Quality Audio Signals,” Proc. AES, 32 pp. (Mar. 1987). cited by applicant
Tu et al., “Context-Based Entropy Coding of Block Transform Coefficients for Image Compression,” IEEE Transactions on Image Processing, vol. 11, No. 11, pp. 1271-1283 (Nov. 2002). cited by applicant
Wien, “Variable Block-Size Transforms for Hybrid Video Coding,” Dissertation, 182 pp. (Feb. 2004). cited by applicant
Jeon et al., “Huffman Coding of DCT Coefficients Using Dynamic Codeword Assignment and Adaptive Codebook Selection,” Signal Processing: Image Communication 12, pp. 253-262 (1998). cited by applicant
Lakhani, “Optimal Huffman Coding of DCT Blocks,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 14, No. 4, pp. 522-527 (2004). cited by applicant
Herre, Jurgen, “Temporal Noise Shaping, Quantization and Coding Methods in Perceptual Audio Coding: a Tutorial Introduction,” AES 17th Int'l Conference on High Quality Audio Coding, 14 pp. (1999). cited by applicant
Memon et al., “Lossless Compression of Video Sequences,” IEEE Trans. on Communications, 6 pp. (1996). cited by applicant
Quackenbush et al., “Noiseless Coding of Quantized Spectral Components in MPEG-2 Advanced Audio Coding,” Proc. 1997 Workshop on Applications of Signal Processing to Audio and Acoustics, 4 pp. (1997). cited by applicant
Primary Examiner: Lerner, Martin
Attorney, Agent or Firm: Chatterjee, Aaron
Sanders, Andrew
Minhas, Micky
Accession Number: edspgr.08712783
Database: USPTO Patent Grants