TEXT COMPRESSION ALGORITHMS - A COMPARATIVE STUDY

Bibliographic details
Published in: ICTACT Journal on Communication Technology, Volume 2, Issue 4, pp. 444-451
Main authors: S. Senthil, L. Robert
Format: Journal Article
Language: English
Publication details: ICT Academy of Tamil Nadu, 01.12.2011
ISSN: 0976-0091, 2229-6948
DOI: 10.21917/ijct.2011.0062
Description
Summary: Data compression may be defined as the science and art of representing information in a compact form. For decades, data compression has been one of the critical enabling technologies for the ongoing digital multimedia revolution. Many data compression algorithms are available to compress files of different formats. This paper provides a survey of basic lossless data compression algorithms. Experimental results and comparisons of the lossless compression algorithms, covering statistical compression techniques and dictionary-based compression techniques, were obtained on text data. Among the statistical coding techniques, Shannon-Fano coding, Huffman coding, Adaptive Huffman coding, Run Length Encoding, and Arithmetic coding are considered. The Lempel-Ziv scheme, a dictionary-based technique, is divided into two families: one derived from LZ77 (LZ77, LZSS, LZH, LZB and LZR) and the other derived from LZ78 (LZ78, LZW, LZFG, LZC and LZT). A set of interesting conclusions is derived on this basis.
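
For orientation only, the Python sketch below (not taken from the paper) illustrates the two families the survey compares: Run Length Encoding as a representative statistical technique, and a toy LZW encoder standing in for the LZ78 dictionary-based family. The function names, sample strings, and the explicit alphabet parameter are illustrative assumptions.

from itertools import groupby

def rle_encode(text):
    # Run Length Encoding: collapse each run of a repeated symbol into (symbol, count).
    return [(symbol, len(list(run))) for symbol, run in groupby(text)]

def rle_decode(pairs):
    # Expand (symbol, count) pairs back into the original string.
    return "".join(symbol * count for symbol, count in pairs)

def lzw_encode(text, alphabet):
    # Toy LZW encoder (illustrative): the dictionary starts with the single symbols
    # in `alphabet` and learns one new phrase for every unseen substring.
    dictionary = {ch: i for i, ch in enumerate(alphabet)}
    phrase, codes = "", []
    for ch in text:
        candidate = phrase + ch
        if candidate in dictionary:
            phrase = candidate                       # keep extending the known phrase
        else:
            codes.append(dictionary[phrase])         # emit code for the longest known phrase
            dictionary[candidate] = len(dictionary)  # learn the new phrase
            phrase = ch
    if phrase:
        codes.append(dictionary[phrase])
    return codes

if __name__ == "__main__":
    sample = "aaabbbbccaaa"
    runs = rle_encode(sample)
    print(runs)                                   # [('a', 3), ('b', 4), ('c', 2), ('a', 3)]
    assert rle_decode(runs) == sample
    print(lzw_encode("abababab", alphabet="ab"))  # [0, 1, 2, 4, 1]

The contrast mirrors the survey's split: statistical coders such as Run Length Encoding, Huffman coding, or arithmetic coding exploit symbol statistics, while LZ77/LZ78 descendants replace repeated phrases with references to a dictionary built from the data itself.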