Compressive Sensing Based Image Codec With Partial Pre-Calculation

Published in: IEEE Transactions on Multimedia, Vol. 26, pp. 4871-4883
Main authors: Xu, Jiayao; Yang, Jian; Kimishima, Fuma; Taniguchi, Ittetsu; Zhou, Jinjia
Format: Journal Article
Language: English
Publication details: Piscataway: IEEE, 01.01.2024
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 1520-9210, 1941-0077
Description
Summary: Compressive Sensing (CS) surpasses the limitations of the sampling theorem by reducing signal dimensions during sampling. Recent works integrate measurement coding into CS to improve the compression ratio, but they significantly degrade image quality and make both encoding and decoding time-consuming. This article proposes a Compressive Sensing based Image Codec with Partial Pre-calculation (CSCP) to solve these issues. The CSCP separates the original reconstruction procedure into two parts: reconstructing the frequency domain data and the inverse calculation. Owing to the structure of the chosen deterministic sensing matrix, the complex reconstruction procedure is reduced to two matrix multiplications, resulting in a low time cost. Moreover, the reconstruction process can be further optimized by moving the frequency domain data reconstruction to the encoder, referred to as the partial pre-calculation process, and then compressing the sparse data in the frequency domain. This approach has two main benefits: 1) it reduces the complexity of the decoder, and 2) it degrades quality less than existing measurement coding methods. Additionally, this work proposes the One-Row-Two-Tables strategy for defining Huffman coding units, which leverages the quantized data distribution to improve compression efficiency while maintaining low complexity. In the decoder, the sequence of operations is Huffman decoding, dequantization, and inverse calculation. Compared to the state-of-the-art, this work decreases bpp by 22.61% while increasing quality by 17.72%. Meanwhile, it achieves speedups of up to 649.13× on the encoder, 11.03× on the decoder, and 288.46× in total.
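The abstract's central idea, that reconstruction collapses to two matrix multiplications when a suitable deterministic sensing matrix is used, can be illustrated with a toy construction. The sketch below is our own simplified example, not the paper's actual sensing matrix or codec: it builds a deterministic sensing matrix from low-frequency rows of an orthonormal DCT, so that step 1 (recovering frequency-domain data) and step 2 (the inverse calculation) are each a single matrix multiply.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    F = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    F[0, :] = 1.0 / np.sqrt(n)
    return F

n, m = 16, 8                        # signal length, number of measurements
F = dct_matrix(n)
A = F[:m, :]                        # toy deterministic sensing matrix: low-freq DCT rows

# Toy signal that is sparse in the DCT domain (low-frequency components only).
x = 3.0 * F[1, :] + 1.5 * F[3, :]

y = A @ x                           # encoder: sampling is one matrix multiply

# Decoder step 1: "reconstructing the frequency domain data".
# Because F is orthonormal and A consists of its first m rows, the
# measurements are exactly the first m DCT coefficients of x.
coeffs = np.zeros(n)
coeffs[:m] = y

# Decoder step 2: the "inverse calculation" -- a second matrix multiply.
x_hat = F.T @ coeffs

print(np.allclose(x, x_hat))        # exact here because x has only low-freq content
```

In this simplified setting recovery is exact because the signal's support lies entirely in the sampled frequency rows; the paper additionally quantizes and Huffman-codes the (pre-calculated) frequency-domain data before transmission, which this sketch omits.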
DOI:10.1109/TMM.2023.3327534