Faster and Stronger Lossless Compression with Optimized Autoregressive Framework



Detailed bibliography
Published in: 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1-6
Main authors: Mao, Yu; Li, Jingzong; Cui, Yufei; Xue, Jason Chun
Format: Conference paper
Language: English
Publisher: IEEE, 09.07.2023
Description
Summary: The neural AutoRegressive (AR) framework has recently been applied in general-purpose lossless compression to improve compression performance. However, this paper finds that directly applying the original AR framework causes the duplicated processing problem and the in-batch distribution variation problem, which lead to deteriorated compression performance. The key to addressing the duplicated processing problem is to disentangle the processing of the history symbol set at the input side. Two new types of neural blocks are first proposed: an individual-block performs separate feature extraction on each history symbol, while a mix-block models the correlation between extracted features and estimates the probability. A progressive AR-based compression framework (PAC) is then proposed, which requires only one history symbol from the host at a time rather than the whole history symbol set. In addition, a trainable matrix multiplication is introduced to model the ordered importance of history symbols, replacing the previous hardware-unfriendly Gumbel-Softmax sampling. The in-batch distribution variation problem is caused by AR-based compression's structured batch construction. Based on this observation, a batch-location-aware individual block is proposed to capture the heterogeneous in-batch distributions precisely, improving performance without efficiency loss. Experimental results show the proposed framework achieves an average 130% speed improvement with an average 3% compression ratio gain across data domains compared to the state-of-the-art.
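The pipeline the abstract describes (per-symbol individual blocks, a trainable ordered-importance matrix in place of Gumbel-Softmax sampling, and a mix block that estimates the next-symbol probability) could be sketched roughly as below. This is a minimal illustrative sketch, not the paper's actual architecture: the alphabet size, history length, feature dimension, and the use of simple embedding/linear layers are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHABET = 256  # byte-level symbols (assumed)
HISTORY = 4     # number of history symbols (assumed)
DIM = 16        # per-symbol feature dimension (assumed)

# Individual block: separate feature extraction for each history symbol.
# A per-position embedding table stands in for the paper's neural block.
embed = rng.normal(0.0, 0.1, size=(HISTORY, ALPHABET, DIM))

# Trainable matrix modelling the ordered importance of history positions,
# in place of hardware-unfriendly Gumbel-Softmax sampling.
importance = rng.normal(0.0, 0.1, size=(HISTORY, HISTORY))

# Mix block: correlates the extracted features and estimates probabilities.
mix_w = rng.normal(0.0, 0.1, size=(HISTORY * DIM, ALPHABET))

def predict(history):
    """Return a probability distribution over the next symbol."""
    # (HISTORY, DIM): one feature vector per history symbol, extracted
    # independently -- avoiding re-processing the whole history set.
    feats = np.stack([embed[i, s] for i, s in enumerate(history)])
    feats = importance @ feats           # ordered-importance weighting
    logits = feats.reshape(-1) @ mix_w   # mix block: joint correlation
    p = np.exp(logits - logits.max())    # numerically stable softmax
    return p / p.sum()

p = predict([10, 20, 30, 40])  # a valid distribution over 256 symbols
```

The resulting distribution would feed an entropy coder (e.g. arithmetic coding) to emit the compressed bitstream; in the progressive framework, only the newest history symbol needs to be passed in per step, with earlier features cached.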
DOI:10.1109/DAC56929.2023.10247866