Lossy volume compression using Tucker truncation and thresholding

Detailed bibliography
Published in: The Visual Computer, Volume 32, Issue 11, pp. 1433-1446
Main authors: Ballester-Ripoll, Rafael; Pajarola, Renato
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.11.2016 (Springer Nature B.V.)
ISSN: 0178-2789, 1432-2315
Description
Summary: Tensor decompositions, in particular the Tucker model, are a powerful family of techniques for dimensionality reduction and are being increasingly used for compactly encoding large multidimensional arrays, images and other visual data sets. In interactive applications, volume data often needs to be decompressed and manipulated dynamically; when designing data reduction and reconstruction methods, several parameters must be taken into account, such as the achievable compression ratio, approximation error and reconstruction speed. Weighing these variables in an effective way is challenging, and here we present two main contributions to solve this issue for Tucker tensor decompositions. First, we provide algorithms to efficiently compute, store and retrieve good choices of tensor rank selection and decompression parameters in order to optimize memory usage, approximation quality and computational costs. Second, we propose a Tucker compression alternative based on coefficient thresholding and zigzag traversal, followed by logarithmic quantization on both the transformed tensor core and its factor matrices. In terms of approximation accuracy, this approach is theoretically and empirically better than the commonly used tensor rank truncation method.
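The abstract contrasts the standard rank-truncation scheme with the proposed coefficient-thresholding alternative. The sketch below is a minimal NumPy illustration of that contrast, not the authors' implementation: it uses a plain higher-order SVD as the Tucker decomposition, the keep_fraction parameter is our own illustrative naming, and the paper's zigzag traversal and logarithmic quantization steps are omitted.

    import numpy as np

    def unfold(T, mode):
        # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def mode_multiply(T, M, mode):
        # Multiply tensor T by matrix M along the given mode.
        return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

    def hosvd(T):
        # Full Tucker decomposition via higher-order SVD: factor matrices from
        # the left singular vectors of each unfolding, core by projection.
        factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
                   for m in range(T.ndim)]
        core = T
        for m, U in enumerate(factors):
            core = mode_multiply(core, U.T, m)
        return core, factors

    def reconstruct(core, factors):
        T = core
        for m, U in enumerate(factors):
            T = mode_multiply(T, U[:, :core.shape[m]], m)
        return T

    def truncate_ranks(core, factors, ranks):
        # Classical compression: keep only the leading `ranks` core slices
        # and the matching leading factor columns.
        idx = tuple(slice(r) for r in ranks)
        return core[idx], [U[:, :r] for U, r in zip(factors, ranks)]

    def threshold_core(core, keep_fraction):
        # Alternative: zero all but the largest-magnitude core coefficients.
        k = max(1, int(keep_fraction * core.size))
        cutoff = np.partition(np.abs(core).ravel(), -k)[-k]
        return np.where(np.abs(core) >= cutoff, core, 0.0)

    # Compare both schemes at (roughly) the same core-coefficient budget.
    rng = np.random.default_rng(0)
    V = rng.standard_normal((32, 32, 32))          # stand-in for a volume
    core, factors = hosvd(V)

    trunc_core, trunc_factors = truncate_ranks(core, factors, (16, 16, 16))
    err_trunc = np.linalg.norm(V - reconstruct(trunc_core, trunc_factors))

    sparse_core = threshold_core(core, keep_fraction=16**3 / 32**3)
    err_thresh = np.linalg.norm(V - reconstruct(sparse_core, factors))
    print(f"rank truncation: {err_trunc:.3f}, thresholding: {err_thresh:.3f}")

Because the factor matrices are orthogonal, the reconstruction error in either case is the norm of the discarded core coefficients; keeping the largest-magnitude coefficients minimizes that norm for a fixed coefficient count, whereas rank truncation discards a fixed block regardless of magnitude. This is the intuition behind the abstract's accuracy claim, though the paper's full scheme also accounts for storing the coefficient positions and quantizing the retained values.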
DOI: 10.1007/s00371-015-1130-y