Efficient structuring of the latent space for controllable data reconstruction and compression


Bibliographic Details
Published in: Graphics and Visual Computing, Vol. 7, p. 200059
Main Authors: Trunz, Elena; Weinmann, Michael; Merzbach, Sebastian; Klein, Reinhard
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.12.2022
ISSN: 2666-6294
Description
Summary: Explainable neural models have gained a lot of attention in recent years. However, conventional encoder–decoder models do not capture information regarding the importance of the involved latent variables and rely on a heuristic a priori specification of the dimensionality of the latent space, or on its selection based on multiple trainings. In this paper, we focus on the efficient structuring of the latent space of encoder–decoder approaches for explainable data reconstruction and compression. For this purpose, we leverage the concept of Shapley values to determine the contribution of the latent variables to the model's output and rank them in order of decreasing importance. As a result, truncating the latent dimensions to those that contribute the most to the overall reconstruction allows a trade-off between model compactness (i.e. dimensionality of the latent space) and representational power (i.e. reconstruction quality). In contrast to other recent autoencoder variants that incorporate a PCA-based ordering of the latent variables, our approach does not require time-consuming training processes and does not introduce additional weights. This makes our approach particularly valuable for compact representation and compression. We validate our approach on the examples of representing and compressing images as well as high-dimensional reflectance data.
•Analysis of the contribution and ranking of latent dimensions in encoder–decoder models.
•Novel method for specifying a suitable latent-space dimensionality.
•Theorem and proof of the optimality of the Shapley ordering in the linear case.
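The abstract's core idea — attributing reconstruction quality to individual latent dimensions via Shapley values and ranking them by decreasing contribution — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a toy linear decoder standing in for a trained network, estimates Shapley values by sampling random permutations (each dimension's marginal reduction in reconstruction error when it is "switched on"), and names like `shapley_ranking` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained decoder: a linear map from latent to data space,
# with columns scaled so latent dimensions differ in importance.
D_LATENT, D_DATA = 4, 16
W = rng.normal(size=(D_DATA, D_LATENT)) * np.array([3.0, 2.0, 1.0, 0.1])

def decode(z):
    return W @ z

def recon_error(z, x, active):
    # Zero out inactive latent dimensions before decoding (truncation mask).
    z_masked = np.where(active, z, 0.0)
    return np.sum((decode(z_masked) - x) ** 2)

def shapley_ranking(z, x, n_perm=200):
    """Monte-Carlo Shapley estimate: average marginal error reduction of
    each latent dimension over random activation orders."""
    d = z.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        perm = rng.permutation(d)
        active = np.zeros(d, dtype=bool)
        err_prev = recon_error(z, x, active)
        for j in perm:
            active[j] = True
            err_new = recon_error(z, x, active)
            phi[j] += err_prev - err_new  # marginal contribution of dim j
            err_prev = err_new
    phi /= n_perm
    # Rank dimensions by decreasing contribution to the reconstruction.
    return np.argsort(-phi), phi

z = rng.normal(size=D_LATENT)
x = decode(z)            # "ground-truth" sample the full latent code explains
order, phi = shapley_ranking(z, x)
```

By the efficiency property of Shapley values, the contributions sum (here exactly, since the game is the error reduction) to the total error explained by the full code; truncating to the first few indices in `order` then trades latent dimensionality against reconstruction quality, as described above.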
DOI: 10.1016/j.gvc.2022.200059