Understanding encoder–decoder structures in machine learning using information measures

Bibliographic Details
Published in: Signal Processing, Vol. 234, Art. 109983
Authors: Silva, Jorge F.; Faraggi, Victor; Ramirez, Camilo; Egaña, Alvaro; Pavez, Eduardo
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2025
ISSN: 0165-1684
Online Access: Full text
Description
Abstract: We present a theory of representation learning to model and understand the role of encoder–decoder design in machine learning (ML) from an information-theoretic angle. We use two main information concepts, information sufficiency (IS) and mutual information loss, to represent predictive structures in machine learning. Our first main result provides a functional expression that characterizes the class of probabilistic models consistent with an IS encoder–decoder latent predictive structure. This result formally justifies the encoder–decoder forward stages that many modern ML architectures adopt to learn latent (compressed) representations for classification. To illustrate IS as a realistic and relevant model assumption, we revisit some known ML concepts and present some interesting new examples: invariant, robust, sparse, and digital models. Furthermore, our IS characterization allows us to tackle the fundamental question of how much performance could be lost, using the cross-entropy risk, when a given encoder–decoder architecture is adopted in a learning setting. Here, our second main result shows that a mutual information loss quantifies the lack of expressiveness attributed to the choice of a (biased) encoder–decoder ML design. Finally, we address the problem of universal cross-entropy learning with an encoder–decoder design, where necessary and sufficient conditions are established to meet this requirement. In all these results, Shannon's information measures offer new interpretations and explanations for representation learning.

Highlights:
• A new theory of representation learning to understand encoder–decoder design.
• Information sufficiency to model and characterize predictive structures in learning.
• Shannon's information loss is proposed to measure the encoder's lack of expressiveness.
• New results for universal cross-entropy learning.
• On the appropriateness of digital encoders and the information bottleneck for learning.
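To make the abstract's two central quantities concrete, here is a minimal numerical sketch of our own (not code from the paper), assuming discrete variables with a known joint pmf. An encoder U = eta(X) is information sufficient for Y when I(U;Y) = I(X;Y), and the mutual information loss I(X;Y) - I(U;Y) quantifies the expressiveness the encoder gives up; the function mutual_information and the encoder eta below are names chosen for this illustration.

```python
import numpy as np

def mutual_information(p_xy: np.ndarray) -> float:
    """I(X;Y) in nats, computed from a joint pmf given as a 2-D array."""
    p_xy = p_xy / p_xy.sum()                 # normalize defensively
    p_x = p_xy.sum(axis=1, keepdims=True)    # marginal of X (rows)
    p_y = p_xy.sum(axis=0, keepdims=True)    # marginal of Y (columns)
    mask = p_xy > 0                          # skip zero-probability cells
    return float((p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum())

# Toy joint pmf of (X, Y): 4 input symbols, 2 class labels (made up for illustration).
p_xy = np.array([
    [0.20, 0.05],
    [0.20, 0.05],
    [0.05, 0.20],
    [0.05, 0.20],
])

# Deterministic encoder U = eta(X): merge x0,x1 -> u0 and x2,x3 -> u1.
eta = np.array([0, 0, 1, 1])
p_uy = np.zeros((2, 2))
for x, u in enumerate(eta):
    p_uy[u] += p_xy[x]                       # push joint mass through the encoder

i_xy = mutual_information(p_xy)
i_uy = mutual_information(p_uy)
print(f"I(X;Y)  = {i_xy:.4f} nats")
print(f"I(U;Y)  = {i_uy:.4f} nats")
print(f"MI loss = {i_xy - i_uy:.4f} nats")   # 0 here: eta is information sufficient
```

Because eta only merges inputs that share the same posterior P(Y | X = x), the printed loss is zero and no predictive information is discarded; merging symbols with different posteriors would make the loss strictly positive, matching the reading of mutual information loss as the encoder's lack of expressiveness.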
DOI: 10.1016/j.sigpro.2025.109983