Learned Greedy Method (LGM): A novel neural architecture for sparse coding and beyond


Detailed description

Bibliographic details
Published in: Journal of Visual Communication and Image Representation, Vol. 77, Art. 103095
Main authors: Khatib, Rajaei; Simon, Dror; Elad, Michael
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.05.2021
ISSN: 1047-3203, 1095-9076
Online access: Full text
Description
Abstract: The fields of signal and image processing have been deeply influenced by the introduction of deep neural networks. Despite their impressive success, the architectures used in these solutions come with no clear justification, being “black box” machines that lack interpretability. A constructive remedy to this drawback is a systematic design of networks by unfolding well-understood iterative algorithms. A popular representative of this approach is LISTA, evaluating sparse representations of processed signals. In this paper, we revisit this task and propose an unfolded version of a greedy pursuit algorithm for the same goal. More specifically, we concentrate on the well-known OMP algorithm, and introduce its unfolded and learned version. Key features of our Learned Greedy Method (LGM) are the ability to accommodate a dynamic number of unfolded layers, and a stopping mechanism based on representation error. We develop several variants of the proposed LGM architecture and demonstrate their flexibility and efficiency.

Highlights:
- Unfolding greedy sparse pursuit algorithms into deep neural networks, known as LGM.
- Most of LGM's features are well justified from a sparse representation point of view.
- Learning the parameters of LGM in a supervised fashion via back-propagation.
- Demonstrating LGM's capabilities in various experiments.
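For orientation, the following is a minimal NumPy sketch of the classical (non-learned) OMP pursuit that the paper unfolds, including the representation-error stopping rule mentioned in the abstract. This is an illustration of the standard algorithm, not the authors' code; the function name, tolerance, and dictionary setup are all assumptions.

```python
import numpy as np

def omp(D, y, eps=1e-6, k_max=None):
    """Greedy Orthogonal Matching Pursuit: repeatedly pick the atom most
    correlated with the residual, re-fit on the chosen support via least
    squares, and stop once the representation error drops below eps."""
    n_atoms = D.shape[1]
    k_max = k_max or n_atoms
    residual = y.copy()
    support = []
    x = np.zeros(n_atoms)
    while len(support) < k_max and np.linalg.norm(residual) > eps:
        # Atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:  # no new atom improves the fit
            break
        support.append(j)
        # Least-squares re-fit of the coefficients on the active support
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D @ x
    return x, support
```

In LGM, each iteration of this loop becomes one network layer whose parameters (e.g., the dictionary) are learned by back-propagation, and the error-based `while` condition is what allows a dynamic number of unfolded layers per input.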
DOI:10.1016/j.jvcir.2021.103095