Learning Image Formation and Regularization in Unrolling AMP for Lensless Image Reconstruction

Bibliographic Details
Published in: IEEE Transactions on Computational Imaging, Vol. 8, pp. 479-489
Main Authors: Yang, Jingyu; Yin, Xiangjun; Zhang, Mengxi; Yue, Huihui; Cui, Xingyu; Yue, Huanjing
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
ISSN: 2573-0436, 2333-9403
DOI: 10.1109/TCI.2022.3181473
Description
Summary: This paper proposes an unrolled, learnable approximate message passing recurrent neural network (ULAMP-Net) for lensless image reconstruction. By unrolling the optimization iterations, key modules and parameters are made learnable to achieve high reconstruction quality. Specifically, observation matrices are rectified on the fly through network learning to suppress systematic errors in the measurement of the point spread function. We devise a domain transformation structure to achieve a more powerful representation and propose a learnable multistage threshold function to accommodate a much richer family of priors with only a small number of parameters. Finally, we introduce a multi-layer perceptron (MLP) module to enhance the input and an attention mechanism as an output module to refine the final results. Experimental results on a display-captured dataset and real-scene data demonstrate that, compared with state-of-the-art methods, our method achieves the best reconstruction quality on the test set with low computational complexity and a tiny model size. Our code will be released at https://github.com/Xiangjun-TJU/ULAMP-NET.
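The record carries no code, but the abstract's core idea, unrolling an AMP-style iteration so that the observation matrix and the shrinkage (threshold) function become trainable, can be sketched compactly. The following is a minimal, non-authoritative PyTorch sketch under stated assumptions: all names (`LearnableMultistageThreshold`, `UnrolledAMPStage`, `num_stages`) are hypothetical, the Onsager correction of true AMP and the paper's domain transformation, MLP enhancement, and attention modules are omitted, and the released code at https://github.com/Xiangjun-TJU/ULAMP-NET remains the authoritative implementation.

```python
# Minimal sketch (hypothetical names; simplified relative to the paper):
# one unrolled AMP-style stage with a learnable observation matrix and a
# learnable multistage threshold function.
import torch
import torch.nn as nn

class LearnableMultistageThreshold(nn.Module):
    """Learnable piecewise shrinkage: a few trainable knots and slopes
    replace the fixed soft threshold of classical AMP, approximating a
    richer family of priors with very few parameters."""
    def __init__(self, num_stages: int = 4):
        super().__init__()
        self.thresholds = nn.Parameter(torch.linspace(0.1, 1.0, num_stages))
        self.slopes = nn.Parameter(torch.ones(num_stages))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.zeros_like(x)
        for t, s in zip(self.thresholds, self.slopes):
            # each stage adds a soft-threshold-like component
            out = out + s * torch.sign(x) * torch.clamp(x.abs() - t, min=0.0)
        return out / len(self.thresholds)

class UnrolledAMPStage(nn.Module):
    """One unrolled iteration: a data-fidelity step with a trainable
    (rectifiable) observation matrix, followed by learned shrinkage."""
    def __init__(self, m: int, n: int):
        super().__init__()
        # In the paper the matrix would be initialized from the measured PSF;
        # making it a Parameter lets training rectify systematic PSF errors.
        self.A = nn.Parameter(0.01 * torch.randn(m, n))
        self.step = nn.Parameter(torch.tensor(1.0))
        self.shrink = LearnableMultistageThreshold()

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        residual = y - x @ self.A.t()           # measurement residual
        x = x + self.step * residual @ self.A   # gradient-style update
        return self.shrink(x)                   # learned prior via shrinkage

# Toy usage: 64 measurements of a 256-dimensional vectorized image.
stage = UnrolledAMPStage(m=64, n=256)
y = torch.randn(8, 64)     # batch of measurements
x = torch.zeros(8, 256)    # initial estimate
for _ in range(5):         # a fixed, small number of unrolled iterations
    x = stage(x, y)
```

In a real unrolled network each iteration would be a separate stage with its own weights, which is what lets the trained network trade a small, fixed iteration count for learned per-stage step sizes and thresholds instead of hand-tuned ones.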