State-of-the-Art Approaches for Image Deconvolution Problems, including Modern Deep Learning Architectures

Detailed Bibliography
Published in: Micromachines (Basel), Vol. 12, No. 12, p. 1558
Main Authors: Makarkin, Mikhail; Bratashov, Daniil
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 14 Dec 2021
ISSN: 2072-666X
Description
Summary: In modern digital microscopy, deconvolution methods are widely used to eliminate a number of image defects and to increase resolution. In this review, we divide these methods into classical, deep learning-based, and optimization-based approaches. Special attention is paid to deep learning as the most powerful and flexible modern approach; the review describes the major neural network architectures applied to the deconvolution problem, such as convolutional and generative adversarial networks, autoencoders, various forms of recurrent networks, and the attention mechanism. We describe the difficulties in their application, such as the discrepancy between standard loss functions and visual content, and the heterogeneity of the images. Next, we examine how to address these difficulties by introducing new loss functions, multiscale learning, and prior knowledge of visual content. In conclusion, a review of promising directions for the further development of deconvolution methods in microscopy is given.
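As context for the classical deconvolution methods the abstract mentions, the following is a minimal illustrative sketch of frequency-domain Wiener deconvolution using NumPy. It is an assumed example for orientation only, not code from the article; the function name, the synthetic test image, and the regularization constant `k` are all hypothetical choices.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Classical Wiener deconvolution in the frequency domain.

    blurred : 2-D observed image
    psf     : point-spread function, same shape as `blurred`, centered at the origin
    k       : noise-to-signal power ratio (regularization constant)
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    # Wiener filter conj(H) / (|H|^2 + k): the constant k keeps the
    # division stable where H is near zero, limiting noise amplification.
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))

# Demo: blur a synthetic image with a Gaussian PSF, then restore it.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0  # bright square on dark background

y, x = np.mgrid[:64, :64]
psf = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
psf = np.fft.ifftshift(psf)  # move PSF center to the origin for FFT convolution

blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, k=1e-3)
```

The deep learning-based methods surveyed in the review can be viewed as learned replacements for this hand-designed inverse filter, trading the explicit PSF model for architectures and loss functions fitted to the image statistics.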
DOI: 10.3390/mi12121558