A Deep Convolutional Encoder–Decoder–Restorer Architecture for Image Deblurring


Published in: Neural Processing Letters, Vol. 56, No. 1, p. 27
Main Authors: Fan, Yiqing; Hong, Chaoqun; Zeng, Guanghui; Liu, Lijuan
Format: Journal Article
Language: English
Published: New York: Springer US, 11.02.2024
Springer Nature B.V.
ISSN: 1370-4621, 1573-773X
Description
Summary: Blurred images reduce the accuracy of many computer vision tasks, so deblurring is important. A common multi-stage network can capture more image details, but its computational complexity is higher than that of a single-stage network; a single-stage network, however, cannot capture multi-scale information well. To tackle this problem, a novel convolutional encoder–decoder–restorer architecture is proposed. In this architecture, a multi-scale input structure is used in the encoder, and an improved supervised attention module is inserted into the encoder for enhanced feature acquisition. In the decoder, an information supplement block is proposed to fuse multi-scale features. Finally, the fused features are used for image recovery in the restorer. To optimise the model in multiple domains, the loss function is calculated separately in the spatial and frequency domains. Our method is compared with existing methods on the GOPRO dataset. In addition, to verify the applications of the proposed method, we conduct experiments on the Real image dataset, the VOC2007 dataset and the LFW dataset. Experimental results show that our proposed method outperforms state-of-the-art deblurring methods and improves the accuracy of different vision tasks.
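The dual-domain loss mentioned in the abstract (spatial plus frequency terms) might be sketched roughly as follows. This is a hypothetical illustration only: the paper's exact loss terms, norms, and weighting are not given in this record, so the L1 formulation, the FFT-magnitude comparison, and the `freq_weight` parameter are assumptions.

```python
import numpy as np

def dual_domain_loss(restored, sharp, freq_weight=0.1):
    """Hypothetical sketch of a spatial + frequency-domain loss.

    `restored` and `sharp` are same-shaped image arrays; `freq_weight`
    is an assumed balancing coefficient, not taken from the paper.
    """
    # Spatial term: mean absolute error between the two images.
    spatial = np.mean(np.abs(restored - sharp))
    # Frequency term: mean absolute error between 2-D FFT magnitudes.
    f_restored = np.fft.fft2(restored)
    f_sharp = np.fft.fft2(sharp)
    freq = np.mean(np.abs(np.abs(f_restored) - np.abs(f_sharp)))
    return spatial + freq_weight * freq

# Identical images incur zero loss; any mismatch is penalised in
# both the pixel and frequency domains.
img = np.random.rand(32, 32)
print(dual_domain_loss(img, img))  # → 0.0
```

Computing the second term on FFT magnitudes is one common way to make a frequency-domain loss sensitive to blur, since blurring suppresses high-frequency content.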
DOI:10.1007/s11063-024-11455-w