Unrolling a rain-guided detail recovery network for single image deraining

Saved in:
Detailed bibliography
Title: Unrolling a rain-guided detail recovery network for single image deraining
Authors: Kailong Lin, Shaowei Zhang, Yu Luo, Jie Ling
Source: Virtual Reality & Intelligent Hardware, Vol 5, Iss 1, Pp 11-23 (2023)
Publisher information: Elsevier BV, 2023.
Year of publication: 2023
Subjects: TK7885-7895, Computer engineering. Computer hardware, Rain attention, Context aggregation attention, Unrolling network, Detail recovery, 0211 other engineering and technologies, 0202 electrical engineering, electronic engineering, information engineering, 02 engineering and technology, Image deraining
Description: Owing to the rapid development of deep networks, single image deraining tasks have achieved significant progress. Various architectures have been designed to recursively or directly remove rain, and most rain streaks can be removed by existing deraining methods. However, many of them cause a loss of details during deraining, resulting in visual artifacts. To resolve the detail-losing issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single image deraining based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single image deraining task into two subproblems: rain extraction and detail recovery. Specifically, first, a context aggregation attention network is introduced to effectively extract rain streaks, and then, a rain attention map is generated as an indicator to guide the detail-recovery process. For the detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder–decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach achieves competitive performance in comparison with other state-of-the-art methods.
Document type: Article
Language: English
ISSN: 2096-5796
DOI: 10.1016/j.vrih.2022.06.002
Access URL: https://doaj.org/article/df40d2aa1f6c45aba954e9f391248279
Rights: CC BY-NC-ND
Accession number: edsair.doi.dedup.....c38b98b963528d7fa93e21a5f1a7e8a9
Database: OpenAIRE
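As a rough illustration of the unrolled two-stage pipeline the abstract describes (rain extraction, then attention-guided detail recovery), the sketch below uses plain Python arithmetic as a stand-in for the paper's learned sub-networks. All function names, the additive rain model O = B + R, and the numeric values are hypothetical assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical stand-in for URDRN's two subproblems; the real method uses
# a context aggregation attention network and an encoder-decoder, not these
# simple array operations.

def extract_rain(rainy, background):
    """Stage 1 stand-in: rain layer under an assumed additive model O = B + R."""
    return [max(o - b, 0.0) for o, b in zip(rainy, background)]

def rain_attention(rain, eps=1e-8):
    """Normalize rain magnitude to [0, 1]; high values mark rain-corrupted
    regions, which the paper observes are also the most detail-degraded."""
    peak = max(rain) + eps
    return [r / peak for r in rain]

def recover_details(coarse, residual, attention):
    """Stage 2 stand-in: add recovered detail only where attention is high."""
    return [c + a * d for c, a, d in zip(coarse, attention, residual)]

# Toy 1-D "image": pixel intensities with rain at positions 1 and 3
rainy = [0.2, 0.9, 0.3, 0.8]
coarse = [0.2, 0.4, 0.3, 0.3]        # assumed coarse derained estimate
rain = extract_rain(rainy, coarse)
attn = rain_attention(rain)
restored = recover_details(coarse, [0.1] * 4, attn)
```

The point of the sketch is the data flow: the extracted rain layer is reused as an attention signal, so detail is restored selectively in rain-corrupted regions rather than uniformly across the image.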