Single-frame multi-exposure image fusion via narrowband filter decoupled imaging

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Volume 625, p. 129441
Main Authors: Zhao, Zhuang; Ke, Xin; Han, Jing; Wu, Zijian; Lu, Jun; Bai, Lianfa; Gong, Shuaifeng; Zhang, Yan; Peng, Yong; Xiong, Fengchao; Wei, Duan
Format: Journal Article
Language: English
Published: Elsevier B.V., 07.04.2025
ISSN: 0925-2312
Description
Summary: Multi-exposure image fusion (MEF) can efficiently enhance the dynamic range of an image, breaking through the physical imaging limitations inherent in photoelectric sensors. However, the single-camera multi-exposure method requires additional acquisition time, while the multi-camera single-exposure approach is constrained by imaging conditions. In this paper, we propose a single-frame decoupled imaging method that acquires multiple differently exposed images from a single exposure captured by one color camera. The method leverages the physical imaging process of a color camera, decoupling the narrowband-filtered RAW data into multiple exposure images by exploiting the variations in quantum efficiency distributions among the color channels. Based on this approach, we construct a decomposed single-frame (DSF) image dataset. The image sequences in this dataset are naturally spatio-temporally consistent and no longer require registration. Furthermore, a decomposed single-frame MEF network, termed DSF-MEF, is proposed; it employs a hierarchical encoder-decoder structure to predict exposure weight maps. Specifically, we design a residual mixed attention module (RMAM) for exposure weight prediction, which uses channel- and spatial-domain attention mechanisms together with residual skip connections to perform feature extraction. Subsequently, to improve the overall spatial continuity of the exposure weight map sequence, we construct a multiscale feature integration module (MFIM) to capture exposure information at different resolution scales. A loss function composed of image structural similarity, gradient texture similarity, and pixel intensity terms is designed to comprehensively optimize fusion performance. Experimental results show that our method not only achieves single-frame HDR fusion imaging, but also produces better visual fusion results than other advanced methods.
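
To make the decoupling idea concrete, the following is a minimal sketch (not the authors' code; the sensor layout, RGGB mosaic pattern, and processing steps are assumptions for illustration). Under a narrowband filter, each color channel's quantum efficiency at the pass wavelength acts as a fixed gain, so the R, G, and B sub-mosaics of a single RAW frame behave like differently exposed captures of the same instant:

```python
import numpy as np

def decouple_raw_rggb(raw: np.ndarray) -> list[np.ndarray]:
    """Split a narrowband-filtered RGGB Bayer frame into three
    channel images that act as different exposures of one instant.

    Sketch only: the paper's actual sensor layout and pipeline are
    not given in the abstract. Because all three sub-mosaics come
    from one exposure, the resulting stack is spatio-temporally
    consistent and needs no registration.
    """
    r = raw[0::2, 0::2]                             # R photosites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0   # average of the two G photosites
    b = raw[1::2, 1::2]                             # B photosites
    # The effective under/normal/over-exposure ordering depends on
    # each channel's quantum efficiency at the filter's pass band.
    return [r, g, b]
```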
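The abstract describes RMAM only at the level of channel- and spatial-domain attention plus residual skip connections. The sketch below shows one generic block of that shape (layer sizes, pooling choices, and composition are assumptions; the paper's actual RMAM may differ):

```python
import torch
import torch.nn as nn

class MixedAttentionSketch(nn.Module):
    """Generic channel + spatial attention block with a residual skip,
    illustrating the structure the abstract attributes to RMAM."""
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        # Channel-domain attention: squeeze-and-excitation style gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        # Spatial-domain attention: a 7x7 conv over pooled channel stats.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.conv(x)
        f = f * self.channel_gate(f)              # reweight channels
        stats = torch.cat([f.mean(1, keepdim=True),
                           f.amax(1, keepdim=True)], dim=1)
        f = f * self.spatial_gate(stats)          # reweight spatial locations
        return x + f                              # residual skip connection
```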
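The abstract names three loss terms but not how they are combined. A common formulation, assuming a simple weighted sum with hyperparameters lambda that the abstract does not specify, would be

\mathcal{L} = \lambda_{\mathrm{ssim}}\,\bigl(1 - \mathrm{SSIM}(F, R)\bigr) + \lambda_{\mathrm{grad}}\,\lVert \nabla F - \nabla R \rVert_{1} + \lambda_{\mathrm{int}}\,\lVert F - R \rVert_{1},

where F is the fused image and R a reference derived from the exposure stack; the first term enforces structural similarity, the second gradient texture similarity, and the third pixel intensity fidelity.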
DOI: 10.1016/j.neucom.2025.129441