Single-frame multi-exposure image fusion via narrowband filter decoupled imaging
| Published in: | Neurocomputing (Amsterdam) Vol. 625; p. 129441 |
|---|---|
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 07.04.2025 |
| ISSN: | 0925-2312 |
| Summary: | Multi-exposure image fusion (MEF) can efficiently enhance the dynamic range of images, breaking through the physical imaging limitations inherent in photoelectric sensors. However, the single-camera multi-exposure method requires additional capture time, while the multi-camera single-exposure approach is constrained by imaging conditions. In this paper, we propose a single-frame decoupled imaging method that acquires multiple differently exposed images from a single exposure captured by one color camera. The method leverages the physical imaging process of a color camera, decoupling the narrowband-filtered RAW data into multiple exposure images by exploiting the variations in quantum efficiency distributions. Based on this approach, we construct a decomposed single-frame (DSF) image dataset. The image sequences within this dataset are naturally spatio-temporally consistent and no longer require registration. Furthermore, a decomposed single-frame MEF network, termed DSF-MEF, is proposed, which employs a hierarchical encoder-decoder structure to predict exposure weight mappings. Specifically, we design a residual mixed attention module (RMAM) for exposure weight prediction; it uses channel- and spatial-domain attention mechanisms and residual skip connections to perform feature extraction. Subsequently, to improve the overall spatial continuity of the exposure weight map sequence, we construct a multiscale feature integration module (MFIM) to capture exposure information at different resolution scales. A loss function composed of image structural similarity, gradient texture similarity, and pixel intensity terms is designed to comprehensively optimize fusion performance. Experimental results show that our method not only achieves single-frame HDR fusion imaging but also achieves better visual fusion quality than other advanced methods. |
| DOI: | 10.1016/j.neucom.2025.129441 |
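
To make the decoupling idea in the abstract concrete, here is a minimal sketch: under a narrowband filter, the R, G, and B photosites of a color sensor see the same scene but with different quantum efficiencies, so the demosaiced channel planes behave like a registration-free multi-exposure sequence. The RGGB Bayer layout is an assumption; the paper's actual sensor layout and channel handling are not specified in the abstract.

```python
import numpy as np

def decouple_raw(raw: np.ndarray) -> list[np.ndarray]:
    """Split an RGGB Bayer mosaic into three differently exposed images."""
    raw = raw.astype(np.float32)
    r = raw[0::2, 0::2]                            # red photosites
    g = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])  # average of the two greens
    b = raw[1::2, 1::2]                            # blue photosites
    # Under a narrowband filter all three channels image the same scene, but
    # their different quantum efficiencies scale the signal differently, so
    # the planes act like a multi-exposure sequence that is spatio-temporally
    # consistent by construction.
    return [r, g, b]

# Example on a synthetic 4x4 RAW frame (12-bit values).
raw = np.random.randint(0, 4096, size=(4, 4)).astype(np.uint16)
exposures = decouple_raw(raw)
```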
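The abstract names the ingredients of the RMAM (channel attention, spatial attention, residual skip connection) without giving layer-level details. The following PyTorch sketch is one plausible composition of those ingredients, not the paper's exact module; layer sizes and the CBAM-style attention form are assumptions.

```python
import torch
import torch.nn as nn

class RMAM(nn.Module):
    """Sketch of a residual mixed attention module (assumed composition)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze global context, excite per-channel weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 7x7 conv over pooled channel statistics.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.conv(x)
        feat = feat * self.channel_att(feat)
        stats = torch.cat([feat.mean(dim=1, keepdim=True),
                           feat.amax(dim=1, keepdim=True)], dim=1)
        feat = feat * self.spatial_att(stats)
        return x + feat  # residual skip connection

# Shape-preserving: a (1, 32, 64, 64) feature map maps to the same shape.
y = RMAM(32)(torch.randn(1, 32, 64, 64))
```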
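For the MFIM, the abstract only states that exposure information is captured at different resolution scales to keep the exposure weight maps spatially coherent. A pyramid-pooling-style integration, sketched below, is one common way to do this; the actual scales and fusion scheme in the paper are assumptions here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFIM(nn.Module):
    """Sketch of multiscale feature integration (pyramid-pooling form assumed).

    `channels` is assumed divisible by the number of scales.
    """

    def __init__(self, channels: int, scales=(1, 2, 4, 8)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels // len(scales), 1) for _ in scales
        )
        self.fuse = nn.Conv2d(channels * 2, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        feats = [x]
        for s, conv in zip(self.scales, self.branches):
            # Pool to a coarser resolution, process, and upsample back, so
            # each branch contributes exposure context at a different scale.
            pooled = F.adaptive_avg_pool2d(x, (max(h // s, 1), max(w // s, 1)))
            feats.append(F.interpolate(conv(pooled), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))
```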
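Finally, the abstract describes a loss combining structural similarity, gradient texture similarity, and pixel intensity terms. The sketch below uses standard formulations of each term against a single reference image; the paper's weights, exact formulations, and how the reference is built from the exposure sequence are not given in the abstract and are assumptions here.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2, win=11):
    """Mean SSIM with local statistics from uniform (average-pool) windows."""
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, 1, pad)
    mu_y = F.avg_pool2d(y, win, 1, pad)
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def gradients(img):
    # Forward differences as a simple stand-in for a gradient/texture operator.
    return img[..., :, 1:] - img[..., :, :-1], img[..., 1:, :] - img[..., :-1, :]

def fusion_loss(fused, reference, w_ssim=1.0, w_grad=1.0, w_int=1.0):
    dxf, dyf = gradients(fused)
    dxr, dyr = gradients(reference)
    l_ssim = 1.0 - ssim(fused, reference)                    # structure term
    l_grad = F.l1_loss(dxf, dxr) + F.l1_loss(dyf, dyr)       # texture term
    l_int = F.l1_loss(fused, reference)                      # intensity term
    return w_ssim * l_ssim + w_grad * l_grad + w_int * l_int
```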