Fusion-based variational image dehazing

Bibliographic Details
Title: Fusion-based variational image dehazing
Authors: Galdran, Adrian, Vazquez-Corral, Javier, Pardo, David, Bertalmío, Marcelo
Publisher Information: Institute of Electrical and Electronics Engineers (IEEE)
Publication Year: 2017
Collection: UPF Digital Repository (Universitat Pompeu Fabra, Barcelona)
Subject Terms: Color correction, Contrast enhancement, Image dehazing, Image fusion, Variational image processing
Description: We propose a novel image-dehazing technique based on the minimization of two energy functionals and a fusion scheme that combines the outputs of both optimizations. The proposed fusion-based variational image-dehazing (FVID) method is a spatially varying image enhancement process that first minimizes a previously proposed variational formulation maximizing contrast and saturation of the hazy input. The iterates produced by this minimization are kept, and a second energy, which shrinks the intensity values of well-contrasted regions faster, is then minimized; observing the shrinking rate yields a set of difference-of-saturation (DiffSat) maps. The iterates from the first minimization are then fused with these DiffSat maps to produce a haze-free version of the degraded input. The FVID method does not rely on a physical model from which to estimate a depth map, nor does it need a training stage on a database of human-labeled examples. Experimental results on a wide set of hazy images demonstrate that FVID better preserves the image structure in nearby regions that are less affected by fog, and it compares favorably with other current methods in the task of removing haze degradation from faraway regions. ; The work of J. Vazquez-Corral and M. Bertalmío was supported by the ERC Starting Grant 306337, by the FEDER Fund under Grant TIN2015-71537-P (MINECO/FEDER, UE), by the ICREA Academia Award, and by the Spanish government under Grant IJCI-2014-19516. The work of D. Pardo was supported by the Spanish government under Grant MTM2013-40824-P, by the BCAM Severo Ochoa accreditation of excellence SEV-2013-0323, and by the Basque Government CRG Grant IT649-13.
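Note: the fusion step described above amounts to a per-pixel weighted combination of the first-minimization iterates, with the DiffSat maps acting as weights. The following Python sketch illustrates that generic idea only, assuming the iterates and DiffSat maps have already been computed; the function name, normalization, and data layout are illustrative choices, not the exact FVID formulation from the paper.

```python
import numpy as np

def fuse_iterates(iterates, diffsat_maps, eps=1e-8):
    """Illustrative per-pixel weighted fusion of dehazing iterates.

    iterates:     list of HxWx3 float arrays (outputs of the first minimization)
    diffsat_maps: list of HxW float arrays (one DiffSat weight map per iterate)

    This is a generic fusion sketch, not the authors' exact method.
    """
    stack = np.stack(iterates, axis=0)              # K x H x W x 3
    weights = np.stack(diffsat_maps, axis=0)        # K x H x W
    weights = weights / (weights.sum(axis=0, keepdims=True) + eps)  # normalize per pixel
    return (weights[..., None] * stack).sum(axis=0)                 # fused H x W x 3 image
```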
Document Type: article in journal/newspaper
File Description: application/pdf
Language: English
Relation: IEEE Signal Processing Letters. 2017;24(2):151-155.; info:eu-repo/grantAgreement/EC/FP7/306337; info:eu-repo/grantAgreement/ES/1PE/TIN2015-71537-P; info:eu-repo/grantAgreement/ES/1PE/MTM2013-40824-P; http://hdl.handle.net/10230/32459; http://dx.doi.org/10.1109/LSP.2016.2643168
DOI: 10.1109/LSP.2016.2643168
Availability: http://hdl.handle.net/10230/32459
https://doi.org/10.1109/LSP.2016.2643168
Rights: © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The final published article can be found at http://dx.doi.org/10.1109/LSP.2016.2643168 ; info:eu-repo/semantics/openAccess
Accession Number: edsbas.5F3449CF
Database: BASE