A Perceptually Optimized and Self-Calibrated Tone Mapping Operator

Bibliographic Details
Title: A Perceptually Optimized and Self-Calibrated Tone Mapping Operator
Authors: Peibei Cao, Chenyang Le, Yuming Fang, Kede Ma
Source: IEEE Transactions on Visualization and Computer Graphics, vol. 31, pp. 8268-8282
Publication Status: Preprint
Publisher Information: Institute of Electrical and Electronics Engineers (IEEE), 2025.
Publication Year: 2025
Keywords: FOS: Computer and information sciences, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Computer Science - Computer Vision and Pattern Recognition, 0202 electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering, 02 engineering and technology, Electrical Engineering and Systems Science - Image and Video Processing
Description: With the increasing popularity and accessibility of high dynamic range (HDR) photography, tone mapping operators (TMOs) for dynamic range compression are in practical demand. In this paper, we develop a two-stage neural network-based TMO that is self-calibrated and perceptually optimized. In Stage one, motivated by the physiology of the early stages of the human visual system, we first decompose an HDR image into a normalized Laplacian pyramid. We then use two lightweight deep neural networks (DNNs), taking the normalized representation as input and estimating the Laplacian pyramid of the corresponding LDR image. We optimize the tone mapping network by minimizing the normalized Laplacian pyramid distance (NLPD), a perceptual metric aligned with human judgments of tone-mapped image quality. In Stage two, the input HDR image is self-calibrated to compute the final LDR image. We feed the same HDR image, rescaled to different maximum luminances, into the learned tone mapping network to generate a pseudo-multi-exposure image stack with varying detail visibility and color saturation. We then train another lightweight DNN to fuse the LDR image stack into a desired LDR image by maximizing a variant of the structural similarity index for multi-exposure image fusion (MEF-SSIM), which has been shown to be perceptually relevant to fused image quality. The proposed self-calibration mechanism through MEF enables our TMO to accept uncalibrated HDR images while remaining physiology-driven. Extensive experiments show that our method produces images with consistently better visual quality. Additionally, since our method builds upon three lightweight DNNs, it is among the fastest local TMOs.
15 pages, 17 figures
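The abstract above outlines a concrete pipeline: a normalized Laplacian pyramid front end, DNN tone mapping trained against NLPD, and a pseudo-multi-exposure fusion stage trained against MEF-SSIM. As a reading aid, the following is a minimal, hedged Python sketch of the Stage-one decomposition only; it is not the authors' implementation, and the log nonlinearity, filter taps, pyramid depth, and stabilizing constant `eps` are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code) of a normalized Laplacian pyramid:
# band-pass decomposition of HDR luminance with divisive normalization of each band
# by a local amplitude estimate, as described in the abstract's Stage one.
import numpy as np
from scipy.ndimage import convolve, zoom

# 5-tap binomial (approximately Gaussian) filter, a common choice for image pyramids.
_KERNEL = np.outer([1.0, 4.0, 6.0, 4.0, 1.0], [1.0, 4.0, 6.0, 4.0, 1.0]) / 256.0

def _blur(x):
    return convolve(x, _KERNEL, mode="reflect")

def normalized_laplacian_pyramid(luminance, levels=5, eps=0.1):
    """Return divisively normalized band-pass images plus a coarse low-pass residual."""
    # Crude early-vision luminance compression (assumption; the paper's exact
    # front-end nonlinearity may differ).
    current = np.log(np.asarray(luminance, dtype=np.float64) + 1e-6)
    pyramid = []
    for _ in range(levels - 1):
        low = _blur(current)
        down = low[::2, ::2]                                # decimate by 2
        up = _blur(zoom(down, 2.0, order=1))[: current.shape[0], : current.shape[1]]
        band = current - up                                 # Laplacian (band-pass) coefficients
        pyramid.append(band / (_blur(np.abs(band)) + eps))  # divisive normalization
        current = down
    pyramid.append(current)                                 # low-pass residual
    return pyramid

if __name__ == "__main__":
    hdr_luminance = np.random.rand(256, 384) * 1e4          # stand-in for real HDR data
    bands = normalized_laplacian_pyramid(hdr_luminance)
    print([b.shape for b in bands])
```

In the paper's pipeline, the tone mapping networks would map such a normalized HDR representation to the Laplacian pyramid of the LDR output; the Stage-two pseudo-multi-exposure generation and MEF-SSIM-guided fusion are omitted from this sketch.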
Publication Type: Article
ISSN: 2160-9306; 1077-2626
DOI: 10.1109/tvcg.2025.3566377
DOI (arXiv): 10.48550/arxiv.2206.09146
Access URL: https://pubmed.ncbi.nlm.nih.gov/40315085
http://arxiv.org/abs/2206.09146
Rights: IEEE Copyright
arXiv Non-Exclusive Distribution
Document Code: edsair.doi.dedup.....2769ce73e3ed63d522f4f7ff9cadbf05
Database: OpenAIRE