A Perceptually Optimized and Self-Calibrated Tone Mapping Operator
| Title: | A Perceptually Optimized and Self-Calibrated Tone Mapping Operator |
|---|---|
| Authors: | Peibei Cao, Chenyang Le, Yuming Fang, Kede Ma |
| Source: | IEEE Transactions on Visualization and Computer Graphics, vol. 31, pp. 8268-8282 |
| Publication Status: | Published (journal article); also available as an arXiv preprint |
| Publisher Information: | Institute of Electrical and Electronics Engineers (IEEE), 2025. |
| Publication Year: | 2025 |
| Subject Terms: | Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering |
| Description: | With the increasing popularity and accessibility of high dynamic range (HDR) photography, tone mapping operators (TMOs) for dynamic range compression are in growing practical demand. In this paper, we develop a two-stage neural network-based TMO that is self-calibrated and perceptually optimized. In the first stage, motivated by the physiology of the early stages of the human visual system, we decompose an HDR image into a normalized Laplacian pyramid. We then use two lightweight deep neural networks (DNNs), which take this normalized representation as input and estimate the Laplacian pyramid of the corresponding low dynamic range (LDR) image. We optimize the tone mapping network by minimizing the normalized Laplacian pyramid distance (NLPD), a perceptual metric that aligns with human judgments of tone-mapped image quality. In the second stage, the input HDR image is self-calibrated to compute the final LDR image: we feed the same HDR image, rescaled to different maximum luminances, to the learned tone mapping network, generating a pseudo-multi-exposure image stack with varying detail visibility and color saturation. We then train another lightweight DNN to fuse this LDR stack into the desired LDR image by maximizing a variant of the structural similarity index for multi-exposure image fusion (MEF-SSIM), a metric shown to be perceptually relevant to fused image quality. The proposed self-calibration mechanism through multi-exposure fusion enables our TMO to accept uncalibrated HDR images while remaining physiology-driven. Extensive experiments show that our method produces images with consistently better visual quality, and because it builds on three lightweight DNNs, it is among the fastest local TMOs. 15 pages, 17 figures. Illustrative code sketches of the two stages follow this record. |
| Document Type: | Article |
| ISSN: | 2160-9306 (electronic); 1077-2626 (print) |
| DOI: | 10.1109/tvcg.2025.3566377 (IEEE) |
| DOI (arXiv): | 10.48550/arxiv.2206.09146 |
| Access URLs: | PubMed: https://pubmed.ncbi.nlm.nih.gov/40315085 ; arXiv: http://arxiv.org/abs/2206.09146 |
| Rights: | IEEE copyright; arXiv non-exclusive distribution license |
| Accession Number: | edsair.doi.dedup.....2769ce73e3ed63d522f4f7ff9cadbf05 |
| Database: | OpenAIRE |
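The two sketches below are illustrative only; they are not the authors' code, and every filter, constant, and default value in them is an assumption made for the example. This first sketch shows, in Python, the kind of normalized Laplacian pyramid decomposition and NLPD-style distance that the first stage in the abstract refers to: each band-pass level of a Laplacian pyramid is divided by a local estimate of its own amplitude, and the distance between two such representations can serve as the training loss for the tone mapping network.

```python
# A minimal sketch (not the authors' code) of a normalized Laplacian pyramid
# and an NLPD-like distance. Filter taps, the number of levels, and the eps
# constant are assumptions for illustration, not values from the paper.
import numpy as np
from scipy.ndimage import convolve

# 5-tap binomial kernel, a common choice for pyramid construction.
_K1D = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
_K2D = np.outer(_K1D, _K1D)

def _blur(x):
    return convolve(x, _K2D, mode="reflect")

def normalized_laplacian_pyramid(lum, levels=5, eps=0.17):
    """Decompose a 2-D luminance map into a normalized Laplacian pyramid."""
    pyramid = []
    current = lum.astype(np.float64)
    for _ in range(levels - 1):
        low = _blur(current)
        band = current - low                      # band-pass (Laplacian) level
        norm = _blur(np.abs(band)) + eps          # local amplitude estimate
        pyramid.append(band / norm)               # divisive normalization
        current = low[::2, ::2]                   # downsample for the next level
    pyramid.append(current / (_blur(np.abs(current)) + eps))  # low-pass residual
    return pyramid

def nlpd(pyr_a, pyr_b):
    """Root-mean-square distance between two normalized pyramids (NLPD-like)."""
    per_level = [np.sqrt(np.mean((a - b) ** 2)) for a, b in zip(pyr_a, pyr_b)]
    return float(np.mean(per_level))
```

In training, the tone mapping network would be driven to minimize `nlpd(normalized_laplacian_pyramid(hdr_luminance), normalized_laplacian_pyramid(ldr_luminance))`; the exact filters, level count, and normalization constants used by the paper may differ.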
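This second sketch illustrates the self-calibration idea of the second stage: the same HDR image is rescaled to several assumed peak luminances, each copy is tone mapped, and the resulting pseudo-multi-exposure stack is fused into a single LDR image. In the paper the fusion is performed by a third DNN trained to maximize MEF-SSIM; the simple well-exposedness weighting below is only a stand-in so the sketch runs end to end, and the `tone_map` callable, the peak-luminance list, and `sigma` are hypothetical placeholders.

```python
# A minimal sketch (again, not the authors' implementation) of pseudo-
# multi-exposure stack generation and a stand-in fusion step.
import numpy as np

def pseudo_exposure_stack(hdr, tone_map, max_luminances=(100.0, 400.0, 1600.0, 6400.0)):
    """Rescale an HDR radiance map to several assumed peak luminances and tone
    map each copy with the provided `tone_map(hdr) -> ldr in [0, 1]` callable."""
    hdr = hdr / hdr.max()                         # normalize to [0, 1] first
    return [tone_map(hdr * peak) for peak in max_luminances]

def fuse_stack(stack, sigma=0.2):
    """Fuse an LDR stack with per-pixel well-exposedness weights (a stand-in
    for the learned, MEF-SSIM-optimized fusion network)."""
    stack = np.stack(stack, axis=0)               # (N, H, W) or (N, H, W, 3)
    gray = stack.mean(axis=-1) if stack.ndim == 4 else stack
    weights = np.exp(-((gray - 0.5) ** 2) / (2.0 * sigma ** 2))
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-8)
    if stack.ndim == 4:
        weights = weights[..., None]              # broadcast over color channels
    return (weights * stack).sum(axis=0)
```

For a quick test, `tone_map` could be a Reinhard-style curve such as `lambda x: x / (1.0 + x)`; in the pipeline described by the abstract, the learned first-stage tone mapping network would take that role.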