Perception-driven infrared super-resolution via degradation-aware alternating optimization GAN

Bibliographic Details
Published in: Infrared Physics & Technology, Vol. 152, p. 106217
Main Authors: Wang, Yu, Zhang, Lu, Yu, Quan, Yuan, Xin, Liu, Ying, Deng, Hao, Dai, Xinli, Zhang, Yueheng, Yang, Yao
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.01.2026
ISSN: 1350-4495
Description
Summary:
•A new framework improves infrared image clarity via an alternating optimization algorithm.
•Enhances visual quality using deep learning features and a generative adversarial network architecture.
•Trained on diverse infrared scenes for better generalization in real-world applications.
•Delivers superior super-resolution performance in both perceptual metrics and visual quality.

Infrared imaging technology, with its unique capability for thermal radiation detection, is widely used in fields such as military reconnaissance, medical diagnosis, industrial inspection, and aerospace. However, traditional infrared imaging systems are constrained by physical limitations and fail to meet the high-resolution demands of certain scenarios. Current infrared super-resolution models predominantly optimize pixel-level metrics, such as Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM), while neglecting the texture-level image quality perceived by humans. Moreover, they struggle with blind super-resolution scenarios involving unknown degradation kernels. To address these challenges, we propose AOGAN, a novel framework based on a generative adversarial network (GAN) architecture. The model replaces the conventional generator with an alternating optimization network comprising a Restorer and an Estimator, which collaboratively perform image reconstruction and degradation estimation to achieve more accurate blind super-resolution. Additionally, we integrate a pre-trained VGG-19 network to extract deep image features for perceptual loss (PL) computation. The synergistic integration of the PL function and the GAN framework further enhances visual fidelity and perceptual quality. AOGAN outperforms state-of-the-art models in infrared image super-resolution in both perceptual metrics and visual quality. Furthermore, we applied the model to inference on infrared images of diverse scenes captured by a FLIR T540 camera. The experimental results show that our model delivers excellent super-resolution reconstruction and leads on four no-reference evaluation metrics.
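
To make the described pipeline concrete, below is a minimal PyTorch-style sketch of a generator step that alternates between a Restorer and an Estimator and combines pixel, VGG-19 perceptual, and adversarial losses. It is an illustration under stated assumptions, not the authors' implementation: the module interfaces, the bicubic initialization, the number of alternations, the VGG-19 feature layer, and the loss weights are all placeholders.

# Minimal sketch (not the authors' code): alternating Restorer/Estimator
# generator step with pixel, VGG-19 perceptual, and adversarial losses.
# Interfaces, alternation count, VGG layer, and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights


class VGGPerceptualLoss(nn.Module):
    """Perceptual loss on frozen VGG-19 features (feature depth assumed)."""

    def __init__(self, last_layer: int = 35):
        super().__init__()
        self.features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:last_layer].eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, sr, hr):
        # Replicate single-channel infrared images to 3 channels for VGG-19.
        sr3, hr3 = sr.repeat(1, 3, 1, 1), hr.repeat(1, 3, 1, 1)
        return F.l1_loss(self.features(sr3), self.features(hr3))


def generator_loss(restorer, estimator, discriminator, perceptual_loss,
                   lr_img, hr_img, scale=4, n_alt=3,
                   lambda_perc=1.0, lambda_adv=5e-3):
    """Alternate degradation estimation and restoration, then score the result."""
    # Initial SR guess from plain bicubic upsampling (assumption).
    sr_img = F.interpolate(lr_img, scale_factor=scale, mode="bicubic")
    for _ in range(n_alt):
        kernel = estimator(lr_img, sr_img)   # estimate the degradation kernel
        sr_img = restorer(lr_img, kernel)    # reconstruct given that estimate

    pixel = F.l1_loss(sr_img, hr_img)                       # pixel fidelity
    perc = perceptual_loss(sr_img, hr_img)                  # perceptual (PL) term
    logits = discriminator(sr_img)                          # adversarial term
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return pixel + lambda_perc * perc + lambda_adv * adv

In this sketch, gradients flow back through every alternation, which is what would let the Restorer and Estimator train jointly against the combined objective; the relative weighting of the pixel, perceptual, and adversarial terms would follow the paper.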
DOI: 10.1016/j.infrared.2025.106217