Guidance Disentanglement Network for Optics-Guided Thermal UAV Image Super-Resolution
| Title: | Guidance Disentanglement Network for Optics-Guided Thermal UAV Image Super-Resolution |
|---|---|
| Authors: | Zhicheng Zhao, Juanjuan Gu, Chenglong Li, Chun Wang, Zhongling Huang, Jin Tang |
| Source: | ISPRS Journal of Photogrammetry and Remote Sensing. 228:64-82 |
| Publication Status: | Published (also available as an arXiv preprint) |
| Publisher Information: | Elsevier BV, 2025. |
| Publication Year: | 2025 |
| Subject Terms: | FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Computer Science - Computer Vision and Pattern Recognition, FOS: Electrical engineering, electronic engineering, information engineering, Electrical Engineering and Systems Science - Image and Video Processing |
| Description: | Optics-guided Thermal UAV image Super-Resolution (OTUAV-SR) has attracted significant research interest due to its potential applications in security inspection, agricultural measurement, and object detection. Existing methods often employ a single guidance model to generate guidance features from optical images to assist thermal UAV image super-resolution. However, a single guidance model struggles to generate effective guidance features under both favorable and adverse conditions in UAV scenarios, thus limiting OTUAV-SR performance. To address this issue, we propose a novel Guidance Disentanglement Network (GDNet), which disentangles the optical image representation according to typical UAV scenario attributes to form guidance features under both favorable and adverse conditions, for robust OTUAV-SR. Moreover, we design an attribute-aware fusion module that combines all attribute-based optical guidance features, forming a more discriminative representation that fits the attribute-agnostic guidance process. To facilitate OTUAV-SR research in complex UAV scenarios, we introduce VGTSR2.0, a large-scale benchmark dataset containing 3,500 aligned optical-thermal image pairs captured under diverse conditions and scenes. Extensive experiments on VGTSR2.0 demonstrate that GDNet significantly improves OTUAV-SR performance over state-of-the-art methods, especially in the challenging low-light and foggy environments commonly encountered in UAV scenarios. The dataset and code will be publicly available at https://github.com/Jocelyney/GDNet. (An illustrative sketch of the attribute-aware guidance fusion idea appears after this record.) arXiv version: 18 pages, 19 figures, 8 tables. |
| Document Type: | Article |
| Language: | English |
| ISSN: | 0924-2716 |
| DOI: | 10.1016/j.isprsjprs.2025.06.011 |
| DOI (arXiv): | 10.48550/arxiv.2410.20466 |
| Access URL: | http://arxiv.org/abs/2410.20466 |
| Rights: | Elsevier TDM; arXiv Non-Exclusive Distribution |
| Accession Number: | edsair.doi.dedup.....91ff1d50335c621f289c70b979628f6d |
| Database: | OpenAIRE |
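The record above describes two key ideas from the paper: attribute-specific guidance branches that disentangle the optical representation, and an attribute-aware fusion module that recombines them so the guidance step stays attribute-agnostic. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea only; it is not the authors' GDNet implementation, and the module name `AttributeAwareFusion`, the attribute count, and all layer choices are assumptions made for illustration.

```python
# Illustrative sketch only: a generic attribute-conditioned guidance-fusion block,
# NOT the authors' GDNet code. All names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class AttributeAwareFusion(nn.Module):
    """Toy attribute-aware fusion: weight per-attribute optical guidance features
    and inject the fused guidance into the thermal feature (illustration only)."""

    def __init__(self, channels: int, num_attributes: int = 4):
        super().__init__()
        # One lightweight guidance branch per scenario attribute (e.g. low light, fog).
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            for _ in range(num_attributes)
        )
        # Predict soft attribute weights from the optical feature, so no explicit
        # attribute label is needed at inference time.
        self.attr_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, num_attributes),
            nn.Softmax(dim=1),
        )

    def forward(self, optical_feat: torch.Tensor, thermal_feat: torch.Tensor) -> torch.Tensor:
        # Attribute-specific guidance features: (B, A, C, H, W).
        guidance = torch.stack([branch(optical_feat) for branch in self.branches], dim=1)
        # Soft attribute weights: (B, A).
        weights = self.attr_head(optical_feat)
        # Weighted combination over the attribute axis: (B, C, H, W).
        fused = (weights[:, :, None, None, None] * guidance).sum(dim=1)
        # Residual injection of the fused optical guidance into the thermal feature.
        return thermal_feat + fused


if __name__ == "__main__":
    optical = torch.randn(2, 64, 128, 128)   # optical (RGB) features
    thermal = torch.randn(2, 64, 128, 128)   # thermal features to be super-resolved downstream
    out = AttributeAwareFusion(channels=64)(optical, thermal)
    print(out.shape)  # torch.Size([2, 64, 128, 128])
```

In this sketch the softmax weights act as a soft attribute selector, which is one plausible way to realize "attribute-agnostic" guidance; the paper's actual fusion mechanism may differ.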