MRFuse: Metric learning and masked autoencoder for fusing real infrared and visible images
Saved in:
| Published in: | Optics and Laser Technology, Vol. 189, Art. 112971 |
|---|---|
| Main authors: | Li, YuBin; Zhan, Weida; Guo, Jinxin; Zhu, Depeng; Jiang, Yichun; Chen, Yu; Xu, Xiaoyu; Han, Deng |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Ltd, 01.11.2025 |
| Subjects: | Deep learning; Metric space; Masked autoencoder; Image degradation; Image fusion |
| ISSN: | 0030-3992 |
| Online access: | Full text |
| Abstract | The task of infrared and visible image fusion aims to retain the thermal targets from infrared images while preserving the details, brightness, and other important features from visible images. Current methods face challenges such as unclear fusion objectives, difficulty in interpreting the learning process, and uncontrollable auxiliary learning weights. To address these issues, this paper proposes a novel fusion method based on metric learning and masked autoencoders for real infrared and visible image fusion, termed MRFuse. MRFuse operates through a combination of metric mapping space, auxiliary networks, and fusion networks. First, we introduce a Real Degradation Estimation Module (RDEM), which employs a simple neural network to establish a controllable degradation estimation scheme within the metric space. Additionally, to train the metric space, we propose a sample generation method that provides complex training samples for the metric learning pipeline. Next, we present a fusion network based on masked autoencoding. Specifically, we construct hybrid masked infrared and visible image pairs and design a U-shaped ViT encoder–decoder architecture. This architecture leverages hierarchical feature representation and layer-wise fusion to reconstruct high-quality fused images. Finally, to train the fusion network, we design a masked region loss to constrain reconstruction errors within masked regions, and further employ gradient loss, structural consistency loss, and perceptual loss to enhance the quality of the fused images. Extensive experiments demonstrate that MRFuse exhibits superior controllability and excels in suppressing noise, blur, and glare, outperforming other state-of-the-art methods in both subjective and objective evaluations.
Highlights: • A new controllable and interpretable method for infrared and visible image fusion. • Metric space guides the fusion process. • A new hybrid mask input method increases the robustness of the fusion network. • MAE networks with masking loss constraints produce superior fusion images. |
|---|---|
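The abstract describes a masked region loss that constrains reconstruction error only within the masked regions of the input pair. A minimal sketch of that idea, assuming an MAE-style mean-squared formulation restricted to masked positions (the function name and the NumPy/MSE formulation are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def masked_region_loss(recon: np.ndarray, target: np.ndarray, mask: np.ndarray) -> float:
    """Mean squared reconstruction error restricted to masked pixels.

    recon, target: float arrays of the same shape (reconstructed / reference image).
    mask: boolean array of the same shape, True where pixels were masked out
          of the encoder input. Only masked positions contribute to the loss,
    as in MAE-style training, so the network is penalized for failing to
    reconstruct exactly the content it never saw.
    """
    squared_error = (recon - target) ** 2
    return float(squared_error[mask].mean())
```

In the paper's full objective this term would be combined with the gradient, structural consistency, and perceptual losses mentioned in the abstract; the sketch covers only the masked-region component.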
| ArticleNumber | 112971 |
| Author | Zhu, Depeng; Jiang, Yichun; Guo, Jinxin; Han, Deng; Zhan, Weida; Chen, Yu; Xu, Xiaoyu; Li, YuBin |
| Author details | 1. Li, YuBin (lyb@mails.cust.edu.cn); 2. Zhan, Weida (ORCID 0000-0003-1011-7416; zhanweida@cust.edu.cn); 3. Guo, Jinxin (guojinxin@mails.cust.edu.cn); 4. Zhu, Depeng (zhudepeng@mails.cust.edu.cn); 5. Jiang, Yichun (jiangyichun@cust.edu.cn); 6. Chen, Yu (chenyu@mails.cust.edu.cn); 7. Xu, Xiaoyu (cust-xxy@mails.cust.edu.cn), all: Changchun University of Science and Technology, National Demonstration Center for Experimental Electrical, 130000, Changchun, China; 8. Han, Deng (jl11269@buaa.edu.cn), Jilin Province Zhixing IoT Research Institute Co., Ltd, 130117, Changchun, China |
| ContentType | Journal Article |
| Copyright | 2025 Elsevier Ltd |
| DOI | 10.1016/j.optlastec.2025.112971 |
| Discipline | Engineering; Physics |
| ISSN | 0030-3992 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Deep learning; Metric space; Masked autoencoder; Image degradation; Image fusion |
| Language | English |
| ORCID | 0000-0003-1011-7416 |
| PublicationDate | November 2025 (2025-11-01) |
| PublicationTitle | Optics and laser technology |
| PublicationYear | 2025 |
| Publisher | Elsevier Ltd |
Fusion doi: 10.1016/j.inffus.2022.09.019 – volume: 127 year: 2022 ident: 10.1016/j.optlastec.2025.112971_b18 article-title: Flfuse-net: A fast and lightweight infrared and visible image fusion network via feature flow and edge compensation for salient information publication-title: Infrared Phys. Technol. doi: 10.1016/j.infrared.2022.104383 – volume: 45 start-page: 7220 issue: 6 year: 2022 ident: 10.1016/j.optlastec.2025.112971_b39 article-title: Contrastive bayesian analysis for deep metric learning publication-title: IEEE Trans. Pattern Anal. Mach. Intell. doi: 10.1109/TPAMI.2022.3221486 – volume: 117 year: 2025 ident: 10.1016/j.optlastec.2025.112971_b69 article-title: SMAE-fusion: Integrating saliency-aware masked autoencoder with hybrid attention transformer for infrared–visible image fusion publication-title: Inf. Fusion doi: 10.1016/j.inffus.2024.102841 – start-page: 127 year: 2020 ident: 10.1016/j.optlastec.2025.112971_b6 article-title: A review on infrared and visible image fusion techniques – volume: 31 issue: 3 year: 2022 ident: 10.1016/j.optlastec.2025.112971_b17 article-title: Mefuse: end-to-end infrared and visible image fusion method based on multibranch encoder publication-title: J. Electron. Imaging doi: 10.1117/1.JEI.31.3.033043 – volume: 39 start-page: 4617 issue: 3 year: 2020 ident: 10.1016/j.optlastec.2025.112971_b19 article-title: Infrared and visible image fusion using dual-tree complex wavelet transform and convolutional sparse representation publication-title: J. Intell. Fuzzy Systems – volume: 54 start-page: 85 year: 2020 ident: 10.1016/j.optlastec.2025.112971_b57 article-title: Infrared and visible image fusion via detail preserving adversarial learning publication-title: Inf. 
Fusion doi: 10.1016/j.inffus.2019.07.005 – volume: 18 start-page: 1169 issue: 4 year: 2018 ident: 10.1016/j.optlastec.2025.112971_b21 article-title: Infrared and visible image fusion based on different constraints in the non-subsampled shearlet transform domain publication-title: Sensors doi: 10.3390/s18041169 – volume: 92 start-page: 80 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b27 article-title: Mufusion: A general unsupervised image fusion network based on memory unit publication-title: Inf. Fusion doi: 10.1016/j.inffus.2022.11.010 – volume: 39 start-page: 718 issue: 3 year: 2019 ident: 10.1016/j.optlastec.2025.112971_b33 article-title: Multi-task deep model with margin ranking loss for lung nodule analysis publication-title: IEEE Trans. Med. Imaging doi: 10.1109/TMI.2019.2934577 – volume: 249 year: 2024 ident: 10.1016/j.optlastec.2025.112971_b45 article-title: Mfhod: Multi-modal image fusion method based on the higher-order degradation model publication-title: Expert Syst. Appl. doi: 10.1016/j.eswa.2024.123731 – volume: 17 start-page: 726 issue: 3 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b49 article-title: Deep CNN denoiser prior for blurred images restoration with multiplicative noise publication-title: Inverse Probl. Imaging doi: 10.3934/ipi.2022075 – year: 2024 ident: 10.1016/j.optlastec.2025.112971_b41 article-title: Audio super-resolution with robust speech representation learning of masked autoencoder publication-title: IEEE/ACM Trans. Audio Speech Lang. Process. doi: 10.1109/TASLP.2023.3349053 – volume: 25 start-page: 5413 year: 2022 ident: 10.1016/j.optlastec.2025.112971_b30 article-title: YDTR: Infrared and visible image fusion via Y-shape dynamic transformer publication-title: IEEE Trans. Multimed. doi: 10.1109/TMM.2022.3192661 – volume: 54 start-page: 99 year: 2020 ident: 10.1016/j.optlastec.2025.112971_b67 article-title: IFCNN: A general image fusion framework based on convolutional neural network publication-title: Inf. 
Fusion doi: 10.1016/j.inffus.2019.07.011 – ident: 10.1016/j.optlastec.2025.112971_b36 – volume: 72 start-page: 1 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b29 article-title: Fusiongram: An infrared and visible image fusion framework based on gradient residual and attention mechanism publication-title: IEEE Trans. Instrum. Meas. – volume: 55 start-page: 6480 issue: 23 year: 2016 ident: 10.1016/j.optlastec.2025.112971_b62 article-title: Fusion of infrared and visible images for night-vision context enhancement publication-title: Appl. Opt. doi: 10.1364/AO.55.006480 – start-page: 2705 year: 2018 ident: 10.1016/j.optlastec.2025.112971_b10 article-title: Infrared and visible image fusion using a deep learning framework – volume: 17 start-page: 1127 issue: 5 year: 2017 ident: 10.1016/j.optlastec.2025.112971_b20 article-title: Airborne infrared and visible image fusion combined with region segmentation publication-title: Sensors doi: 10.3390/s17051127 – volume: 33 start-page: 3159 issue: 7 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b31 article-title: Datfuse: Infrared and visible image fusion via dual attention transformer publication-title: IEEE Trans. Circuits Syst. Video Technol. doi: 10.1109/TCSVT.2023.3234340 – start-page: 141 year: 2020 ident: 10.1016/j.optlastec.2025.112971_b38 article-title: Unsupervised deep metric learning with transformed attention consistency and contrastive clustering loss – volume: 32 start-page: 3360 issue: 6 year: 2021 ident: 10.1016/j.optlastec.2025.112971_b23 article-title: Unfusion: A unified multi-scale densely connected network for infrared and visible image fusion publication-title: IEEE Trans. Circuits Syst. Video Technol. doi: 10.1109/TCSVT.2021.3109895 – volume: 80 start-page: 8423 issue: 6 year: 2021 ident: 10.1016/j.optlastec.2025.112971_b56 article-title: PSNR vs SSIM: imperceptibility quality assessment for image steganography publication-title: Multimedia Tools Appl. 
doi: 10.1007/s11042-020-10035-z – volume: 29 start-page: 457 issue: 4 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b7 article-title: IR and visible image fusion using DWT and bilateral filter publication-title: Microsyst. Technol. doi: 10.1007/s00542-022-05315-7 – ident: 10.1016/j.optlastec.2025.112971_b68 doi: 10.1145/3503161.3547902 – year: 2020 ident: 10.1016/j.optlastec.2025.112971_b11 – ident: 10.1016/j.optlastec.2025.112971_b52 doi: 10.1109/CVPR52688.2022.00320 – volume: 46 start-page: 2908 issue: 12 year: 2021 ident: 10.1016/j.optlastec.2025.112971_b70 article-title: High-speed computer-generated holography using an autoencoder-based deep neural network publication-title: Opt. Lett. doi: 10.1364/OL.425485 – volume: 156 year: 2024 ident: 10.1016/j.optlastec.2025.112971_b32 article-title: Itfuse: An interactive transformer for infrared and visible image fusion publication-title: Pattern Recognit. doi: 10.1016/j.patcog.2024.110822 – volume: 12 start-page: 1732 issue: 7 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b3 article-title: Research on detection and recognition technology of a visible and infrared dim and small target based on deep learning publication-title: Electron. doi: 10.3390/electronics12071732 – volume: 30 start-page: 15 year: 2016 ident: 10.1016/j.optlastec.2025.112971_b61 article-title: Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters publication-title: Inf. Fusion doi: 10.1016/j.inffus.2015.11.003 – volume: 48 start-page: 11 year: 2019 ident: 10.1016/j.optlastec.2025.112971_b24 article-title: FusionGAN: A generative adversarial network for infrared and visible image fusion publication-title: Inf. Fusion doi: 10.1016/j.inffus.2018.09.004 – volume: 128 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b55 article-title: Infrared and visible image fusion based on residual dense network and gradient loss publication-title: Infrared Phys. Technol. 
doi: 10.1016/j.infrared.2022.104486 – volume: 14 start-page: 2789 issue: 12 year: 2022 ident: 10.1016/j.optlastec.2025.112971_b4 article-title: Infrared and visible image fusion with deep neural network in enhanced flight vision system publication-title: Remote. Sens. doi: 10.3390/rs14122789 – volume: 83 start-page: 227 year: 2017 ident: 10.1016/j.optlastec.2025.112971_b65 article-title: Infrared and visual image fusion through infrared feature extraction and visual information preservation publication-title: Infrared Phys. Technol. doi: 10.1016/j.infrared.2017.05.007 – year: 2023 ident: 10.1016/j.optlastec.2025.112971_b34 – ident: 10.1016/j.optlastec.2025.112971_b54 – volume: 36 start-page: 40676 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b43 article-title: Siamese masked autoencoders publication-title: Adv. Neural Inf. Process. Syst. – volume: 61 start-page: 1 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b40 article-title: SS-MAE: Spatial–spectral masked autoencoder for multisource remote sensing image classification publication-title: IEEE Trans. Geosci. Remote Sens. doi: 10.1109/TGRS.2023.3334729 – volume: 15 start-page: 685 issue: 3 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b8 article-title: Infrared and visible image fusion method based on a principal component analysis network and image pyramid publication-title: Remote. Sens. doi: 10.3390/rs15030685 – volume: 218 year: 2022 ident: 10.1016/j.optlastec.2025.112971_b16 article-title: CUFD: An encoder–decoder network for visible and infrared image fusion based on common and unique feature decomposition publication-title: Comput. Vis. Image Underst. doi: 10.1016/j.cviu.2022.103407 – volume: 91 start-page: 477 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b46 article-title: Divfusion: Darkness-free infrared and visible image fusion publication-title: Inf. 
Fusion doi: 10.1016/j.inffus.2022.10.034 – volume: 80 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b37 article-title: Deep metric learning with mirror attention and fine triplet loss for fundus image retrieval in ophthalmology publication-title: Biomed. Signal Process. Control. doi: 10.1016/j.bspc.2022.104277 – year: 2017 ident: 10.1016/j.optlastec.2025.112971_b58 – start-page: 1 year: 2017 ident: 10.1016/j.optlastec.2025.112971_b63 article-title: Multi-sensor image fusion based on fourth order partial differential equations – year: 2022 ident: 10.1016/j.optlastec.2025.112971_b50 – volume: 29 start-page: 4980 year: 2020 ident: 10.1016/j.optlastec.2025.112971_b25 article-title: DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion publication-title: IEEE Trans. Image Process. doi: 10.1109/TIP.2020.2977573 – volume: 23 start-page: 599 issue: 2 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b2 article-title: Infrared and visible image fusion technology and application: A review publication-title: Sensors doi: 10.3390/s23020599 – volume: 32 start-page: 2077 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b1 article-title: AFT: Adaptive fusion transformer for visible and infrared images publication-title: IEEE Trans. Image Process. doi: 10.1109/TIP.2023.3263113 – volume: 61 start-page: 1 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b12 article-title: LRR-Net: An interpretable deep unfolding network for hyperspectral anomaly detection publication-title: IEEE Trans. Geosci. Remote Sens. – volume: 204 year: 2023 ident: 10.1016/j.optlastec.2025.112971_b53 article-title: Modified U-Net for plant diseased leaf image segmentation publication-title: Comput. Electron. Agric. doi: 10.1016/j.compag.2022.107511 |
| Snippet | The task of infrared and visible image fusion aims to retain the thermal targets from infrared images while preserving the details, brightness, and other... |
| SourceID | crossref elsevier |
| SourceType | Index Database Publisher |
| StartPage | 112971 |
| SubjectTerms | Deep learning; Image degradation; Image fusion; Masked autoencoder; Metric space |
| Title | MRFuse: Metric learning and masked autoencoder for fusing real infrared and visible images |
| URI | https://dx.doi.org/10.1016/j.optlastec.2025.112971 |
| Volume | 189 |
| WOSCitedRecordID | wos001508785100001 |
| openUrl | ctx_ver=Z39.88-2004; rft_val_fmt=info:ofi/fmt:kev:mtx:journal; rft.genre=article; rft.atitle=MRFuse: Metric learning and masked autoencoder for fusing real infrared and visible images; rft.jtitle=Optics and laser technology; rft.au=Li, YuBin; Zhan, Weida; Guo, Jinxin; Zhu, Depeng; rft.date=2025-11-01; rft.pub=Elsevier Ltd; rft.issn=0030-3992; rft.volume=189; rft_id=info:doi/10.1016/j.optlastec.2025.112971; rft.externalDocID=S0030399225005626 |
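The openUrl field above is a Z39.88-2004 (OpenURL) context object serialized as a percent-encoded query string; its `rft.*` keys describe the referent article. As a minimal sketch of how such a field can be flattened into readable metadata, the hypothetical helper `decode_openurl` below (not part of any library; the sample string uses only keys present in this record) relies on Python's standard `urllib.parse.parse_qs`:

```python
from urllib.parse import parse_qs

# Hypothetical helper: flatten an OpenURL (Z39.88-2004) context object,
# such as this record's "openUrl" field, into a plain dict.
# parse_qs already percent-decodes values and maps '+' to spaces;
# we keep the last value seen for each repeated key.
def decode_openurl(openurl: str) -> dict:
    return {key: values[-1] for key, values in parse_qs(openurl).items()}

# Shortened sample built from keys that appear in the record's openUrl field.
sample = ("ctx_ver=Z39.88-2004&rft.genre=article"
          "&rft.jtitle=Optics+and+laser+technology"
          "&rft.au=Li%2C+YuBin&rft.date=2025-11-01"
          "&rft.issn=0030-3992&rft.volume=189")

meta = decode_openurl(sample)
print(meta["rft.jtitle"])  # → Optics and laser technology
print(meta["rft.au"])      # → Li, YuBin
```

Note that `parse_qs` collects repeated keys (e.g. several `rft.au` authors) into a list; the sketch keeps only the last one for brevity, so a real resolver would retain the full list instead.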