DMFusion: A dual-branch multi-scale feature fusion network for medical multi-modal image fusion
Saved in:
| Published in: | Biomedical signal processing and control, Volume 105; p. 107572 |
|---|---|
| Main authors: | Ma, Gengchen; Qiu, Xihe; Tan, Xiaoyu |
| Format: | Journal Article |
| Language: | English |
| Publication details: | Elsevier Ltd, 01.07.2025 |
| Subjects: | Medical image fusion; CNN-transformer; Multi-scale; Multi-modality images; Dual-branch |
| ISSN: | 1746-8094 |
| Online access: | Get full text |
| Abstract | In the field of medical imaging, high-quality multi-modal image fusion is crucial for improving diagnostic accuracy. By integrating information from different imaging modalities, medical multi-modal image fusion provides more comprehensive and accurate images. However, many existing fusion methods either overlook the unique information of each modality or fail to capture commonalities, resulting in incomplete fused images. To address this challenge, we propose an advanced medical multi-modal image fusion framework called Dual-Branch Multi-Scale Feature Fusion (DMFusion), aiming to optimize the fusion performance of multi-modal medical images. The DMFusion framework is based on a dual-branch autoencoder (AE) structure, where one branch is dedicated to extracting modality-specific distinctive features, and the other branch focuses on capturing shared features between modalities. This design allows DMFusion to not only preserve key features of each modality but also to effectively integrate their common information. Furthermore, our encoder employs multi-scale feature extraction techniques, enhancing the model’s perception of image details and allowing effective capture and fusion of image features at various scales. During the fusion process, both the encoder and decoder employ lightweight self-attention mechanisms. The encoder uses designed selection rules to precisely select salient features from the two branches, which are then fed into the decoder to achieve deep fusion. This decoder employs advanced image reconstruction techniques to generate fused images with richer texture details and better visual quality. Through qualitative and quantitative experiments on the publicly available Harvard Medical dataset and a dataset of abdominal multi-modal medical images from China, our method has demonstrated superior performance in medical image fusion tasks. The results indicate that the DMFusion framework can effectively enhance the accuracy of medical image fusion, providing new insights for future research on multi-modal image fusion.
• Dual-branch autoencoder for multi-modal medical image fusion. • Modality-specific and shared features extracted by separate branches. • Multi-scale feature extraction enhances perception of image details. • Lightweight self-attention and selection rules enable precise feature fusion. (An illustrative architecture sketch follows this record.) |
|---|---|
| ArticleNumber | 107572 |
| Author | Ma, Gengchen; Qiu, Xihe; Tan, Xiaoyu |
| Author details | Ma, Gengchen (ORCID 0009-0002-3841-9889), School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China; Qiu, Xihe (ORCID 0000-0003-4024-925X, qiuxihe@sues.edu.cn), School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China; Tan, Xiaoyu (ORCID 0000-0003-3555-7143), INF Technology (Shanghai) Co., Ltd., Shanghai, China |
| ContentType | Journal Article |
| Copyright | 2025 Elsevier Ltd |
| DOI | 10.1016/j.bspc.2025.107572 |
| Discipline | Engineering |
| ISSN | 1746-8094 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Keywords | Medical image fusion; CNN-transformer; Multi-scale; Multi-modality images; Dual-branch |
| Language | English |
| ORCID | 0000-0003-4024-925X 0009-0002-3841-9889 0000-0003-3555-7143 |
| PublicationDate | July 2025 |
| PublicationTitle | Biomedical signal processing and control |
| PublicationYear | 2025 |
| Publisher | Elsevier Ltd |
| StartPage | 107572 |
| SubjectTerms | CNN-transformer; Dual-branch; Medical image fusion; Multi-modality images; Multi-scale |
| Title | DMFusion: A dual-branch multi-scale feature fusion network for medical multi-modal image fusion |
| URI | https://dx.doi.org/10.1016/j.bspc.2025.107572 |
| Volume | 105 |
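The abstract describes DMFusion only at the architectural level: a dual-branch autoencoder that separates modality-specific from shared features, multi-scale feature extraction in the encoder, a lightweight self-attention step with a feature-selection rule, and a reconstruction decoder. The PyTorch sketch below is a minimal illustration of how such a pipeline can be wired together; it is not the authors' implementation, and the module names, channel widths, the standard multi-head attention stand-in, and the element-wise max selection rule are illustrative assumptions.

```python
# Minimal sketch of a dual-branch, multi-scale fusion autoencoder (assumed design,
# not the released DMFusion code).
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Extract features at several receptive fields (1x1, 3x3, 5x5) and merge them."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        self.scales = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(3 * branch_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat([scale(x) for scale in self.scales], dim=1)
        return self.act(self.fuse(feats))


class DualBranchFusion(nn.Module):
    """One branch per modality for distinctive features, a shared branch for common
    features, a self-attention step over the fused features, then a decoder.
    Standard multi-head attention is used here as a stand-in for the paper's
    lightweight self-attention."""
    def __init__(self, ch: int = 48):
        super().__init__()
        self.unique_a = MultiScaleBlock(1, ch)   # modality-specific branch (e.g. MRI)
        self.unique_b = MultiScaleBlock(1, ch)   # modality-specific branch (e.g. CT/PET)
        self.shared = MultiScaleBlock(2, ch)     # shared branch sees both inputs
        self.attn = nn.MultiheadAttention(embed_dim=2 * ch, num_heads=4,
                                          batch_first=True)
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img_a, img_b):
        # Assumed selection rule: element-wise max keeps the more salient
        # modality-specific response at each spatial location.
        unique = torch.max(self.unique_a(img_a), self.unique_b(img_b))
        shared = self.shared(torch.cat([img_a, img_b], dim=1))
        fused = torch.cat([unique, shared], dim=1)            # B x 2C x H x W
        b, c, h, w = fused.shape
        tokens = fused.flatten(2).transpose(1, 2)             # B x (H*W) x 2C
        tokens, _ = self.attn(tokens, tokens, tokens)
        fused = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(fused)                            # B x 1 x H x W


if __name__ == "__main__":
    model = DualBranchFusion()
    mri = torch.rand(1, 1, 32, 32)   # small patch to keep the attention cheap
    pet = torch.rand(1, 1, 32, 32)
    print(model(mri, pet).shape)     # torch.Size([1, 1, 32, 32])
```

In a setup like this, the two modality-specific branches and the shared branch play the roles described in the abstract, while the training losses (e.g. reconstruction and similarity terms) would be defined separately; the sketch covers only the forward data flow.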