Deep Convolutional Neural Network for Multi-Modal Image Restoration and Fusion
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 43, No. 10, pp. 3333-3348 |
|---|---|
| Main authors: | Deng, Xin; Dragotti, Pier Luigi |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2021 |
| ISSN: | 0162-8828, 1939-3539, 2160-9292 |
| Online access: | Full text |
| Abstract | In this paper, we propose a novel deep convolutional neural network to solve the general multi-modal image restoration (MIR) and multi-modal image fusion (MIF) problems. Different from other methods based on deep learning, our network architecture is designed by drawing inspirations from a new proposed multi-modal convolutional sparse coding (MCSC) model. The key feature of the proposed network is that it can automatically split the common information shared among different modalities, from the unique information that belongs to each single modality, and is therefore denoted with CU-Net, i.e., common and unique information splitting network. Specifically, the CU-Net is composed of three modules, i.e., the unique feature extraction module (UFEM), common feature preservation module (CFPM), and image reconstruction module (IRM). The architecture of each module is derived from the corresponding part in the MCSC model, which consists of several learned convolutional sparse coding (LCSC) blocks. Extensive numerical results verify the effectiveness of our method on a variety of MIR and MIF tasks, including RGB guided depth image super-resolution, flash guided non-flash image denoising, multi-focus and multi-exposure image fusion. |
|---|---|
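The abstract describes LCSC (learned convolutional sparse coding) blocks, i.e., network layers obtained by unrolling iterations of convolutional sparse coding. The snippet below is only an illustrative 1-D sketch of that general idea (an ISTA-style update with a soft-thresholding nonlinearity), not the authors' CU-Net code; the filter `d`, step size, and threshold are toy values chosen for the demo:

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: the nonlinearity inside an unrolled block.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lcsc_step(y, z, d, e, theta):
    """One unrolled ISTA iteration of convolutional sparse coding (1-D sketch).

    y: observed signal, z: current sparse code estimate,
    d: synthesis filter, e: update filter (a learned quantity in an LCSC block),
    theta: soft-threshold level.
    """
    residual = np.convolve(z, d, mode="same") - y
    return soft_threshold(z - np.convolve(residual, e, mode="same"), theta)

# Toy demo: recover a sparse code from its filtered observation.
rng = np.random.default_rng(0)
d = np.array([0.5, 1.0, 0.5])     # symmetric toy filter (self-adjoint in 'same' mode)
z_true = np.zeros(64)
z_true[rng.choice(64, size=5, replace=False)] = rng.uniform(1.0, 2.0, size=5)
y = np.convolve(z_true, d, mode="same")

step = 0.25                       # = 1 / ||d||_1^2, a safe ISTA step size
z = np.zeros_like(y)
for _ in range(100):
    z = lcsc_step(y, z, d, step * d, theta=step * 0.05)

err0 = np.linalg.norm(y)          # error of the all-zero initialization
err = np.linalg.norm(np.convolve(z, d, mode="same") - y)
# After the unrolled iterations, the residual is far below the zero-code baseline.
```

In the paper this iteration is turned into a trainable layer by making the filters and thresholds learnable parameters; the three CU-Net modules (UFEM, CFPM, IRM) are each built from stacks of such blocks.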
| Author | Deng, Xin Dragotti, Pier Luigi |
| Author_xml | Deng, Xin (ORCID 0000-0002-4708-6572; x.deng16@imperial.ac.uk), Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom; Dragotti, Pier Luigi (ORCID 0000-0002-6073-2807; p.dragotti@imperial.ac.uk), Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/32248098 (view this record in MEDLINE/PubMed) |
| CODEN | ITPIDJ |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
| DOI | 10.1109/TPAMI.2020.2984244 |
| Discipline | Engineering Computer Science |
| EISSN | 2160-9292 1939-3539 |
| EndPage | 3348 |
| ExternalDocumentID | 32248098 10_1109_TPAMI_2020_2984244 9055063 |
| Genre | orig-research Journal Article |
| GrantInformation_xml | – fundername: CSC-Imperial Scholarship |
| ISICitedReferencesCount | 192 |
| ISSN | 0162-8828 1939-3539 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 10 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-4708-6572 0000-0002-6073-2807 |
| PMID | 32248098 |
| PageCount | 16 |
| PublicationDate | 2021-10-01 |
| PublicationPlace | United States |
| PublicationTitle | IEEE transactions on pattern analysis and machine intelligence |
| PublicationTitleAbbrev | TPAMI |
| PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
| PublicationYear | 2021 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 3333 |
| SubjectTerms | Artificial neural networks; Coding; Computer architecture; Computer vision; Convolutional codes; Convolutional neural networks; Feature extraction; Image coding; Image fusion; Image processing; Image reconstruction; Image resolution; Image restoration; Machine learning; Modules; multi-modal convolutional sparse coding; Multi-modal image restoration; Neural networks; Task analysis |
| Title | Deep Convolutional Neural Network for Multi-Modal Image Restoration and Fusion |
| URI | https://ieeexplore.ieee.org/document/9055063 https://www.ncbi.nlm.nih.gov/pubmed/32248098 https://www.proquest.com/docview/2568778389 https://www.proquest.com/docview/2386432291 |
| Volume | 43 |