Interpretable Multi-modal Image Registration Network Based on Disentangled Convolutional Sparse Coding
Saved in:
| Published in: | IEEE Transactions on Image Processing, Volume 32, p. 1 |
|---|---|
| Main authors: | Deng, Xin; Liu, Enpeng; Li, Shengxi; Duan, Yiping; Xu, Mai |
| Format: | Journal Article |
| Language: | English |
| Publication details: | United States: IEEE, 01.01.2023 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subject: | |
| ISSN: | 1057-7149 (print); 1941-0042 (electronic) |
| Online access: | Get full text |
| Abstract | Multi-modal image registration aims to spatially align two images from different modalities so that their feature points match each other. Captured by different sensors, the images from different modalities often contain many distinct features, which makes it challenging to find accurate correspondences between them. With the success of deep learning, many deep networks have been proposed to align multi-modal images; however, most of them lack interpretability. In this paper, we first model the multi-modal image registration problem as a disentangled convolutional sparse coding (DCSC) model. In this model, the multi-modal features that are responsible for alignment (RA features) are well separated from the features that are not responsible for alignment (nRA features). By allowing only the RA features to participate in the deformation field prediction, we can eliminate the interference of the nRA features and improve the registration accuracy and efficiency. The optimization process of the DCSC model to separate the RA and nRA features is then turned into a deep network, namely the Interpretable Multi-modal Image Registration Network (InMIR-Net). To ensure the accurate separation of RA and nRA features, we further design an accompanying guidance network (AG-Net) to supervise the extraction of RA features in InMIR-Net. The advantage of InMIR-Net is that it provides a universal framework to tackle both rigid and non-rigid multi-modal image registration tasks. Extensive experimental results verify the effectiveness of our method on both rigid and non-rigid registration on various multi-modal image datasets, including RGB/depth images, RGB/near-infrared (NIR) images, RGB/multi-spectral images, T1/T2 weighted magnetic resonance (MR) images, and computed tomography (CT)/MR images. The code is available at https://github.com/lep990816/Interpretable-Multi-modal-Image-Registration. |
|---|---|
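As a rough illustration of the disentangled convolutional sparse coding idea described in the abstract, the objective below is a minimal sketch, not the paper's exact formulation: each input image is decomposed over two convolutional dictionaries, one producing alignment-relevant (RA) codes and one producing non-alignment (nRA) codes, with sparsity on both. The filter symbols, index sets, and weights are illustrative assumptions.

$$
\min_{\{z^{\mathrm{RA}}_{i,k},\, z^{\mathrm{nRA}}_{i,k}\}}
\sum_{i\in\{m,f\}}
\frac{1}{2}\Big\| I_i-\sum_{k} d^{\mathrm{RA}}_{k}\ast z^{\mathrm{RA}}_{i,k}-\sum_{k} d^{\mathrm{nRA}}_{k}\ast z^{\mathrm{nRA}}_{i,k}\Big\|_2^2
+\lambda_1\sum_{k}\big\|z^{\mathrm{RA}}_{i,k}\big\|_1
+\lambda_2\sum_{k}\big\|z^{\mathrm{nRA}}_{i,k}\big\|_1
$$

Here $I_m$ and $I_f$ are the moving and fixed images, $\ast$ is convolution, $d_k$ are convolutional filters, and $\lambda_1,\lambda_2$ are sparsity weights. Only the RA codes $z^{\mathrm{RA}}$ would be passed to the deformation-field predictor; unrolling an iterative solver (e.g., ISTA-style updates) of such an objective is the usual route from a model of this kind to a network like InMIR-Net.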
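The registration itself then warps the moving image with the predicted deformation field. Below is a minimal, hypothetical PyTorch sketch of that warping step using a spatial-transformer-style resampler; the `warp` helper, its tensor shapes, and the flow convention are assumptions for illustration, not the authors' code.

```python
# Sketch: warp a moving image with a dense displacement field, as in
# non-rigid registration pipelines where a network predicts the flow
# from alignment-relevant (RA) features. Illustrative only.
import torch
import torch.nn.functional as F

def warp(moving: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """moving: (N, C, H, W) image; flow: (N, 2, H, W) displacement in pixels (x, y)."""
    n, _, h, w = moving.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=moving.dtype),
        torch.arange(w, dtype=moving.dtype),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(n, -1, -1, -1)
    new_locs = base + flow  # displaced sampling locations, still in pixels
    # Normalize to [-1, 1] as grid_sample expects, last dim ordered (x, y).
    new_locs = torch.stack(
        (2.0 * new_locs[:, 0] / max(w - 1, 1) - 1.0,
         2.0 * new_locs[:, 1] / max(h - 1, 1) - 1.0),
        dim=-1,
    )
    return F.grid_sample(moving, new_locs, mode="bilinear", align_corners=True)

# Toy usage: a zero flow should return (approximately) the input image.
img = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)
assert torch.allclose(warp(img, flow), img, atol=1e-5)
```

For rigid registration the same resampling step applies, with the dense flow replaced by a grid generated from a predicted affine matrix.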
| Author | Deng, Xin; Liu, Enpeng; Li, Shengxi; Duan, Yiping; Xu, Mai |
| Author_xml | 1. Deng, Xin (ORCID 0000-0002-4708-6572), School of Cyber Science and Technology, Beihang University, Beijing, China; 2. Liu, Enpeng, School of Electronic and Information Engineering, Beihang University, Beijing, China; 3. Li, Shengxi, School of Electronic and Information Engineering, Beihang University, Beijing, China; 4. Duan, Yiping (ORCID 0000-0001-9638-7112), Department of Electronic Engineering, Tsinghua University, Beijing, China; 5. Xu, Mai (ORCID 0000-0002-0277-3301), School of Electronic and Information Engineering, Beihang University, Beijing, China |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/37022244 (View this record in MEDLINE/PubMed) |
| CODEN | IIPRE4 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
| DOI | 10.1109/TIP.2023.3240024 |
| DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present; IEEE All-Society Periodicals Package (ASPP) 1998–Present; IEEE Electronic Library (IEL); CrossRef; PubMed; Computer and Information Systems Abstracts; Electronics & Communications Abstracts; Technology Research Database; ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Academic; Computer and Information Systems Abstracts Professional; MEDLINE - Academic |
| Discipline | Applied Sciences; Engineering |
| EISSN | 1941-0042 |
| EndPage | 1 |
| ExternalDocumentID | 37022244 10_1109_TIP_2023_3240024 10034541 |
| Genre | orig-research Journal Article |
| GrantInformation_xml | National Natural Science Foundation of China, grants 62001016, 62231002, 62250001 (funder ID 10.13039/501100001809) |
| ISICitedReferencesCount | 77 |
| ISSN | 1057-7149 1941-0042 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-0277-3301 0000-0002-4708-6572 0000-0001-9638-7112 |
| PMID | 37022244 |
| PQID | 2774332540 |
| PQPubID | 85429 |
| PageCount | 1 |
| PublicationDate | 2023-01-01 |
| PublicationPlace | United States (New York) |
| PublicationTitle | IEEE transactions on image processing |
| PublicationTitleAbbrev | TIP |
| PublicationTitleAlternate | IEEE Trans Image Process |
| PublicationYear | 2023 |
| Publisher | IEEE; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 1 |
| SubjectTerms | Alignment; Computed tomography; Convolutional codes; convolutional sparse coding; Feature extraction; Generative adversarial networks; Image coding; Image registration; Infrared imagery; interpretable network; Magnetic resonance imaging; Measurement; Medical imaging; multi-modal image registration; Optimization; Registration; Strain; Task analysis |
| Title | Interpretable Multi-modal Image Registration Network Based on Disentangled Convolutional Sparse Coding |
| URI | https://ieeexplore.ieee.org/document/10034541 https://www.ncbi.nlm.nih.gov/pubmed/37022244 https://www.proquest.com/docview/2774332540 https://www.proquest.com/docview/2797148946 |
| Volume | 32 |
| WOSCitedRecordID | WOS:000934988000002 |