RGBT Tracking via Challenge-Based Appearance Disentanglement and Interaction
| Published in: | IEEE Transactions on Image Processing, Volume 33, pp. 1753–1767 |
|---|---|
| Main authors: | Liu, Lei; Li, Chenglong; Xiao, Yun; Ruan, Rui; Fan, Minghao |
| Format: | Journal Article |
| Language: | English |
| Publication details: | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2024 |
| ISSN: | 1057-7149 (print); 1941-0042 (electronic) |
| Abstract: | RGB and thermal source data suffer from both shared and specific challenges, and how these challenges are explored and exploited plays a critical role in representing the target appearance in RGBT tracking. In this paper, we propose a novel approach that performs target appearance representation disentanglement and interaction via both modality-shared and modality-specific challenge attributes for robust RGBT tracking. In particular, we disentangle the target appearance representations via five challenge-based branches with different structures according to their properties, including three parameter-shared branches that model modality-shared challenges and two parameter-independent branches that model modality-specific challenges. Considering the complementary advantages of modality-specific cues, we propose a guidance interaction module that transfers discriminative features from one modality to the other to enhance the discriminative ability of the weaker modality. Moreover, we design an aggregation interaction module that combines all challenge-based target representations, forming more discriminative target representations that fit the challenge-agnostic tracking process. Because each challenge-based branch models the target appearance under a specific challenge, the target representations can be learned with few parameters even when training data are insufficient. In addition, to reduce labeling costs and avoid label ambiguity, we design a strategy to generate training data with different challenge attributes. Comprehensive experiments on four benchmark datasets demonstrate the superiority of the proposed tracker over state-of-the-art methods. |
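As a rough illustration of the pipeline described in the abstract, the following minimal PyTorch sketch shows one way the five challenge-based branches (three parameter-shared, two modality-specific), the guidance interaction, and the aggregation interaction could be wired together. All class names, layer choices, and channel sizes here are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Minimal sketch of challenge-based appearance disentanglement and interaction.
# Assumptions for illustration only: 1x1-conv adapters, sigmoid gating, and a
# shared aggregation conv are NOT taken from the paper.
import torch
import torch.nn as nn


class ChallengeBranch(nn.Module):
    """Lightweight branch that models target appearance under one challenge."""

    def __init__(self, channels: int):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(x)


class GuidanceInteraction(nn.Module):
    """Transfers gated discriminative cues from one modality to the other."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Gated source features augment the (potentially weaker) target modality.
        return target + self.gate(source) * source


class ChallengeDisentangledHead(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        # Three branches with parameters shared across modalities
        # (modality-shared challenges, e.g. occlusion or fast motion).
        self.shared_branches = nn.ModuleList([ChallengeBranch(channels) for _ in range(3)])
        # Two parameter-independent branches for modality-specific challenges
        # (e.g. illumination variation for RGB, thermal crossover for TIR).
        self.rgb_specific = ChallengeBranch(channels)
        self.tir_specific = ChallengeBranch(channels)
        self.rgb_from_tir = GuidanceInteraction(channels)
        self.tir_from_rgb = GuidanceInteraction(channels)
        # Aggregation interaction: combine all challenge-based representations.
        self.aggregate = nn.Conv2d(channels * 4, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, tir: torch.Tensor):
        shared_rgb = [b(rgb) for b in self.shared_branches]
        shared_tir = [b(tir) for b in self.shared_branches]
        spec_rgb = self.rgb_specific(rgb)
        spec_tir = self.tir_specific(tir)
        # Guidance interaction between the modality-specific representations.
        g_rgb = self.rgb_from_tir(source=spec_tir, target=spec_rgb)
        g_tir = self.tir_from_rgb(source=spec_rgb, target=spec_tir)
        # Aggregation interaction: fuse shared and specific branch outputs per modality.
        rgb_out = self.aggregate(torch.cat(shared_rgb + [g_rgb], dim=1))
        tir_out = self.aggregate(torch.cat(shared_tir + [g_tir], dim=1))
        return rgb_out, tir_out


if __name__ == "__main__":
    head = ChallengeDisentangledHead(channels=256)
    rgb = torch.randn(1, 256, 25, 25)
    tir = torch.randn(1, 256, 25, 25)
    fused_rgb, fused_tir = head(rgb, tir)
    print(fused_rgb.shape, fused_tir.shape)  # both torch.Size([1, 256, 25, 25])
```

The sketch stops at the fused RGB and thermal features; per the abstract, such aggregated representations would then feed a challenge-agnostic tracking process.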
Authors and affiliations:

- Lei Liu (ORCID 0000-0003-2749-5528; liulei970507@163.com): Information Materials and Intelligent Sensing Laboratory of Anhui Province, Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, and Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Chenglong Li (ORCID 0000-0002-7233-2739; lcl1314@foxmail.com): Information Materials and Intelligent Sensing Laboratory of Anhui Province and Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Artificial Intelligence, Anhui University, Hefei, China
- Yun Xiao (ORCID 0000-0002-5285-8565; xiaoyun@ahu.edu.cn): Information Materials and Intelligent Sensing Laboratory of Anhui Province and Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Artificial Intelligence, Anhui University, Hefei, China
- Rui Ruan (ORCID 0000-0001-6822-9256; ruanrui@ahu.edu.cn): School of Internet, Anhui University, Hefei, China
- Minghao Fan (mhfansfp@163.com): Anhui Province Key Laboratory of Electric Fire and Safety Protection, State Grid Anhui Electric Power Research Institute, Hefei, China
| Copyright: | The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024 |
| DOI: | 10.1109/TIP.2024.3371355 |
| Funding: | National Natural Science Foundation of China (62376004, 62076003); Science and Technology Project of State Grid Corporation of China (52120524000A); Natural Science Foundation of Anhui Province (2208085J18); Natural Science Foundation of Anhui Higher Education Institution (2022AH040014); Anhui Provincial Colleges Science Foundation for Distinguished Young Scholars (2022AH020093) |
| PMID: | 38442061 |
| Subject terms: | Adaptation models; aggregation interaction; challenge-based appearance disentanglement; Data models; Feature extraction; guidance interaction; Lighting; Mathematical models; Modules; Parameters; Representations; RGBT tracking; Target tracking; Tracking; Training; Training data; training data generation |
| Online access: | https://ieeexplore.ieee.org/document/10460420; https://www.ncbi.nlm.nih.gov/pubmed/38442061; https://www.proquest.com/docview/2948052346; https://www.proquest.com/docview/2938283543 |