Partial Scene Text Retrieval
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 3, pp. 1548–1563 |
|---|---|
| Main authors: | Wang, Hao; Liao, Minghui; Xie, Zhouyi; Liu, Wenyu; Bai, Xiang |
| Medium: | Journal Article |
| Language: | English |
| Publication details: | IEEE, United States, 01.03.2025 |
| ISSN: | 0162-8828 (print); 1939-3539, 2160-9292 (online) |
| Online access: | Get full text |
| Abstract | The task of partial scene text retrieval involves localizing and searching for text instances that are the same as or similar to a given query text in an image gallery. However, existing methods can only handle text-line instances, leaving the problem of searching for partial patches within these text-line instances unsolved due to a lack of patch annotations in the training data. To address this issue, we propose a network that can simultaneously retrieve both text-line instances and their partial patches. Our method embeds the two types of data (query text and scene text instances) into a shared feature space and measures their cross-modal similarities. To handle partial patches, our approach adopts Multiple Instance Learning (MIL) to learn their similarities with query text without requiring extra annotations. However, constructing bags, a standard step in conventional MIL approaches, can introduce numerous noisy training samples and lower inference speed. To address this issue, we propose a Ranking MIL (RankMIL) approach to adaptively filter those noisy samples. Additionally, we present a Dynamic Partial Match Algorithm (DPMA) that can directly search for the target partial patch within a text-line instance during the inference stage, without requiring bags. This greatly improves both the search efficiency and the performance of retrieving partial patches. We evaluate the proposed method on both English and Chinese datasets in two tasks: retrieving text-line instances and retrieving partial patches. For English text retrieval, our method outperforms state-of-the-art approaches by 8.04% mAP and 12.71% mAP on average, respectively, across three datasets for the two tasks. For Chinese text retrieval, our approach surpasses state-of-the-art approaches by 24.45% mAP and 38.06% mAP on average, respectively, across three datasets for the two tasks. |
|---|---|
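The retrieval setup described in the abstract — embedding the query text and each scene text instance into a shared feature space, then ranking instances by cross-modal similarity — can be sketched as follows. The `embed` function here is a toy character-histogram stand-in, not the paper's learned network; only the rank-by-cosine-similarity structure is the point.

```python
# Sketch of shared-space retrieval: embed query and instances, rank by
# cosine similarity. embed() is a toy stand-in for a learned encoder.
import math
from collections import Counter

def embed(text):
    """Toy embedding: L2-normalized character-frequency vector over a-z."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    vec = [counts.get(chr(ord('a') + i), 0) for i in range(26)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(u, v):
    return sum(a * b for a, b in zip(u, v))

def retrieve(query, instances):
    """Rank scene-text instances by similarity to the query text."""
    q = embed(query)
    scored = [(cosine(q, embed(t)), t) for t in instances]
    return [t for _, t in sorted(scored, reverse=True)]

gallery = ["coffee shop", "book store", "coffee house"]
print(retrieve("coffee", gallery))
```

In the actual system both modalities pass through trained encoders, so "similar" queries (e.g. with small spelling differences) still land near their targets in the shared space.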
| Source code: | https://github.com/lanfeng4659/PSTR |
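The partial-patch problem the abstract describes — finding the fragment of a text line that best matches a query — can be illustrated with a classic dynamic-programming baseline: score every substring of a (recognized) text line by normalized edit distance to the query. This is not the paper's DPMA, which matches learned features rather than recognized strings, but it shows the shape of the search.

```python
# Illustrative partial matching: find the substring of a recognized text
# line closest to the query under normalized Levenshtein distance. This is
# a classic DP baseline, not the paper's feature-level DPMA.

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def best_partial_match(query, line):
    """Return the substring of `line` closest to `query` in edit distance."""
    best = (float("inf"), line)
    for i in range(len(line)):
        for j in range(i + 1, len(line) + 1):
            sub = line[i:j]
            d = edit_distance(query, sub) / max(len(query), len(sub))
            if d < best[0]:
                best = (d, sub)
    return best[1]

print(best_partial_match("market", "supermarket opening hours"))
```

The brute-force scan over substrings is quadratic in the line length; one motivation for a dedicated algorithm like DPMA is avoiding exactly this kind of exhaustive candidate enumeration at inference time.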
| Authors: | Wang, Hao (School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China; ORCID 0000-0003-2227-075X); Liao, Minghui (Huawei, Shenzhen, China; ORCID 0000-0002-2583-4314); Xie, Zhouyi (Huawei, Shenzhen, China; ORCID 0009-0006-3030-340X); Liu, Wenyu (School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China; ORCID 0000-0002-4582-7488); Bai, Xiang (School of Software Engineering, Huazhong University of Science and Technology, Wuhan, China; xbai@mail.hust.edu.cn; ORCID 0000-0002-3449-5940) |
| DOI: | 10.1109/TPAMI.2024.3496576 |
| Funding: | National Science and Technology Major Project (2023YFF0905400); National Science Fund for Distinguished Young Scholars of China (62225603) |
| PMID: | 40030386 |
| Page count: | 16 |
| Subjects: | Annotations; classification algorithms; cross-modal similarity learning; dynamic programming algorithm; feature extraction; heuristic algorithms; multiple instance learning (MIL); noise measurement; prediction algorithms; proposals; scene text retrieval; similarity learning; training |
| URL: | https://ieeexplore.ieee.org/document/10758313 |
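The abstract reports results as mean average precision (mAP). A minimal sketch of the standard computation, assuming binary relevance judgments for each query's ranked retrieval list:

```python
# Standard mAP: for each query, average precision@k over the ranks of the
# relevant hits; then average across queries. Relevance is binary here.

def average_precision(ranked_relevance):
    """AP for one query: mean of precision@k at each relevant hit."""
    hits, total = 0, 0.0
    for k, rel in enumerate(ranked_relevance, 1):
        if rel:
            hits += 1
            total += hits / k
    return total / hits if hits else 0.0

def mean_average_precision(ranked_lists):
    """mAP over a batch of queries, one binary relevance list per query."""
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)

# Two queries: a perfect ranking, and one relevant item at rank 2.
print(mean_average_precision([[1, 1, 0], [0, 1, 0]]))  # -> 0.75
```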