Adaptive Graph Representation Learning for Video Person Re-Identification
| Published in: | IEEE Transactions on Image Processing, Vol. 29, pp. 8821–8830 |
|---|---|
| Main Authors: | Wu, Yiming; Bourahla, Omar El Farouk; Li, Xi; Wu, Fei; Tian, Qi; Zhou, Xue |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE / The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020 |
| Subjects: | |
| ISSN: | 1057-7149 (print), 1941-0042 (electronic) |
| Online Access: | Get full text |
| Abstract | Recent years have witnessed remarkable progress in applying deep learning models to video person re-identification (Re-ID). A key factor for video person Re-ID is effectively constructing discriminative and robust video feature representations under many complicated situations. Part-based approaches employ spatial and temporal attention to extract representative local features, but previous methods ignore the correlations between parts. To leverage the relations among different parts, we propose an innovative adaptive graph representation learning scheme for video person Re-ID that enables contextual interactions between relevant regional features. Specifically, we exploit the pose alignment connection and the feature affinity connection to construct an adaptive structure-aware adjacency graph, which models the intrinsic relations between graph nodes. We perform feature propagation on the adjacency graph to refine regional features iteratively, so that each part representation takes the information of neighboring nodes into account. To learn compact and discriminative representations, we further propose a novel temporal resolution-aware regularization, which enforces consistency among different temporal resolutions for the same identity. We conduct extensive evaluations on four benchmarks, i.e., iLIDS-VID, PRID2011, MARS, and DukeMTMC-VideoReID; the experimental results demonstrate the competitive performance and effectiveness of our proposed method. Code is available at https://github.com/weleen/AGRL.pytorch . |
|---|---|
| Author | Wu, Yiming; Bourahla, Omar El Farouk; Li, Xi; Wu, Fei; Tian, Qi; Zhou, Xue |
| Author_xml | 1. Wu, Yiming (ymw@zju.edu.cn), College of Computer Science, Zhejiang University, Hangzhou, China; 2. Bourahla, Omar El Farouk (obourahla@zju.edu.cn), College of Computer Science, Zhejiang University, Hangzhou, China; 3. Li, Xi (xilizju@zju.edu.cn, ORCID 0000-0003-3947-4011), College of Computer Science and Technology, Zhejiang University, Hangzhou, China; 4. Wu, Fei (wufei@cs.zju.edu.cn), College of Computer Science, Zhejiang University, Hangzhou, China; 5. Tian, Qi (qitian@cs.utsa.edu), Department of Computer Science, The University of Texas at San Antonio, San Antonio, TX, USA; 6. Zhou, Xue (zhouxue@uestc.edu.cn), School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/32746239 (View this record in MEDLINE/PubMed) |
| CODEN | IIPRE4 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
| DOI | 10.1109/TIP.2020.3001693 |
| Discipline | Applied Sciences Engineering |
| EISSN | 1941-0042 |
| EndPage | 8830 |
| ExternalDocumentID | 32746239 10_1109_TIP_2020_3001693 9119869 |
| Genre | orig-research Journal Article |
| GrantInformation_xml | Key Scientific Technological Innovation Research Project by the Ministry of Education; National Natural Science Foundation of China (Grants 61751209, 6162510, 61972071); Zhejiang Laboratory (Grant 2019KD0AB02); Zhejiang University K. P. Chao's High Technology Development Foundation; Baidu AI Frontier Technology Joint Research Program; Zhejiang Provincial Natural Science Foundation of China (Grant LR19F020004) |
| ISICitedReferencesCount | 120 |
| ISSN | 1057-7149 1941-0042 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0003-3947-4011 |
| PMID | 32746239 |
| PQID | 2444611489 |
| PQPubID | 85429 |
| PageCount | 10 |
| PublicationCentury | 2000 |
| PublicationDate | 2020-01-01 |
| PublicationDateYYYYMMDD | 2020-01-01 |
| PublicationDecade | 2020 |
| PublicationPlace | United States |
| PublicationTitle | IEEE transactions on image processing |
| PublicationTitleAbbrev | TIP |
| PublicationTitleAlternate | IEEE Trans Image Process |
| PublicationYear | 2020 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 8821 |
| SubjectTerms | Adaptation models; Adaptive structures; consistency; Context modeling; Deep learning; Feature extraction; graph neural network; Graph representations; Graphical representations; Machine learning; Nodes; Regularization; Temporal resolution; Three-dimensional displays; Video person re-identification; Visualization |
| Title | Adaptive Graph Representation Learning for Video Person Re-Identification |
| URI | https://ieeexplore.ieee.org/document/9119869 https://www.ncbi.nlm.nih.gov/pubmed/32746239 https://www.proquest.com/docview/2444611489 https://www.proquest.com/docview/2430375802 |
| Volume | 29 |
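The abstract describes building an adjacency graph from feature affinities and iteratively propagating regional features over it. The following NumPy sketch illustrates that general idea only; the function names, the softmax-normalized dot-product affinity, and the blending coefficient `alpha` are illustrative assumptions, not the paper's exact formulation (which also uses a pose alignment connection).

```python
import numpy as np

def affinity_adjacency(features, temperature=1.0):
    """Row-normalized adjacency from pairwise feature affinities.

    Assumed simplification: affinity = softmax of dot products, so each
    row of the returned matrix sums to 1 and weights a node's neighbors.
    """
    sim = (features @ features.T) / temperature    # (N, N) dot-product affinities
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability for exp
    exp = np.exp(sim)
    return exp / exp.sum(axis=1, keepdims=True)    # row-stochastic adjacency

def propagate(features, adjacency, num_iters=2, alpha=0.5):
    """Iteratively refine each regional feature as a blend of itself and
    its affinity-weighted neighbors (GCN-style message passing)."""
    refined = features
    for _ in range(num_iters):
        refined = alpha * refined + (1.0 - alpha) * (adjacency @ refined)
    return refined

# Toy example: 4 regional part features of dimension 3.
rng = np.random.default_rng(0)
parts = rng.normal(size=(4, 3))
A = affinity_adjacency(parts)
refined_parts = propagate(parts, A)
```

After propagation, each part vector mixes in information from the parts it is most similar to, which is the "contextual interaction between relevant regional features" the abstract refers to.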