Improving person re-identification by attribute and identity learning


Published in: Pattern Recognition, Volume 95, pp. 151–161
Main authors: Lin, Yutian, Zheng, Liang, Zheng, Zhedong, Wu, Yu, Hu, Zhilan, Yan, Chenggang, Yang, Yi
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.11.2019
ISSN:0031-3203, 1873-5142
Abstract
•We annotate attribute labels on two large-scale person re-identification datasets.
•We propose APR to improve re-ID by exploiting global and detailed information.
•We introduce a module to leverage the correlation between attributes.
•We speed up re-ID retrieval by ten times with only a 2.92% accuracy drop.
•We achieve competitive re-ID performance with state-of-the-art methods.
Person re-identification (re-ID) and attribute recognition share the common goal of learning pedestrian descriptions; they differ in granularity. Most existing re-ID methods take only the identity labels of pedestrians into consideration. However, we find that attributes, which contain detailed local descriptions, help the re-ID model learn more discriminative feature representations. In this paper, based on the complementarity of attribute labels and ID labels, we propose the attribute-person recognition (APR) network, a multi-task network that learns a re-ID embedding and at the same time predicts pedestrian attributes. We manually annotate attribute labels for two large-scale re-ID datasets and systematically investigate how person re-ID and attribute recognition benefit from each other. In addition, we re-weight the attribute predictions considering the dependencies and correlations among the attributes. Experimental results on two large-scale re-ID benchmarks demonstrate that, by learning a more discriminative representation, APR achieves re-ID performance competitive with state-of-the-art methods. We use APR to speed up the retrieval process by ten times with a minor accuracy drop of 2.92% on Market-1501. We also apply APR to the attribute recognition task and demonstrate improvement over the baselines.
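The multi-task design described in the abstract — a shared embedding feeding an identity classifier and one binary classifier per attribute, with the two losses combined — can be illustrated with a minimal NumPy sketch. The layer sizes, weight names, and the equal-weight sum of the losses below are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's settings).
FEAT_DIM, NUM_IDS, NUM_ATTRS = 8, 5, 3

# A shared embedding feeds two heads: an identity classifier and a
# per-attribute binary classifier (a multi-task setup in the spirit of APR).
W_id = rng.normal(size=(FEAT_DIM, NUM_IDS))
W_attr = rng.normal(size=(FEAT_DIM, NUM_ATTRS))

def softmax(z):
    z = z - z.max()  # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def apr_loss(feat, id_label, attr_labels):
    """Identity cross-entropy plus summed per-attribute binary
    cross-entropy, combined with equal weight (an assumption)."""
    id_probs = softmax(feat @ W_id)
    id_loss = -np.log(id_probs[id_label] + 1e-12)
    attr_probs = sigmoid(feat @ W_attr)
    attr_loss = -np.sum(
        attr_labels * np.log(attr_probs + 1e-12)
        + (1 - attr_labels) * np.log(1 - attr_probs + 1e-12)
    )
    return id_loss + attr_loss

feat = rng.normal(size=FEAT_DIM)  # shared embedding of one image
loss = apr_loss(feat, id_label=2, attr_labels=np.array([1.0, 0.0, 1.0]))
```

At training time, both heads backpropagate into the shared embedding, which is how the attribute supervision can make the re-ID features more discriminative.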
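The ten-times retrieval speed-up mentioned in the abstract comes from using predicted attributes to prune the gallery before the expensive feature-distance ranking. A minimal sketch of that idea follows; the Hamming-style attribute filter, the mismatch threshold, and all sizes are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gallery: one feature vector and one binary attribute vector per image.
N_GALLERY, FEAT_DIM, NUM_ATTRS = 1000, 16, 6
gallery_feats = rng.normal(size=(N_GALLERY, FEAT_DIM))
gallery_attrs = rng.integers(0, 2, size=(N_GALLERY, NUM_ATTRS))

def attribute_filtered_search(query_feat, query_attrs, max_mismatch=1):
    """Keep only gallery images whose attributes disagree with the query
    on at most `max_mismatch` attributes (threshold is an illustrative
    assumption), then rank the survivors by L2 feature distance."""
    mismatch = np.abs(gallery_attrs - query_attrs).sum(axis=1)
    keep = np.flatnonzero(mismatch <= max_mismatch)
    dists = np.linalg.norm(gallery_feats[keep] - query_feat, axis=1)
    return keep[np.argsort(dists)], len(keep)

query_feat = rng.normal(size=FEAT_DIM)
query_attrs = rng.integers(0, 2, size=NUM_ATTRS)
ranking, candidates = attribute_filtered_search(query_feat, query_attrs)
# The distance computation now runs on `candidates` images instead of
# all N_GALLERY, which is where the retrieval speed-up comes from.
```

The trade-off is the small accuracy drop the abstract reports: true matches whose predicted attributes disagree with the query beyond the threshold are pruned before ranking.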
Author Wu, Yu
Hu, Zhilan
Yan, Chenggang
Zheng, Zhedong
Zheng, Liang
Lin, Yutian
Yang, Yi
Author_xml – sequence: 1
  givenname: Yutian
  surname: Lin
  fullname: Lin, Yutian
  organization: Center for Artificial Intelligence, University of Technology Sydney, Australia
– sequence: 2
  givenname: Liang
  orcidid: 0000-0002-1109-3893
  surname: Zheng
  fullname: Zheng, Liang
  organization: Australian National University, Australia
– sequence: 3
  givenname: Zhedong
  surname: Zheng
  fullname: Zheng, Zhedong
  organization: Center for Artificial Intelligence, University of Technology Sydney, Australia
– sequence: 4
  givenname: Yu
  surname: Wu
  fullname: Wu, Yu
  organization: Center for Artificial Intelligence, University of Technology Sydney, Australia
– sequence: 5
  givenname: Zhilan
  surname: Hu
  fullname: Hu, Zhilan
  organization: Center for Artificial Intelligence, University of Technology Sydney, Australia
– sequence: 6
  givenname: Chenggang
  surname: Yan
  fullname: Yan, Chenggang
  organization: Hangzhou Dianzi University, China
– sequence: 7
  givenname: Yi
  surname: Yang
  fullname: Yang, Yi
  email: yi.yang@uts.edu.au
  organization: Center for Artificial Intelligence, University of Technology Sydney, Australia
ContentType Journal Article
Copyright 2019 Elsevier Ltd
Copyright_xml – notice: 2019 Elsevier Ltd
DBID AAYXX
CITATION
DOI 10.1016/j.patcog.2019.06.006
DatabaseName CrossRef
DatabaseTitle CrossRef
DatabaseTitleList
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 1873-5142
EndPage 161
ExternalDocumentID 10_1016_j_patcog_2019_06_006
S0031320319302377
ISICitedReferencesCount 564
ISSN 0031-3203
IngestDate Tue Nov 18 22:29:38 EST 2025
Sat Nov 29 07:26:22 EST 2025
Fri Feb 23 02:25:25 EST 2024
IsPeerReviewed true
IsScholarly true
Keywords Attribute recognition
Person re-identification
Language English
LinkModel OpenURL
ORCID 0000-0002-1109-3893
PageCount 11
ParticipantIDs crossref_citationtrail_10_1016_j_patcog_2019_06_006
crossref_primary_10_1016_j_patcog_2019_06_006
elsevier_sciencedirect_doi_10_1016_j_patcog_2019_06_006
PublicationCentury 2000
PublicationDate November 2019
2019-11-00
PublicationDateYYYYMMDD 2019-11-01
PublicationDate_xml – month: 11
  year: 2019
  text: November 2019
PublicationDecade 2010
PublicationTitle Pattern recognition
PublicationYear 2019
Publisher Elsevier Ltd
Publisher_xml – name: Elsevier Ltd
References Zheng, Zheng, Yang (bib0020) 2017; 14
Layne, Hospedales, Gong, Mary (bib0039) 2012; 2
Su, Zhang, Xing, Gao, Tian (bib0014) 2018; 75
Matsukawa, Suzuki (bib0016) 2016
Xu, Zhao, Zhu, Wang, Ouyang (bib0032) 2018
Sudowe, Spitzer, Leibe (bib0056) 2015
He, Zhang, Ren, Sun (bib0021) 2016
Fan, Zheng, Yan, Yang (bib0022) 2018; 14
Franco, Oliveira (bib0013) 2017; 61
He, Liang, Li, Sun (bib0054) 2018
Deng, Luo, Loy, Tang (bib0006) 2014
D. Li, Z. Zhang, X. Chen, H. Ling, K. Huang, A richly annotated dataset for pedestrian attribute recognition, (2016) arXiv
Zhao, Li, Zhuang, Wang (bib0051) 2017
Zhu, Wu, Huang, Zheng (bib0001) 2018; 27
Zhu, Xu, Yang, Hauptmann (bib0026) 2017; 124
Ahmed, Jones, Marks (bib0030) 2015
Wei, Zhang, Gao, Tian (bib0034) 2018
Wu, Lin, Dong, Yan, Bian, Yang (bib0037) 2019
Martinel, Dunnhofer, Foresti, Micheloni (bib0036) 2017
Liao, Hu, Zhu, Li (bib0055) 2015
Chen, Guo, Lai (bib0018) 2016; 25
Wu, Lin, Dong, Yan, Ouyang, Yang (bib0019) 2018
Zhou, Wang, Meng, Xin, Li, Gong, Zheng (bib0025) 2018; 76
.
Ristani, Solera, Zou, Cucchiara, Tomasi (bib0044) 2016
Zheng, Zheng, Yang (bib0035) 2017
Li, Zhao, Xiao, Wang (bib0028) 2014
Sun, Zheng, Deng, Wang (bib0052) 2017
Russakovsky, Deng, Su, Krause, Satheesh, Ma, Huang, Karpathy, Khosla, Bernstein (bib0046) 2015; 115
Wu, Wang, Gao, Li (bib0031) 2018; 73
Ustinova, Ganin, Lempitsky (bib0047) 2017
Jose, Fleuret (bib0048) 2016
Ma, Yang, Tao (bib0027) 2014; 23
Li, Chen, Zhang, Huang (bib0050) 2017
Lin, Dong, Zheng, Yan, Yang (bib0038) 2019
Peng, Tian, Xiang, Wang, Huang (bib0041) 2016
Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, Bengio (bib0033) 2014
Ding, Lin, Wang, Chao (bib0029) 2015; 48
Tian, Yi, Li, Li, Zhang, Shi, Yan, Wang (bib0053) 2018
Schumann, Stiefelhagen (bib0015) 2017
Wu, Shen, van den Hengel (bib0003) 2017; 65
Liu, Song, Zhao, Tao, Chen, Bu (bib0040) 2012; 45
Khamis, Kuo, Singh, Shet, Davis (bib0010) 2014
Zhu, Liao, Lei, Li (bib0005) 2017; 58
Su, Zhang, Yang, Zhang, Tian, Gao, Davis (bib0011) 2017; 66
Su, Yang, Zhang, Tian, Davis, Gao (bib0012) 2018; 40
Wang, Zhu, Gong, Li (bib0043) 2018
Xiao, Li, Ouyang, Wang (bib0024) 2016
Jia, Shelhamer, Donahue, Karayev, Long, Girshick, Guadarrama, Darrell (bib0045) 2014
Liu, Feng, Qi, Jiang, Yan (bib0002) 2017; 26
Zheng, Shen, Tian, Wang, Wang, Tian (bib0017) 2015
Ren, Lu, Feng, Zhou (bib0004) 2017; 72
Abdulnabi, Wang, Lu, Jia (bib0007) 2015; 17
Chen, Yuan, Chen, Zheng (bib0049) 2016
Layne, Hospedales, Gong (bib0009) 2014
Yin, Zheng, Wu, Yu, Wan, Guo, Huang, Lai (bib0042) 2018
Varior, Haloi, Wang (bib0023) 2016
Su (10.1016/j.patcog.2019.06.006_bib0014) 2018; 75
Ustinova (10.1016/j.patcog.2019.06.006_bib0047) 2017
Matsukawa (10.1016/j.patcog.2019.06.006_bib0016) 2016
Chen (10.1016/j.patcog.2019.06.006_bib0018) 2016; 25
Sun (10.1016/j.patcog.2019.06.006_bib0052) 2017
Zhou (10.1016/j.patcog.2019.06.006_bib0025) 2018; 76
Zheng (10.1016/j.patcog.2019.06.006_bib0020) 2017; 14
Goodfellow (10.1016/j.patcog.2019.06.006_bib0033) 2014
Franco (10.1016/j.patcog.2019.06.006_bib0013) 2017; 61
Zhu (10.1016/j.patcog.2019.06.006_bib0005) 2017; 58
Jose (10.1016/j.patcog.2019.06.006_bib0048) 2016
Ding (10.1016/j.patcog.2019.06.006_bib0029) 2015; 48
Xiao (10.1016/j.patcog.2019.06.006_bib0024) 2016
Wang (10.1016/j.patcog.2019.06.006_bib0043) 2018
Wu (10.1016/j.patcog.2019.06.006_bib0031) 2018; 73
Liao (10.1016/j.patcog.2019.06.006_bib0055) 2015
Su (10.1016/j.patcog.2019.06.006_bib0012) 2018; 40
Ren (10.1016/j.patcog.2019.06.006_bib0004) 2017; 72
Deng (10.1016/j.patcog.2019.06.006_bib0006) 2014
Russakovsky (10.1016/j.patcog.2019.06.006_bib0046) 2015; 115
Sudowe (10.1016/j.patcog.2019.06.006_bib0056) 2015
Liu (10.1016/j.patcog.2019.06.006_bib0040) 2012; 45
Zhao (10.1016/j.patcog.2019.06.006_bib0051) 2017
Li (10.1016/j.patcog.2019.06.006_bib0050) 2017
Zheng (10.1016/j.patcog.2019.06.006_bib0035) 2017
Xu (10.1016/j.patcog.2019.06.006_bib0032) 2018
Li (10.1016/j.patcog.2019.06.006_bib0028) 2014
Schumann (10.1016/j.patcog.2019.06.006_bib0015) 2017
Wu (10.1016/j.patcog.2019.06.006_bib0019) 2018
Wei (10.1016/j.patcog.2019.06.006_bib0034) 2018
Tian (10.1016/j.patcog.2019.06.006_bib0053) 2018
Khamis (10.1016/j.patcog.2019.06.006_bib0010) 2014
Chen (10.1016/j.patcog.2019.06.006_bib0049) 2016
Varior (10.1016/j.patcog.2019.06.006_bib0023) 2016
Martinel (10.1016/j.patcog.2019.06.006_bib0036) 2017
Wu (10.1016/j.patcog.2019.06.006_sbref0036) 2019
Abdulnabi (10.1016/j.patcog.2019.06.006_bib0007) 2015; 17
10.1016/j.patcog.2019.06.006_bib0008
Layne (10.1016/j.patcog.2019.06.006_bib0009) 2014
Yin (10.1016/j.patcog.2019.06.006_bib0042) 2018
Ma (10.1016/j.patcog.2019.06.006_bib0027) 2014; 23
Zhu (10.1016/j.patcog.2019.06.006_bib0001) 2018; 27
Wu (10.1016/j.patcog.2019.06.006_bib0003) 2017; 65
Peng (10.1016/j.patcog.2019.06.006_bib0041) 2016
Jia (10.1016/j.patcog.2019.06.006_bib0045) 2014
Ahmed (10.1016/j.patcog.2019.06.006_bib0030) 2015
Layne (10.1016/j.patcog.2019.06.006_bib0039) 2012; 2
Lin (10.1016/j.patcog.2019.06.006_bib0038) 2019
He (10.1016/j.patcog.2019.06.006_bib0021) 2016
Zheng (10.1016/j.patcog.2019.06.006_bib0017) 2015
Liu (10.1016/j.patcog.2019.06.006_bib0002) 2017; 26
Fan (10.1016/j.patcog.2019.06.006_bib0022) 2018; 14
Ristani (10.1016/j.patcog.2019.06.006_bib0044) 2016
Su (10.1016/j.patcog.2019.06.006_bib0011) 2017; 66
He (10.1016/j.patcog.2019.06.006_bib0054) 2018
Zhu (10.1016/j.patcog.2019.06.006_bib0026) 2017; 124
References_xml – year: 2019
  ident: bib0037
  article-title: Progressive learning for person re-identification with one example
  publication-title: IEEE Trans. Image Process.
– volume: 124
  start-page: 409
  year: 2017
  end-page: 421
  ident: bib0026
  article-title: Uncovering the temporal context for video question answering
  publication-title: Int. J. Comput. Vis.
– volume: 2
  start-page: 8
  year: 2012
  ident: bib0039
  article-title: Person re-identification by attributes
  publication-title: The British Machine Vision Conference
– start-page: 2428
  year: 2016
  end-page: 2433
  ident: bib0016
  article-title: Person re-identification using CNN features learned from combination of attributes
  publication-title: The IEEE International Conference on Pattern Recognition
– start-page: 789
  year: 2014
  end-page: 792
  ident: bib0006
  article-title: Pedestrian attribute recognition at far distance
  publication-title: Proceedings of the ACM international conference on Multimedia
– year: 2018
  ident: bib0053
  article-title: Eliminating background-bias for robust person re-identification
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– volume: 14
  start-page: 13
  year: 2017
  ident: bib0020
  article-title: A discriminatively learned CNN embedding for person reidentification
  publication-title: ACM Trans. Multim. Comput.Commun. Appl.
– start-page: 1116
  year: 2015
  end-page: 1124
  ident: bib0017
  article-title: Scalable person re-identification: a benchmark
  publication-title: The IEEE International Conference on Computer Vision
– year: 2018
  ident: bib0054
  article-title: Deep spatial feature reconstruction for partial person re-identification: alignment-free approach
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– volume: 65
  start-page: 238
  year: 2017
  end-page: 250
  ident: bib0003
  article-title: Deep linear discriminant analysis on fisher networks: a hybrid architecture for person re-identification
  publication-title: Pattern Recognit.
– start-page: 7398
  year: 2017
  end-page: 7407
  ident: bib0050
  article-title: Learning deep context-aware features over body and latent parts for person re-identification
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– volume: 75
  start-page: 77
  year: 2018
  end-page: 89
  ident: bib0014
  article-title: Multi-type attributes driven multi-camera person re-identification
  publication-title: Pattern Recognit.
– volume: 58
  start-page: 224
  year: 2017
  end-page: 229
  ident: bib0005
  article-title: Multi-label convolutional neural network based pedestrian attribute classification
  publication-title: Image Vis. Comput.
– start-page: 1435
  year: 2017
  end-page: 1443
  ident: bib0015
  article-title: Person re-identification by deep learning attribute-complementary information
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition Workshops
– start-page: 791
  year: 2016
  end-page: 808
  ident: bib0023
  article-title: Gated siamese convolutional neural network architecture for human re-identification
  publication-title: European Conference on Computer Vision
– start-page: 770
  year: 2016
  end-page: 778
  ident: bib0021
  article-title: Deep residual learning for image recognition
  publication-title: Conference on Computer Vision and Pattern Recognition
– year: 2014
  ident: bib0009
  article-title: Re-id: hunting attributes in the wild
  publication-title: The British Machine Vision Conference
– volume: 14
  year: 2018
  ident: bib0022
  article-title: Unsupervised person re-identification: clustering and fine-tuning
  publication-title: ACM Trans. Multim. Comput. Commun. Appl.
– volume: 25
  start-page: 2353
  year: 2016
  end-page: 2367
  ident: bib0018
  article-title: Deep ranking for person re-identification via joint representation learning
  publication-title: IEEE Trans. Image Process.
– start-page: 2672
  year: 2014
  end-page: 2680
  ident: bib0033
  article-title: Generative adversarial nets
  publication-title: Advances in Neural Information Processing Systems
– start-page: 3239
  year: 2017
  end-page: 3248
  ident: bib0051
  article-title: Deeply-learned part-aligned representations for person re-identification
  publication-title: The IEEE International Conference on Computer Vision
– reference: D. Li, Z. Zhang, X. Chen, H. Ling, K. Huang, A richly annotated dataset for pedestrian attribute recognition, (2016) arXiv:
– start-page: 3908
  year: 2015
  end-page: 3916
  ident: bib0030
  article-title: An improved deep learning architecture for person re-identification
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 3774
  year: 2017
  end-page: 3782
  ident: bib0035
  article-title: Unlabeled samples generated by GAN improve the person re-identification baseline in vitro
  publication-title: The IEEE International Conference on Computer Vision
– start-page: 1
  year: 2017
  end-page: 6
  ident: bib0047
  article-title: Multi-region bilinear convolutional neural networks for person re-identification
  publication-title: The IEEE International Conference on Advanced Video and Signal Based Surveillance
– start-page: 3820
  year: 2017
  end-page: 3828
  ident: bib0052
  article-title: SVDNet for pedestrian retrieval
  publication-title: The IEEE International Conference on Computer Vision
– volume: 27
  start-page: 2286
  year: 2018
  end-page: 2300
  ident: bib0001
  article-title: Fast open-world person re-identification
  publication-title: IEEE Trans. Image Process.
– volume: 76
  start-page: 739
  year: 2018
  end-page: 751
  ident: bib0025
  article-title: Deep self-paced learning for person re-identification
  publication-title: Pattern Recognit.
– volume: 40
  start-page: 1167
  year: 2018
  end-page: 1181
  ident: bib0012
  article-title: Multi-task learning with low rank attribute embedding for multi-camera person re-identification
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– start-page: 151
  year: 2017
  end-page: 156
  ident: bib0036
  article-title: Person re-identification via unsupervised transfer of learned visual representations
  publication-title: Proceedings of the 11th International Conference on Distributed Smart Cameras
– year: 2018
  ident: bib0032
  article-title: Attention-aware compositional network for person re-identification
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 875
  year: 2016
  end-page: 890
  ident: bib0048
  article-title: Scalable metric learning via weighted approximate rank component analysis
  publication-title: European Conference on Computer Vision
– year: 2018
  ident: bib0034
  article-title: Person transfer GAN to bridge domain gap for person re-identification
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– volume: 45
  start-page: 4204
  year: 2012
  end-page: 4213
  ident: bib0040
  article-title: Attribute-restricted latent topic model for person re-identification
  publication-title: Pattern Recognit.
– start-page: 134
  year: 2014
  end-page: 146
  ident: bib0010
  article-title: Joint learning for attribute-consistent person re-identification
  publication-title: European Conference on Computer Vision
– start-page: 1268
  year: 2016
  end-page: 1277
  ident: bib0049
  article-title: Similarity learning with spatial constraints for person re-identification
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 1100
  year: 2018
  end-page: 1106
  ident: bib0042
  article-title: Adversarial attribute-image person re-identification
  publication-title: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
– start-page: 152
  year: 2014
  end-page: 159
  ident: bib0028
  article-title: DeepReID: deep filter pairing neural network for person re-identification
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 17
  year: 2016
  end-page: 35
  ident: bib0044
  article-title: Performance measures and a data set for multi-target, multi-camera tracking
  publication-title: European Conference on Computer Vision
– start-page: 1249
  year: 2016
  end-page: 1258
  ident: bib0024
  article-title: Learning deep feature representations with domain guided dropout for person re-identification
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– volume: 72
  start-page: 446
  year: 2017
  end-page: 457
  ident: bib0004
  article-title: Multi-modal uniform deep learning for RGB-D person re-identification
  publication-title: Pattern Recognit.
– volume: 48
  start-page: 2993
  year: 2015
  end-page: 3003
  ident: bib0029
  article-title: Deep feature learning with relative distance comparison for person re-identification
  publication-title: Pattern Recognit.
– year: 2018
  ident: bib0019
  article-title: Exploit the unknown gradually: one-shot video-based person re-identification by stepwise learning
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– volume: 115
  start-page: 211
  year: 2015
  end-page: 252
  ident: bib0046
  article-title: Imagenet large scale visual recognition challenge
  publication-title: Int. J. Comput. Vis.
– start-page: 87
  year: 2015
  end-page: 95
  ident: bib0056
  article-title: Person attribute recognition with a jointly-trained holistic CNN model
  publication-title: The IEEE International Conference on Computer Vision Workshops
– volume: 66
  start-page: 4
  year: 2017
  end-page: 15
  ident: bib0011
  article-title: Attributes driven tracklet-to-tracklet person re-identification using latent prototypes space mapping
  publication-title: Pattern Recognit.
– volume: 23
  start-page: 3656
  year: 2014
  end-page: 3670
  ident: bib0027
  article-title: Person re-identification over camera networks using multi-task distance metric learning
  publication-title: IEEE Trans. Image Process.
– volume: 61
  start-page: 593
  year: 2017
  end-page: 609
  ident: bib0013
  article-title: Convolutional covariance features: conception, integration and performance in person re-identification
  publication-title: Pattern Recognit.
– volume: 17
  start-page: 1949
  year: 2015
  end-page: 1959
  ident: bib0007
  article-title: Multi-task CNN model for attribute prediction
  publication-title: IEEE Trans. Multim.
– volume: 73
  start-page: 275
  year: 2018
  end-page: 288
  ident: bib0031
  article-title: Deep adaptive feature embedding with local sample distributions for person re-identification
  publication-title: Pattern Recognit.
– year: 2019
  ident: bib0038
  article-title: A bottom-up clustering approach to unsupervised person re-identification
  publication-title: AAAI Conference on Artificial Intelligence
– volume: 26
  start-page: 3492
  year: 2017
  end-page: 3506
  ident: bib0002
  article-title: End-to-end comparative attention networks for person re-identification
  publication-title: IEEE Trans. Image Process.
– start-page: 2197
  year: 2015
  end-page: 2206
  ident: bib0055
  article-title: Person re-identification by local maximal occurrence representation and metric learning
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 675
  year: 2014
  end-page: 678
  ident: bib0045
  article-title: Caffe: convolutional architecture for fast feature embedding
  publication-title: Proceedings of the 22nd ACM International Conference on Multimedia
– year: 2018
  ident: bib0043
  article-title: Transferable joint attribute-identity deep learning for unsupervised person re-identification
  publication-title: The IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 336
  year: 2016
  end-page: 353
  ident: bib0041
  article-title: Joint learning of semantic and latent attributes
  publication-title: European Conference on Computer Vision
– year: 2019
  ident: 10.1016/j.patcog.2019.06.006_sbref0036
  article-title: Progressive learning for person re-identification with one example
  publication-title: IEEE Trans. Image Process.
– ident: 10.1016/j.patcog.2019.06.006_bib0008
– volume: 124
  start-page: 409
  issue: 3
  year: 2017
  ident: 10.1016/j.patcog.2019.06.006_bib0026
  article-title: Uncovering the temporal context for video question answering
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-017-1033-7
StartPage 151
SubjectTerms Attribute recognition
Person re-identification
Title Improving person re-identification by attribute and identity learning
URI https://dx.doi.org/10.1016/j.patcog.2019.06.006
Volume 95