FairMOT: On the Fairness of Detection and Re-identification in Multiple Object Tracking
| Published in: | International Journal of Computer Vision, Vol. 129, No. 11, pp. 3069–3087 |
|---|---|
| Main authors: | Yifu Zhang, Chunyu Wang, Xinggang Wang, Wenjun Zeng, Wenyu Liu |
| Medium: | Journal Article |
| Language: | English |
| Published: | New York: Springer US, 01.11.2021 (Springer; Springer Nature B.V.) |
| ISSN: | 0920-5691, 1573-1405 |
| Abstract | Multi-object tracking (MOT) is an important problem in computer vision with a wide range of applications. Formulating MOT as multi-task learning of object detection and re-ID in a single network is appealing since it allows joint optimization of the two tasks and enjoys high computational efficiency. However, we find that the two tasks tend to compete with each other, which needs to be carefully addressed. In particular, previous works usually treat re-ID as a secondary task whose accuracy is heavily affected by the primary detection task. As a result, the network is biased towards the primary detection task, which is not fair to the re-ID task. To solve the problem, we present a simple yet effective approach termed FairMOT, based on the anchor-free object detection architecture CenterNet. Note that it is not a naive combination of CenterNet and re-ID. Instead, we present a set of detailed designs, identified through thorough empirical studies, that are critical for achieving good tracking results. The resulting approach achieves high accuracy for both detection and tracking and outperforms the state-of-the-art methods by a large margin on several public datasets. The source code and pre-trained models are released at https://github.com/ifzhang/FairMOT. |
|---|---|
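The "one-shot" formulation the abstract describes can be made concrete with a small sketch: a single shared backbone feeds parallel anchor-free detection heads (center heatmap, box size, center offset) and a per-pixel re-ID embedding head, so detections and identity features come from one forward pass. The module names, head widths, stand-in backbone, and 128-dimensional embedding below are illustrative assumptions, not the authors' exact implementation; the released FairMOT code (linked above) builds on a DLA-34 encoder-decoder backbone and adds task-specific training losses not shown here.

```python
# Minimal sketch of a one-shot detection + re-ID network (hypothetical,
# loosely modeled on the CenterNet-style design the abstract refers to).
import torch
import torch.nn as nn


def head(in_ch: int, out_ch: int) -> nn.Sequential:
    """Lightweight CenterNet-style head: 3x3 conv + ReLU + 1x1 conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 256, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(256, out_ch, kernel_size=1),
    )


class OneShotTracker(nn.Module):
    """Joint detection and re-ID heads on a single shared feature map."""

    def __init__(self, num_classes: int = 1, emb_dim: int = 128):
        super().__init__()
        # Stand-in backbone producing a stride-4 feature map; FairMOT itself
        # uses an encoder-decoder (DLA-34) to obtain high-resolution features.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Anchor-free detection heads: object-center heatmap, box size, offset.
        self.heatmap = head(64, num_classes)
        self.size = head(64, 2)
        self.offset = head(64, 2)
        # Re-ID head: an embedding per feature-map location, read out at each
        # detected object center and used for data association across frames.
        self.embedding = head(64, emb_dim)

    def forward(self, x: torch.Tensor) -> dict:
        f = self.backbone(x)
        return {
            "heatmap": torch.sigmoid(self.heatmap(f)),
            "size": self.size(f),
            "offset": self.offset(f),
            "embedding": nn.functional.normalize(self.embedding(f), dim=1),
        }


if __name__ == "__main__":
    model = OneShotTracker()
    out = model(torch.randn(1, 3, 608, 1088))  # a typical MOT input resolution
    print({k: tuple(v.shape) for k, v in out.items()})
```

Treating the re-ID branch as an equal, parallel head on the shared features, rather than a secondary stage stacked on top of the detector, is the "fairness" idea the title alludes to.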
| Audience | Academic |
| Author | Zhang, Yifu; Wang, Chunyu; Wang, Xinggang; Zeng, Wenjun; Liu, Wenyu |
| Author details | 1. Yifu Zhang (Huazhong University of Science and Technology); 2. Chunyu Wang (Microsoft Research Asia); 3. Xinggang Wang (Huazhong University of Science and Technology; xgwang@hust.edu.cn; ORCID 0000-0001-6732-7823); 4. Wenjun Zeng (Microsoft Research Asia); 5. Wenyu Liu (Huazhong University of Science and Technology) |
| ContentType | Journal Article |
| Copyright | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021 |
| DOI | 10.1007/s11263-021-01513-4 |
| Discipline | Applied Sciences; Computer Science |
| EISSN | 1573-1405 |
| EndPage | 3087 |
| ISICitedReferencesCount | 1039 |
| ISSN | 0920-5691 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 11 |
| Keywords | FairMOT; Multi-object tracking; One-shot; Anchor-free; Real-time inference |
| Language | English |
| ORCID | 0000-0001-6732-7823 |
| PageCount | 19 |
| PublicationCentury | 2000 |
| PublicationDate | 2021-11-01 |
| PublicationDecade | 2020 |
| PublicationPlace | New York |
| PublicationTitle | International journal of computer vision |
| PublicationTitleAbbrev | Int J Comput Vis |
| PublicationYear | 2021 |
| Publisher | Springer US; Springer; Springer Nature B.V. |
In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 727–735). – reference: Lin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017a). Feature pyramid networks for object detection. In CVPR (pp. 2117–2125). – reference: Wang, Z., Zheng, L., Liu, Y., Li, Y., & Wang, S. (2020b). Towards real-time multi-object tracking. In Computer vision–ECCV 2020: 16th European conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XI 16 (pp. 107–122). Springer. – reference: Valmadre, J., Bewley, A., Huang, J., Sun, C., Sminchisescu, C., & Schmid, C. (2021). Local metrics for multi-object tracking. arXiv preprint arXiv:2104.02631. – reference: Chen, L., Ai, H., Shang, C., Zhuang, Z., & Bai, B. (2017). Online multi-object tracking with convolutional neural networks. In 2017 IEEE international conference on image processing (ICIP) (pp. 645–649). IEEE. – reference: Wen, L., Li, W., Yan, J., Lei, Z., Yi, D., & Li, S. Z. (2014). Multiple target tracking based on undirected hierarchical relation hypergraph. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1282–1289). – reference: Leal-Taixé, L., Milan, A., Reid, I., Roth, S., & Schindler, K. (2015). Motchallenge 2015: Towards a benchmark for multi-target tracking. arXiv preprint arXiv:1504.01942. – reference: Dollár, P., Wojek, C., Schiele, B., & Perona, P. (2009). Pedestrian detection: A benchmark. In CVPR (pp. 304–311). IEEE. – reference: Bolme, D. S., Beveridge, J. R., Draper, B. A., & Lui, Y. M. (2010). Visual object tracking using adaptive correlation filters. In CVPR (pp. 2544–2550). IEEE. – reference: Han, S., Huang, P., Wang, H., Yu, E., Liu, D., Pan, X., & Zhao, J. (2020) Mat: Motion-aware multi-object tracking. arXiv preprint arXiv:2009.04794 – reference: Yu, F., Wang, D., Shelhamer, E., & Darrel, l. T. (2018). Deep layer aggregation. In CVPR (pp. 2403–2412). – reference: Yang, Z., Liu, S., Hu, H., Wang, L., & Lin, S. (2019). Reppoints: Point set representation for object detection. In ICCV (pp. 9657–9666). – reference: Wojke, N., Bewley, A., & Paulus, D. (2017). Simple online and realtime tracking with a deep association metric. In 2017 IEEE international conference on image processing (ICIP) (pp. 3645–3649). IEEE. – reference: Zamir, A. R., Dehghan, A., & Shah, M. (2012). Gmcp-tracker: Global multi-object tracking using generalized minimum clique graphs. In European conference on computer vision (pp. 343–356). Springer. – reference: Guo, M., Haque, A., Huang, D. A., Yeung, S., & Fei-Fei, L. (2018). Dynamic task prioritization for multitask learning. In Proceedings of the European conference on computer vision (ECCV) (pp. 270–287). – reference: Zhou, X., Wang, D., & Krähenbühl, P. (2019a). Objects as points. arXiv preprint arXiv:1904.07850. – reference: Chu, P., & Ling, H. (2019). Famnet: Joint learning of feature, affinity and multi-dimensional assignment for online multiple object tracking. In ICCV (pp. 6172–6181). – reference: BernardinKStiefelhagenREvaluating multiple object tracking performance: The clear mot metricsEURASIP Journal on Image and Video Processing2008200811010.1155/2008/246309 – reference: Xiang, Y., Alahi, A., & Savarese, S. (2015). Learning to track: Online multi-object tracking by decision making. In ICCV (pp. 4705–4713). – reference: Pang, B., Li, Y., Zhang, Y., Li, M., & Lu, C. (2020). Tubetk: Adopting tubes to track multi-object in a one-step training model. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 6308–6318). – reference: Law, H., & Deng, J. (2018). Cornernet: Detecting objects as paired keypoints. In ECCV (pp. 734–750). – reference: Kang, K., Ouyang, W., Li, H., & Wang, X. (2016). Object detection from video tubelets with convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 817–825). – reference: MilanARothSSchindlerKContinuous energy minimization for multitarget trackingIEEE Transactions on Pattern Analysis and Machine Intelligence2013361587210.1109/TPAMI.2013.103 – reference: Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., et al. (2020). Deep high–resolution representation learning for visual recognition. In IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2020.2983686. – reference: Bae, S. H., & Yoon, K. J. (2014). Robust online multi-object tracking based on tracklet confidence and online discriminative appearance learning. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1218–1225). – reference: Xiao, T., Li, S., Wang, B., Lin, L., & Wang, X. (2017). Joint detection and identification feature learning for person search. In CVPR (pp. 3415–3424). – reference: Dong, Z., Li, G., Liao, Y., Wang, F., Ren, P., & Qian, C. (2020). Centripetalnet: Pursuing high-quality keypoint pairs for object detection. In CVPR (pp. 10519–10528). – reference: BaeSHYoonKJConfidence-based data association and discriminative deep appearance learning for robust online multi-object trackingIEEE Transactions on Pattern Analysis and Machine Intelligence201740359561010.1109/TPAMI.2017.2691769 – reference: Peng, J., Wang, C., Wan, F., Wu, Y., Wang, Y., Tai, Y., Wang, C., Li, J., Huang, F., & Fu, Y. (2020). Chained-tracker: Chaining paired attentive regression results for end-to-end joint multiple-object detection and tracking. In European conference on computer vision (pp. 145–161). Springer. – reference: Vandenhende, S., Georgoulis, S., Van Gansbeke, W., Proesmans, M., Dai, D. & Van Gool, L. (2021). Multi–Task learning for dense prediction tasks: A survey. In IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2021.3054719. – reference: Hornakova, A., Henschel, R., Rosenhahn, B., & Swoboda, P. (2020). Lifted disjoint paths with application in multiple object tracking. In International conference on machine learning, PMLR (pp. 4364–4375). – reference: Feichtenhofer, C., Pinz, A., & Zisserman, A. (2017). Detect to track and track to detect. In Proceedings of the IEEE international conference on computer vision (pp. 3038–3046). – reference: Pang, J., Qiu, L., Li, X., Chen, H., Li, Q., Darrell, T., & Yu, F. (2021). Quasi-dense similarity learning for multiple object tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 164–173). – reference: Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2017b). Focal loss for dense object detection. In ICCV (pp. 2980–2988). – reference: Yu, F., Li, W., Li, Q., Liu, Y., Shi, X., & Yan, J. (2016). Poi: Multiple object tracking with high performance detection and appearance feature. In ECCV (pp. 36–42). Springer. – reference: Zhou, X., Zhuo, J., & Krahenbuhl, P. (2019b). Bottom-up object detection by grouping extreme and center points. In CVPR (pp. 850–859). – reference: Choi, W. (2015). 
Near-online multi-target tracking with aggregated local flow descriptor. In Proceedings of the IEEE international conference on computer vision (pp. 3029–3037). – reference: RussakovskyODengJSuHKrauseJSatheeshSMaSHuangZKarpathyAKhoslaABernsteinMImagenet large scale visual recognition challengeInternational Journal of Computer Vision20151153211252342248210.1007/s11263-015-0816-y – reference: Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft coco: Common objects in context. In ECCV (pp. 740–755). Springer. – reference: Cheng, B., Xiao, B., Wang, J., Shi, H., Huang, T. S., & Zhang, L. (2020). Higherhrnet: Scale-aware representation learning for bottom-up human pose estimation. In CVPR. – reference: Luo, H., Xie, W., Wang, X., & Zeng, W. (2019b). Detect or track: Towards cost-effective video object detection/tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 8803–8810. – reference: Zhang, S., Benenson, R., & Schiele, B. (2017). Citypersons: A diverse dataset for pedestrian detection. In CVPR (pp. 3213–3221). – reference: KalmanREA new approach to linear filtering and prediction problemsJournal of Fluids Engineering196082135453931993 – reference: Chen, Z., Badrinarayanan, V., Lee, C. Y., & Rabinovich, A. (2018b). Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In ICML, PMLR (pp. 794–803). – reference: Bewley, A., Ge, Z., Ott, L., Ramos, F., & Upcroft, B. (2016). Simple online and realtime tracking. In ICIP (pp. 3464–3468). IEEE. – reference: Sener, O., & Koltun, V. (2018). Multi-task learning as multi-objective optimization. In NIPS (pp. 527–538). – reference: Chen, L., Ai, H., Zhuang, Z., & Shang, C. (2018a). Real-time multiple people tracking with deeply learned candidate selection and person re-identification. In 2018 IEEE international conference on multimedia and expo (ICME) (pp. 1–6). IEEE. – reference: He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In CVPR (pp. 770–778). – reference: Fang, K., Xiang, Y., Li, X., & Savarese, S. (2018). Recurrent autoregressive networks for online multi-object tracking. In WACV (pp. 466–475). IEEE. – reference: Shan, C., Wei, C., Deng, B., Huang, J., Hua, X. S., Cheng, X., & Liang, K. (2020). Fgagt: Flow-guided adaptive graph tracking. arXiv preprint arXiv:2010.09015. – reference: Xu, J., Cao, Y., Zhang, Z., & Hu, H. (2019). Spatial–temporal relation networks for multi-object tracking. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 3988–3998). – reference: Cai, Z., & Vasconcelos, N. (2018). Cascade r-cnn: Delving into high quality object detection. In CVPR (pp. 6154–6162). – reference: Milan, A., Leal-Taixé, L., Reid, I., Roth, S., & Schindler, K. (2016) Mot16: A benchmark for multi-object tracking. arXiv preprint arXiv:1603.00831. – reference: Chao, P., Kao, C. Y., Ruan, Y. S., Huang, C. H., & Lin, Y. L. (2019). Hardnet: A low memory traffic network. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 3552–3561). – reference: Liang, C., Zhang, Z., Lu, Y., Zhou, X., Li, B., Ye, X., & Zou, J. (2020). Rethinking the competition between detection and reid in multi-object tracking. arXiv preprint arXiv:2010.12138. – reference: Yang, F., Choi, W., & Lin, Y. (2016). Exploit all the layers: Fast and accurate cnn object detector with scale dependent pooling and cascaded rejection classifiers. 
In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2129–2137). – reference: Kendall, A., Gal, Y., & Cipolla, R. (2018). Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR (pp. 7482–7491). – reference: KuhnHWThe Hungarian method for the assignment problemNaval Research Logistics Quarterly195521–283977551010.1002/nav.3800020109 – reference: Chu, P., Fan, H., Tan, C. C., & Ling, H. (2019). Online multi-object tracking with instance-aware tracker and dynamic model refreshment. In 2019 IEEE winter conference on applications of computer vision (WACV) (pp. 161–170). IEEE. – reference: Tang, S., Andriluka, M., Andres, B., & Schiele, B. (2017). Multiple people tracking by lifted multicut and person re-identification. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3539–3548). – reference: Brasó, G., & Leal-Taixé, L. (2020). Learning a neural solver for multiple object tracking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 6247–6257). – reference: Zhou, X., Koltun, V., & Krähenbühl, P. (2020). Tracking objects as points. In European conference on computer vision (pp. 474–490). Springer. – ident: 1513_CR23 doi: 10.1109/CVPR.2008.4587581 – ident: 1513_CR79 doi: 10.1007/978-3-030-58621-8_7 – ident: 1513_CR95 doi: 10.1007/978-3-030-58548-8_28 – volume: 78 start-page: 7077 issue: 6 year: 2019 ident: 1513_CR53 publication-title: Multimedia Tools and Applications doi: 10.1007/s11042-018-6467-6 – ident: 1513_CR9 doi: 10.1109/CVPR42600.2020.00628 – ident: 1513_CR31 doi: 10.1109/CVPR.2016.90 – ident: 1513_CR47 doi: 10.1109/ICCV.2017.324 – ident: 1513_CR76 doi: 10.1109/CVPR.2019.00813 – ident: 1513_CR49 doi: 10.1109/CVPR.2019.00197 – ident: 1513_CR68 – ident: 1513_CR51 doi: 10.1109/CVPRW.2019.00190 – ident: 1513_CR45 – ident: 1513_CR7 doi: 10.1109/AVSS.2017.8078516 – ident: 1513_CR46 doi: 10.1109/CVPR.2017.106 – ident: 1513_CR74 – volume: 33 start-page: 1806 issue: 9 year: 2011 ident: 1513_CR3 publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence doi: 10.1109/TPAMI.2011.21 – ident: 1513_CR87 doi: 10.1007/978-3-319-48881-3_3 – volume: 82 start-page: 35 issue: 1 year: 1960 ident: 1513_CR36 publication-title: Journal of Fluids Engineering – ident: 1513_CR88 doi: 10.1109/CVPR.2018.00255 – volume: 40 start-page: 595 issue: 3 year: 2017 ident: 1513_CR2 publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence doi: 10.1109/TPAMI.2017.2691769 – ident: 1513_CR22 doi: 10.1109/ICCV.2019.00667 – ident: 1513_CR6 doi: 10.1109/ICIP.2016.7533003 – ident: 1513_CR24 doi: 10.1109/WACV.2018.00057 – ident: 1513_CR48 doi: 10.1007/978-3-319-10602-1_48 – ident: 1513_CR40 – volume: 36 start-page: 58 issue: 1 year: 2013 ident: 1513_CR55 publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence doi: 10.1109/TPAMI.2013.103 – ident: 1513_CR54 – ident: 1513_CR81 doi: 10.1109/ICIP.2017.8296962 – ident: 1513_CR99 doi: 10.1007/978-3-030-01228-1_23 – ident: 1513_CR94 doi: 10.1145/3159171 – ident: 1513_CR86 doi: 10.1109/ICCV.2019.00975 – ident: 1513_CR78 doi: 10.1109/TPAMI.2020.2983686 – ident: 1513_CR17 doi: 10.1109/WACV.2019.00023 – ident: 1513_CR28 – volume: 2 start-page: 83 issue: 1–2 year: 1955 ident: 1513_CR42 publication-title: Naval Research Logistics Quarterly doi: 10.1002/nav.3800020109 – volume: 37 start-page: 583 issue: 3 year: 2014 ident: 1513_CR32 publication-title: IEEE Transactions on Pattern 
Analysis and Machine Intelligence doi: 10.1109/TPAMI.2014.2345390 – ident: 1513_CR83 doi: 10.1109/CVPR.2017.360 – ident: 1513_CR34 – ident: 1513_CR38 doi: 10.1109/CVPR.2016.95 – ident: 1513_CR65 doi: 10.1109/ICCV.2017.41 – ident: 1513_CR10 doi: 10.1109/CVPR.2018.00644 – ident: 1513_CR4 doi: 10.1109/ICCV.2019.00103 – ident: 1513_CR56 doi: 10.1109/CVPR42600.2020.00634 – ident: 1513_CR19 – ident: 1513_CR27 doi: 10.1007/978-3-030-01270-0_17 – ident: 1513_CR63 doi: 10.1007/978-3-319-48881-3_2 – ident: 1513_CR75 doi: 10.1109/TPAMI.2021.3054719 – ident: 1513_CR67 – ident: 1513_CR85 doi: 10.1109/CVPR.2016.234 – ident: 1513_CR8 doi: 10.1109/CVPR.2010.5539960 – ident: 1513_CR62 – ident: 1513_CR39 – ident: 1513_CR43 doi: 10.1007/978-3-030-01264-9_45 – ident: 1513_CR14 – ident: 1513_CR26 doi: 10.1109/CVPR.2008.4587597 – ident: 1513_CR93 doi: 10.1109/CVPR.2017.357 – ident: 1513_CR11 doi: 10.1109/ICCV.2019.00365 – ident: 1513_CR73 doi: 10.1109/CVPR.2017.394 – ident: 1513_CR58 doi: 10.1007/978-3-030-58548-8_9 – ident: 1513_CR61 – ident: 1513_CR77 doi: 10.1109/ICIP.2018.8451174 – volume: 41 start-page: 121 issue: 1 year: 2017 ident: 1513_CR60 publication-title: T-PAMI doi: 10.1109/TPAMI.2017.2781233 – ident: 1513_CR80 doi: 10.1109/CVPR.2014.167 – ident: 1513_CR1 doi: 10.1109/CVPR.2014.159 – ident: 1513_CR97 doi: 10.1109/CVPR.2019.00094 – ident: 1513_CR20 doi: 10.1109/CVPRW.2009.5206631 – ident: 1513_CR70 – ident: 1513_CR84 doi: 10.1109/ICCV.2019.00409 – ident: 1513_CR15 doi: 10.1109/CVPR42600.2020.00543 – volume: 43 start-page: 104 year: 2019 ident: 1513_CR71 publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence – ident: 1513_CR33 doi: 10.1109/CVPRW.2019.00105 – ident: 1513_CR41 doi: 10.1109/CVPR.2017.579 – ident: 1513_CR21 doi: 10.1109/CVPR42600.2020.01053 – ident: 1513_CR66 doi: 10.1007/978-3-319-48881-3_7 – ident: 1513_CR52 doi: 10.1609/aaai.v33i01.33018803 – ident: 1513_CR69 – ident: 1513_CR44 – ident: 1513_CR91 doi: 10.1109/CVPR.2017.474 – ident: 1513_CR50 doi: 10.1109/CVPR42600.2020.01468 – ident: 1513_CR90 doi: 10.1109/CVPR.2008.4587584 – ident: 1513_CR96 – ident: 1513_CR82 doi: 10.1109/ICCV.2015.534 – ident: 1513_CR89 doi: 10.1007/978-3-642-33709-3_25 – ident: 1513_CR25 doi: 10.1109/ICCV.2017.330 – volume: 115 start-page: 211 issue: 3 year: 2015 ident: 1513_CR64 publication-title: International Journal of Computer Vision doi: 10.1007/s11263-015-0816-y – ident: 1513_CR18 doi: 10.1109/ICCV.2019.00627 – ident: 1513_CR57 doi: 10.1109/CVPR46437.2021.00023 – ident: 1513_CR12 doi: 10.1109/ICIP.2017.8296360 – ident: 1513_CR16 doi: 10.1109/ICCV.2015.347 – ident: 1513_CR59 doi: 10.1109/CVPR42600.2020.01044 – volume: 29 start-page: 6694 year: 2020 ident: 1513_CR92 publication-title: IEEE Transactions on Image Processing doi: 10.1109/TIP.2020.2993073 – volume: 42 start-page: 1272 issue: 5 year: 2019 ident: 1513_CR72 publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence doi: 10.1109/TPAMI.2019.2910529 – ident: 1513_CR98 doi: 10.1109/ICPR.2018.8545450 – ident: 1513_CR29 – volume: 2008 start-page: 1 year: 2008 ident: 1513_CR5 publication-title: EURASIP Journal on Image and Video Processing doi: 10.1155/2008/246309 – ident: 1513_CR30 doi: 10.1109/ICCV.2017.322 – ident: 1513_CR37 doi: 10.1109/CVPR.2017.101 – ident: 1513_CR13 doi: 10.1109/ICME.2018.8486597 – ident: 1513_CR35 |
| SubjectTerms | Accuracy, Artificial Intelligence, Computer Imaging, Computer Science, Computer vision, Datasets, Human-computer interaction, Image Processing and Computer Vision, Machine vision, Multiple target tracking, Object recognition, Optimization, Pattern Recognition, Pattern Recognition and Graphics, Source code, Vision |