MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking

Bibliographic Details
Published in: International Journal of Computer Vision, Vol. 129, Issue 4, pp. 845–881
Main authors: Dendorfer, Patrick; Ošep, Aljoša; Milan, Anton; Schindler, Konrad; Cremers, Daniel; Reid, Ian; Roth, Stefan; Leal-Taixé, Laura
Format: Journal Article
Language: English
Published: New York: Springer US, 01.04.2021
Keywords: Evaluation; Computer vision; MOTA; MOTChallenge; Multi-object-tracking
ISSN: 0920-5691 (print); 1573-1405 (online)
Online access: Full text
Abstract Standardized benchmarks have been crucial in pushing the performance of computer vision algorithms, especially since the advent of deep learning. Although leaderboards should not be over-claimed, they often provide the most objective measure of performance and are therefore important guides for research. We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT) launched in late 2014, to collect existing and new data and to create a framework for the standardized evaluation of multiple object tracking methods. The benchmark focuses on multiple people tracking, since pedestrians are by far the most studied object in the tracking community, with applications ranging from robot navigation to self-driving cars. This paper collects the first three releases of the benchmark: (i) MOT15, along with numerous state-of-the-art results submitted in recent years, (ii) MOT16, which contains new challenging videos, and (iii) MOT17, which extends the MOT16 sequences with more precise labels and evaluates tracking performance on three different object detectors. The second and third releases not only offer a significant increase in the number of labeled boxes, but also provide labels for multiple object classes besides pedestrians, as well as the level of visibility of every single object of interest. Finally, we provide a categorization of state-of-the-art trackers and a broad error analysis. This will help newcomers understand the related work and research trends in the MOT community, and hopefully shed some light on potential future research directions.
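
The per-object labels described in the abstract (an object class and a visibility level for every annotated box, introduced with MOT16/MOT17) are distributed as plain comma-separated annotation files. Below is a minimal reading sketch, assuming the published MOTChallenge gt.txt column layout (frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility); the pedestrian class id, the evaluation-flag semantics, and the visibility cut-off are assumptions stated in the comments, and the file path in the usage note is hypothetical.

import csv
from collections import defaultdict

# Pedestrian class id as used in the MOT16/MOT17 ground-truth files
# (assumption based on the public annotation format, not on this record).
PEDESTRIAN = 1

def load_gt(path, min_visibility=0.25):
    """Return {frame: [(track_id, (x, y, w, h)), ...]} for evaluated pedestrians.

    Assumed row layout: frame, id, bb_left, bb_top, bb_width, bb_height,
    conf, class, visibility. min_visibility is an illustrative threshold,
    not the benchmark's official cut-off.
    """
    boxes = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            frame, track_id = int(row[0]), int(row[1])
            x, y, w, h = (float(v) for v in row[2:6])
            flag, cls, vis = float(row[6]), int(row[7]), float(row[8])
            # flag == 0 marks boxes excluded from evaluation; the visibility
            # filter discounts heavily occluded objects.
            if flag == 0 or cls != PEDESTRIAN or vis < min_visibility:
                continue
            boxes[frame].append((track_id, (x, y, w, h)))
    return boxes

# Hypothetical usage:
# gt = load_gt("MOT17/train/MOT17-02-FRCNN/gt/gt.txt")
# print(len(gt), "annotated frames")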
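
The standardized evaluation the abstract refers to is built around the CLEAR MOT metrics (MOTA appears among the record's keywords). Purely for orientation, here is a sketch of the headline score; the per-frame IoU matching that produces the error counts is deliberately left out, and the counts in the example are made up.

def mota(false_negatives, false_positives, id_switches, num_gt_boxes):
    """CLEAR MOT accuracy: MOTA = 1 - (FN + FP + IDSW) / GT.

    All arguments are totals accumulated over every frame of a sequence.
    MOTA can be negative when the errors outnumber ground-truth boxes.
    """
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt_boxes

# Made-up counts: 250 misses, 140 false positives, 10 identity switches
# against 2000 annotated ground-truth boxes.
print(f"MOTA = {mota(250, 140, 10, 2000):.3f}")  # MOTA = 0.800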
Audience: Academic
Authors:
– Dendorfer, Patrick (Technical University Munich; ORCID: 0000-0002-4623-8749; email: patrick.dendorfer@tum.de)
– Ošep, Aljoša (Technical University Munich)
– Milan, Anton (Amazon Research)
– Schindler, Konrad (ETH Zürich)
– Cremers, Daniel (Technical University Munich)
– Reid, Ian (The University of Adelaide)
– Roth, Stefan (Technical University of Darmstadt)
– Leal-Taixé, Laura (Technical University Munich)
Copyright: The Author(s) 2020. Published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).
DOI: 10.1007/s11263-020-01393-0
Discipline: Applied Sciences; Computer Science
Funding: Technische Universität München (1025)
Open Access: Yes
Peer Reviewed: Yes
Open access link: https://link.springer.com/10.1007/s11263-020-01393-0
Page count: 37
Journal abbreviation: Int J Comput Vis
– reference: KuhnHWYawBThe Hungarian method for the assignment problemNaval Research Logistics Quarterly1955283977551010.1002/nav.3800020109
– reference: Song, Y., Yoon, Y., Yoon, K., & Jeon, M. (2018). Online and real-time tracking with the GMPHD filter using group management and relative motion analysis. In International conference on advanced video and signal based surveillance.
– reference: YangMJiaYTemporal dynamic appearance modeling for online multi-person trackingComputer Vision and Image Understanding201610.1016/j.cviu.2016.05.003
– reference: Fang, K., Xiang, Y., Li, X., & Savarese, S. (2018). Recurrent autoregressive networks for online multi-object tracking. In Winter conference on applications of computer vision.
– reference: Andriluka, M., Iqbal, U., Insafutdinov, E., Pishchulin, L., Milan, A., Gall, J., & Schiele, B. (2018). Posetrack: A benchmark for human pose estimation and tracking. In Conference on computer vision and pattern recognition.
– reference: Ma, C., Yang, C., Yang, F., Zhuang, Y., Zhang, Z., Jia, H., & Xie, X. (2018a). Trajectory factory: Tracklet cleaving and re-connection by deep Siamese bi-GRU for multiple object tracking. In International conference on multimedia and expo.
– reference: Zhang, L., Li, Y., & Nevatia, R. (2008). Global data association for multi-object tracking using network flows. In Conference on computer vision and pattern recognition.
– reference: Dollár, P., Wojek, C., Schiele, B., & Perona, P. (2009) Pedestrian detection: A benchmark. In Conference on computer vision and pattern recognition workshops.
– reference: Long, C., Haizhou, A., Zijie, Z., & Chong, S. (2018) Real-time multiple people tracking with deeply learned candidate selection and person re-identification. In International conference on multimedia and expo.
– reference: Brasó, G., & Leal-Taixé, L. (2020). Learning a neural solver for multiple object tracking. In Conference on computer vision and pattern recognition.
– reference: Kutschbach, T., Bochinski, E., Eiselein, V., & Sikora, T. (2017). Sequential sensor fusion combining probability hypothesis density and kernelized correlation filters for multi-object tracking in video data. In International conference on advanced video and signal based surveillance.
– reference: Song, Y., & Jeon, M. (2016). Online multiple object tracking with the hierarchically adopted GM-PHD filter using motion and appearance. In International conference on consumer electronics.
– reference: Long, C., Haizhou, A., Chong, S., Zijie, Z., & Bo, B. (2017). Online multi-object tracking with convolutional neural networks. In International conference on image processing.
– reference: Fagot-Bouquet, L., Audigier, R., Dhome, Y., & Lerasle, F. (2016). Improving multi-frame data association with sparse representations for robust near-online multi-object tracking. In European conference on computer vision workshops.
– reference: Loumponias, K., Dimou, A., Vretos, N., & Daras, P. (2018). Adaptive tobit Kalman-based tracking. In International conference on signal-image technology & internet-based systems.
– reference: ShengHChenJZhangYKeWXiongZYuJIterative multiple hypothesis tracking with tracklet-level associationTransactions on Circuits and Systems for Video Technology201829123660367210.1109/TCSVT.2018.2881123
– reference: Leal-Taixe, L., Canton-Ferrer, C., & Schindler, K. (2016). Learning by tracking: Siamese CNN for robust target association. In Conference on computer vision and pattern recognition workshops.
– reference: Pedersen, M., Haurum, J. B., Bengtson, S. H., & Moeslund, T. B. (June 2020). 3D-ZEF: A 3D zebrafish tracking benchmark dataset. In Conference on computer vision and pattern recognition.
– reference: Fagot-Bouquet, L., Audigier, R., Dhome, Y., & Lerasle, F. (2015). Online multi-person tracking based on global sparse collaborative representations. In International conference on image processing.
– reference: Wen, L., Li, W., Yan, J., Lei, Z., Yi, D., & Li, S. Z. (2014). Multiple target tracking based on undirected hierarchical relation hypergraph. In Conference on computer vision and pattern recognition.
– reference: Milan, A., Leal-Taixé, L., Schindler, K., & Reid, I. (2015). Joint tracking and segmentation of multiple targets. In Conference on computer vision and pattern recognition.
– reference: Xiang, Y., Alahi, A., & Savarese, S. (2015). Learning to track: Online multi-object tracking by decision making. In International conference on computer vision.
– reference: ZhouHOuyangWChengJWangXLiHDeep continuous conditional random fields with asymmetric inter-object constraints for online multi-object trackingTransactions on Circuits and Systems for Video Technology201810.1109/TCSVT.2018.2825679
– reference: Chen, J., Sheng, H., Zhang, Y., & Xiong, Z. (2017a). Enhancing detection model for multiple hypothesis tracking. In Conference on computer vision and pattern recognition workshops.
– reference: Yang, F., Choi, W., & Lin, Y. (2016). Exploit all the layers: Fast and accurate CNN object detector with scale dependent pooling and cascaded rejection classifiers. In Conference on computer vision and pattern recognition.
– reference: Chu, P., Fan, H., Tan, C. C., & Ling, H. (2019). Online multi-object tracking with instance-aware tracker and dynamic model refreshment. In Winter conference on applications of computer vision.
– reference: ChenLAiHChenRZhuangZAggregate tracklet appearance features for multi-object trackingSignal Processing Letters201926111613161710.1109/LSP.2019.2940922
– reference: Tang, S., Andres, B., Andriluka, M., & Schiele, B. (2016). Multi-person tracking by multicuts and deep matching. In European conference on computer vision workshops.
– reference: Bae, S.-H., & Yoon, K.-J. (2014). Robust online multi-object tracking based on tracklet confidence and online discriminative appearance learning. In Conference on computer vision and pattern recognition.
– reference: Bergmann, P., Meinhardt, T., & Leal-Taixé, L. (2019). Tracking without bells and whistles. In International conference on computer vision.
– reference: Girshick, R. (2015). Fast R-CNN. In International conference on computer vision.
– reference: Yoon, J., Yang, H., Lim, J., & Yoon, K. (2015). Bayesian multi-object tracking using motion context from multiple objects. In Winter conference on applications of computer vision.
– reference: MilanASchindlerKRothSMulti-target tracking by discrete-continuous energy minimizationTransactions on Pattern Analysis and Machine Intelligence201638102054206810.1109/TPAMI.2015.2505309
– reference: Rezatofighi, H., Milan, A., Zhang, Z., Shi, Q., Dick, A., & Reid, I. (2015). Joint probabilistic data association revisited. In International conference on computer vision.
– reference: Sanchez-Matilla, R., Poiesi, F., & Cavallaro, A. (2016). Online multi-target tracking with strong and weak detections. In European conference on computer vision workshops.