Scale-Aware Fast R-CNN for Pedestrian Detection

Published in: IEEE Transactions on Multimedia, Vol. 20, No. 4, pp. 985-996
Main authors: Li, Jianan; Liang, Xiaodan; Shen, Shengmei; Xu, Tingfa; Feng, Jiashi; Yan, Shuicheng
Medium: Journal Article
Language: English
Publication details: IEEE, 1 April 2018
ISSN: 1520-9210, 1941-0077
Abstract In this paper, we consider the problem of pedestrian detection in natural scenes. Intuitively, instances of pedestrians with different spatial scales may exhibit dramatically different features. Thus, large variance in instance scales, which results in undesirable large intracategory variance in features, may severely hurt the performance of modern object instance detection methods. We argue that this issue can be substantially alleviated by the divide-and-conquer philosophy. Taking pedestrian detection as an example, we illustrate how we can leverage this philosophy to develop a Scale-Aware Fast R-CNN (SAF R-CNN) framework. The model introduces multiple built-in subnetworks which detect pedestrians with scales from disjoint ranges. Outputs from all of the subnetworks are then adaptively combined to generate the final detection results that are shown to be robust to large variance in instance scales, via a gate function defined over the sizes of object proposals. Extensive evaluations on several challenging pedestrian detection datasets well demonstrate the effectiveness of the proposed SAF R-CNN. Particularly, our method achieves state-of-the-art performance on Caltech [P. Dollar, C. Wojek, B. Schiele, and P. Perona, "Pedestrian detection: An evaluation of the state of the art," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 4, pp. 743-761, Apr. 2012], and obtains competitive results on INRIA [N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2005, pp. 886-893], ETH [A. Ess, B. Leibe, and L. V. Gool, "Depth and appearance for mobile scene analysis," in Proc. Int. Conf. Comput. Vis., 2007, pp. 1-8], and KITTI [A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2012, pp. 3354-3361].
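The gating mechanism the abstract describes can be illustrated with a minimal Python sketch. This is an illustration of the idea only, not the paper's implementation: the sigmoid form over proposal height follows the abstract's description of a gate function defined over proposal sizes, while the reference height h_ref and the sharpness alpha are hypothetical values chosen for the example.

import numpy as np

def scale_gate(heights, h_ref=100.0, alpha=0.05):
    # Sigmoid gate over proposal height (in pixels): tall proposals
    # push the weight toward the large-scale subnetwork.
    # h_ref and alpha are illustrative, not values from the paper.
    return 1.0 / (1.0 + np.exp(-alpha * (np.asarray(heights) - h_ref)))

def fuse_detections(scores_large, scores_small, heights):
    # Adaptively combine per-proposal scores from the two
    # scale-specific subnetworks, as the abstract describes.
    w = scale_gate(heights)
    return w * np.asarray(scores_large) + (1.0 - w) * np.asarray(scores_small)

# Example: a 50-px proposal leans on the small-scale subnetwork,
# a 200-px proposal on the large-scale one.
print(fuse_detections([0.9, 0.9], [0.4, 0.4], [50, 200]))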
Authors:
– Li, Jianan; ORCID: 0000-0002-1479-1099; email: 20090964@bit.edu.cn; School of Optical Engineering, Beijing Institute of Technology, Beijing, China
– Liang, Xiaodan; email: xdliang328@gmail.com; Carnegie Mellon University, Pittsburgh, USA
– Shen, Shengmei; email: shengmei.shen@sg.panasonic.com; Panasonic R&D Center Singapore, Singapore
– Xu, Tingfa; ORCID: 0000-0002-1479-1099; email: 15210538723@163.com; School of Optical Engineering, Beijing Institute of Technology, Beijing, China
– Feng, Jiashi; email: jshfeng@gmail.com; Department of Electrical and Computer Engineering, National University of Singapore, Singapore
– Yan, Shuicheng; email: eleyans@nus.edu.sg; Department of Electrical and Computer Engineering, National University of Singapore, Singapore
CODEN ITMUF8
ContentType Journal Article
DOI 10.1109/TMM.2017.2759508
Discipline Engineering
Computer Science
EISSN 1941-0077
EndPage 996
ExternalDocumentID 10_1109_TMM_2017_2759508
8060595
Genre orig-research
GrantInformation: China Scholarship Council, grant 201506030045
ISICitedReferencesCount 577
ISSN 1520-9210
IsPeerReviewed true
IsScholarly true
Issue 4
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
ORCID 0000-0002-1479-1099
PageCount 12
PublicationDate 2018-04-01
PublicationTitle IEEE transactions on multimedia
PublicationTitleAbbrev TMM
PublicationYear 2018
Publisher IEEE
StartPage 985
SubjectTerms deep learning
Detectors
Feature extraction
Logic gates
Pedestrian detection
Proposals
Robustness
scale-aware
Skeleton
Training
Title Scale-Aware Fast R-CNN for Pedestrian Detection
URI https://ieeexplore.ieee.org/document/8060595
Volume 20