Articulated Human Detection with Flexible Mixtures of Parts

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 12, pp. 2878-2890
Main Authors: Yang, Yi; Ramanan, Deva
Format: Journal Article
Language: English
Published: Los Alamitos, CA: IEEE Computer Society, 01.12.2013
ISSN: 0162-8828; EISSN: 1939-3539, 2160-9292
Online Access: https://ieeexplore.ieee.org/document/6380498
https://www.ncbi.nlm.nih.gov/pubmed/24136428
https://www.proquest.com/docview/1443418079
Abstract: We describe a method for articulated human detection and human pose estimation in static images based on a new representation of deformable part models. Rather than modeling articulation using a family of warped (rotated and foreshortened) templates, we use a mixture of small, nonoriented parts. We describe a general, flexible mixture model that jointly captures spatial relations between part locations and co-occurrence relations between part mixtures, augmenting standard pictorial structure models that encode just spatial relations. Our models have several notable properties: 1) they efficiently model articulation by sharing computation across similar warps; 2) they efficiently model an exponentially large set of global mixtures through composition of local mixtures; and 3) they capture the dependency of global geometry on local appearance (parts look different at different locations). When relations are tree structured, our models can be efficiently optimized with dynamic programming. We learn all parameters, including local appearances, spatial relations, and co-occurrence relations (which encode local rigidity), with a structured SVM solver. Because our model is efficient enough to be used as a detector that searches over scales and image locations, we introduce novel criteria for evaluating pose estimation and human detection, both separately and jointly. We show that currently used evaluation criteria may conflate these two issues. Most previous approaches model limbs with rigid and articulated templates that are trained independently of each other, while we present an extensive diagnostic evaluation that suggests that flexible structure and joint training are crucial for strong performance. We present experimental results on standard benchmarks that suggest our approach is the state-of-the-art system for pose estimation, improving past work on the challenging Parse and Buffy datasets while being orders of magnitude faster.
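The abstract's central computational claim is that, when part relations form a tree, the best pose (a location and a local mixture type for every part) can be found exactly by dynamic programming. The sketch below illustrates that leaf-to-root recursion in Python. It is not the authors' released code: the grid size, part tree, and random score tables are invented stand-ins for real HOG template responses, and the brute-force inner maximization costs O(M^2 * (HW)^2) per edge, whereas the paper's implementation reduces each inner max to linear time with generalized distance transforms.

import numpy as np

# Toy problem sizes, invented for the sketch
H, W = 10, 10                       # grid of candidate part locations
M = 3                               # local mixture types per part
P = 5                               # number of parts
rng = np.random.default_rng(0)

parent = [-1, 0, 0, 1, 2]           # tree over parts; part 0 is the root
children = {i: [j for j in range(P) if parent[j] == i] for i in range(P)}

unary = rng.normal(size=(P, M, H, W))            # stand-ins for template responses w_t . phi(x, p)
anchor = rng.integers(-2, 3, size=(P, M, M, 2))  # type-pair rest offsets (geometry depends on types)
bias = 0.1 * rng.normal(size=(P, M, M))          # co-occurrence scores b[parent_type, child_type]
spring = 0.5                                     # quadratic deformation penalty

ys, xs = np.mgrid[0:H, 0:W]                      # all candidate child locations

def upward_message(i):
    """msg[tp, y, x] = best score of the subtree rooted at part i, given
    that i's parent has mixture type tp and sits at location (y, x)."""
    score = unary[i].copy()                      # (M, H, W), indexed by part i's own type
    for c in children[i]:
        score += upward_message(c)               # fold in children first (leaf-to-root order)
    msg = np.full((M, H, W), -np.inf)
    for tp in range(M):                          # parent's type
        for t in range(M):                       # part i's type
            dy, dx = anchor[i, tp, t]
            for y in range(H):                   # parent's location
                for x in range(W):
                    # spring cost of every placement of part i, relative to
                    # its type-dependent rest position under the parent
                    d = spring * ((ys - (y + dy)) ** 2 + (xs - (x + dx)) ** 2)
                    cand = (score[t] - d).max() + bias[i, tp, t]
                    if cand > msg[tp, y, x]:
                        msg[tp, y, x] = cand
    return msg

# Root: combine its own template responses with all incoming messages, then
# read off the best type/location; backtracking pointers (omitted here)
# would recover the full pose.
root = unary[0].copy()
for c in children[0]:
    root += upward_message(c)
t0, y0, x0 = np.unravel_index(int(root.argmax()), root.shape)
print(f"best root placement: type={t0}, loc=({y0},{x0}), score={root.max():.3f}")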
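Property 2 in the abstract, an exponentially large set of global mixtures composed from local ones, can be made concrete with toy numbers. With P parts and M types per part there are M^P distinct global type assignments, yet the tree of co-occurrence biases lets dynamic programming pick the best one in O(P * M^2). The sizes and the random tree below are illustrative only; the paper's full-body models use on the order of a dozen or more parts with a handful of types each.

import numpy as np

P, M = 14, 6                                    # illustrative sizes for a full-body model
print(f"distinct global mixtures: {M ** P:,}")  # 78,364,164,096 combinations

rng = np.random.default_rng(1)
parent = [-1] + [int(rng.integers(0, i)) for i in range(1, P)]  # random tree for the sketch
children = {i: [j for j in range(P) if parent[j] == i] for i in range(P)}

type_score = rng.normal(size=(P, M))            # stand-in per-type appearance evidence
bias = rng.normal(size=(P, M, M))               # b[parent_type, child_type] on each edge

def best_subtree(i):
    """msg[tp] = best achievable type-assignment score in i's subtree,
    conditioned on i's parent taking type tp."""
    s = type_score[i].copy()
    for c in children[i]:
        s += best_subtree(c)                    # leaf-to-root recursion over types only
    return np.array([(s + bias[i, tp]).max() for tp in range(M)])

total = type_score[0].copy()
for c in children[0]:
    total += best_subtree(c)
print(f"best of {M ** P:,} global mixtures found in O(P*M^2): score={total.max():.3f}")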
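The abstract also states that all parameters (appearance templates, spring parameters, and co-occurrence biases, stacked into one weight vector) are learned jointly with a structured SVM. Below is a minimal, self-contained sketch of one subgradient step on the margin-rescaled structured hinge loss. The feature map, candidate pool, and loss function here are placeholders invented for the example; the paper itself credits a dual coordinate-descent solver, with loss-augmented inference performed by the same tree DP used at detection time.

import numpy as np

def ssvm_subgradient_step(w, phi, y_true, candidates, loss, lr=1e-2, C=1.0):
    """One subgradient step on 0.5*||w||^2 + C*(max_y [loss(y, y_true) + w.phi(y)] - w.phi(y_true))."""
    # Loss-augmented inference: brute-force argmax over a candidate pool here;
    # in the paper this is the same tree DP used for detection.
    scores = [loss(y, y_true) + w @ phi(y) for y in candidates]
    y_hat = candidates[int(np.argmax(scores))]
    g = w.copy()                                  # gradient of the regularizer
    margin = loss(y_hat, y_true) + w @ (phi(y_hat) - phi(y_true))
    if margin > 0:                                # hinge is active
        g += C * (phi(y_hat) - phi(y_true))
    return w - lr * g

# Tiny usage example with made-up poses encoded as integers 0..3:
phi = lambda y: np.eye(4)[y]                      # placeholder joint feature map
loss = lambda y, yt: float(y != yt)               # 0/1 stand-in for a keypoint loss
w = np.zeros(4)
for _ in range(50):
    w = ssvm_subgradient_step(w, phi, 2, candidates=[0, 1, 2, 3], loss=loss)
print(w.round(2))                                 # weight on the true pose grows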
Authors: Yang, Yi (yyang8@ics.uci.edu), Dept. of Comput. Sci., Univ. of California at Irvine, Irvine, CA, USA
Ramanan, Deva (dramanan@ics.uci.edu), Dept. of Comput. Sci., Univ. of California at Irvine, Irvine, CA, USA
CODEN ITPIDJ
CitedBy 274 citing works indexed in Crossref (552 citing records in Web of Science)
ContentType Journal Article
Copyright 2015 INIST-CNRS
DOI 10.1109/TPAMI.2012.261
Discipline Engineering
Computer Science
Applied Sciences
EISSN 2160-9292
1939-3539
EndPage 2890
ExternalDocumentID 24136428
28150229
10_1109_TPAMI_2012_261
6380498
Genre orig-research
Journal Article
ISSN 0162-8828
1939-3539
IsPeerReviewed true
IsScholarly true
Issue 12
Keywords Computer vision
Motion estimation
Pose estimation
articulated shapes
Spatial analysis
Mixture theory
Tree structure
Modeling
Posture
Experimental result
Object detection
Dynamic programming
deformable part models
Localization
Cooccurrence analysis
Mechanical deformation
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
CC BY 4.0
PMID 24136428
PQID 1443418079
PQPubID 23479
PageCount 13
PublicationDate 2013-12-01
PublicationPlace Los Alamitos, CA
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2013
Publisher IEEE
IEEE Computer Society
StartPage 2878
SubjectTerms Algorithms
Applied sciences
articulated shapes
Artificial intelligence
Computational modeling
Computer science; control theory; systems
Data processing. List processing. Character string processing
Deformable models
deformable part models
Exact sciences and technology
Human factors
Humans
Memory organisation. Data processing
Models, Theoretical
object detection
Object segmentation
Pattern recognition. Digital image processing. Computational geometry
Pose estimation
Reproducibility of Results
Shape analysis
Software
Title Articulated Human Detection with Flexible Mixtures of Parts
Volume 35