The Open Images Dataset V4: Unified Image Classification, Object Detection, and Visual Relationship Detection at Scale
| Published in: | International Journal of Computer Vision, Volume 128, Issue 7, pp. 1956–1981 |
|---|---|
| Main authors: | Kuznetsova, Alina; Rom, Hassan; Alldrin, Neil; Uijlings, Jasper; Krasin, Ivan; Pont-Tuset, Jordi; Kamali, Shahab; Popov, Stefan; Malloci, Matteo; Kolesnikov, Alexander; Duerig, Tom; Ferrari, Vittorio |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: Springer US, 1 July 2020 |
| Subjects: | Artificial Intelligence; Computer Imaging; Computer Science; Image Processing and Computer Vision; Pattern Recognition; Pattern Recognition and Graphics; Vision |
| ISSN: | 0920-5691, 1573-1405 |
| Online access: | Get full text |
| Abstract | We present Open Images V4, a dataset of 9.2M images with unified annotations for image classification, object detection and visual relationship detection. The images have a Creative Commons Attribution license that allows to share and adapt the material, and they have been collected from Flickr without a predefined list of class names or tags, leading to natural class statistics and avoiding an initial design bias. Open Images V4 offers large scale across several dimensions: 30.1M image-level labels for 19.8k concepts, 15.4M bounding boxes for 600 object classes, and 375k visual relationship annotations involving 57 classes. For object detection in particular, we provide 15× more bounding boxes than the next largest datasets (15.4M boxes on 1.9M images). The images often show complex scenes with several objects (8 annotated objects per image on average). We annotated visual relationships between them, which support visual relationship detection, an emerging task that requires structured reasoning. We provide in-depth comprehensive statistics about the dataset, we validate the quality of the annotations, we study how the performance of several modern models evolves with increasing amounts of training data, and we demonstrate two applications made possible by having unified annotations of multiple types coexisting in the same images. We hope that the scale, quality, and variety of Open Images V4 will foster further research and innovation even beyond the areas of image classification, object detection, and visual relationship detection. |
|---|---|
| Author | Kuznetsova, Alina; Rom, Hassan; Alldrin, Neil; Uijlings, Jasper; Krasin, Ivan; Pont-Tuset, Jordi (jponttuset@google.com); Kamali, Shahab; Popov, Stefan; Malloci, Matteo; Kolesnikov, Alexander; Duerig, Tom; Ferrari, Vittorio (all with Google Research) |
| ContentType | Journal Article |
| Copyright | Springer Science+Business Media, LLC, part of Springer Nature 2020 |
| DOI | 10.1007/s11263-020-01316-z |
| Discipline | Applied Sciences; Computer Science |
| EISSN | 1573-1405 |
| EndPage | 1981 |
| ISSN | 0920-5691 |
| Issue | 7 |
| Keywords | Ground-truth dataset; Visual relationship detection; Object detection; Image classification |
| Language | English |
| PageCount | 26 |
| PublicationDate | 2020-07 |
| PublicationDateYYYYMMDD | 2020-07-01 |
| PublicationPlace | New York |
| PublicationTitle | International journal of computer vision |
| PublicationTitleAbbrev | Int J Comput Vis |
| PublicationYear | 2020 |
| Publisher | Springer US |
| StartPage | 1956 |
| SubjectTerms | Artificial Intelligence; Computer Imaging; Computer Science; Image Processing and Computer Vision; Pattern Recognition; Pattern Recognition and Graphics; Vision |
| Subtitle | Unified Image Classification, Object Detection, and Visual Relationship Detection at Scale |
| Title | The Open Images Dataset V4 |
| URI | https://link.springer.com/article/10.1007/s11263-020-01316-z |
| Volume | 128 |
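The per-image figure quoted in the abstract follows directly from its headline counts. Below is a minimal Python sketch of that arithmetic; every number is taken from the abstract above, and nothing is read from the dataset itself.

```python
# Sanity check of the per-image statistics quoted in the Open Images V4 abstract.
# All values are copied from the abstract; this is plain arithmetic, no dataset files involved.

images_total = 9.2e6          # total images in Open Images V4
image_level_labels = 30.1e6   # image-level labels (covering 19.8k concepts)
boxes_total = 15.4e6          # bounding boxes (600 object classes)
images_with_boxes = 1.9e6     # images that carry box annotations

boxes_per_image = boxes_total / images_with_boxes      # ~8.1
labels_per_image = image_level_labels / images_total   # ~3.3 (derived ratio; not quoted in the abstract)

print(f"Boxes per box-annotated image: {boxes_per_image:.1f}")
print(f"Image-level labels per image:  {labels_per_image:.1f}")
```

The first value, roughly 8.1, matches the abstract's claim of 8 annotated objects per image on average; the second is simply the ratio of the stated totals.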