Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations

Published in: International Journal of Computer Vision, Vol. 123, No. 1, pp. 32-73
Main authors: Krishna, Ranjay; Zhu, Yuke; Groth, Oliver; Johnson, Justin; Hata, Kenji; Kravitz, Joshua; Chen, Stephanie; Kalantidis, Yannis; Li, Li-Jia; Shamma, David A.; Bernstein, Michael S.; Fei-Fei, Li
Medium: Journal Article
Language: English
Publication details: New York: Springer US, 01.05.2017
Publisher: Springer / Springer Nature B.V.
ISSN: 0920-5691, 1573-1405
Abstract: Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked “What vehicle is the person riding?”, computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that “the person is riding a horse-drawn carriage.” In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and question answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.
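The abstract's example, riding(man, carriage) and pulling(horse, carriage), treats each annotation as a (subject, predicate, object) triple whose terms are canonicalized to WordNet synsets. A minimal sketch of that idea follows (Python with NLTK; the class name, field names, and first-synset heuristic are illustrative assumptions, not the dataset's actual schema or canonicalization procedure):

# Illustrative only: one relationship annotation as a triple, with each term
# mapped to a WordNet synset via NLTK. Not the official Visual Genome API.
from dataclasses import dataclass
from typing import Optional
from nltk.corpus import wordnet as wn  # requires: python -m nltk.downloader wordnet

@dataclass
class Relationship:
    subject: str    # e.g. "man"
    predicate: str  # e.g. "riding"
    obj: str        # e.g. "carriage"

def canonicalize(term: str) -> Optional[str]:
    """Return the name of the most common WordNet synset for a term, if any."""
    synsets = wn.synsets(term.replace(" ", "_"))
    return synsets[0].name() if synsets else None

rel = Relationship(subject="man", predicate="riding", obj="carriage")
for field in ("subject", "predicate", "obj"):
    term = getattr(rel, field)
    print(f"{field}: {term} -> {canonicalize(term)}")

Running this prints a synset name (for example, something like man.n.01 for "man"), illustrating how free-form annotation strings from different workers can collapse onto shared WordNet concepts.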
Audience: Academic
Author details:
1. Krishna, Ranjay (ORCID: 0000-0001-8784-2531; ranjaykrishna@cs.stanford.edu), Stanford University
2. Zhu, Yuke, Stanford University
3. Groth, Oliver, Dresden University of Technology
4. Johnson, Justin, Stanford University
5. Hata, Kenji, Stanford University
6. Kravitz, Joshua, Stanford University
7. Chen, Stephanie, Stanford University
8. Kalantidis, Yannis, Yahoo Inc
9. Li, Li-Jia, Snapchat Inc
10. Shamma, David A., Centrum Wiskunde & Informatica (CWI)
11. Bernstein, Michael S., Stanford University
12. Fei-Fei, Li, Stanford University
crossref_primary_10_3390_app13127115
crossref_primary_10_1016_j_imavis_2021_104194
crossref_primary_10_1016_j_neucom_2025_129345
crossref_primary_10_1109_TIFS_2024_3360880
crossref_primary_10_1016_j_imavis_2022_104451
crossref_primary_10_1016_j_imavis_2023_104751
crossref_primary_10_1016_j_patcog_2025_112381
crossref_primary_10_1109_TMM_2020_2976552
crossref_primary_10_1145_3383465
crossref_primary_10_1145_3295748
crossref_primary_10_1007_s00138_022_01361_3
crossref_primary_10_1088_1742_6596_2003_1_012001
crossref_primary_10_7746_jkros_2023_18_4_436
crossref_primary_10_1109_TCSVT_2024_3508752
crossref_primary_10_1007_s10489_022_03317_6
crossref_primary_10_1142_S0218126625501142
crossref_primary_10_1109_ACCESS_2023_3255887
crossref_primary_10_1109_TNNLS_2018_2813306
crossref_primary_10_1145_3306346_3322941
crossref_primary_10_1109_TAI_2022_3194869
crossref_primary_10_1145_3558391
crossref_primary_10_1016_j_neucom_2022_07_028
crossref_primary_10_1109_TMM_2021_3093725
crossref_primary_10_1016_j_compbiomed_2025_110729
crossref_primary_10_1145_3538533
crossref_primary_10_1007_s10994_024_06610_2
crossref_primary_10_1016_j_patcog_2021_108358
crossref_primary_10_1109_TCSVT_2022_3231437
crossref_primary_10_1109_ACCESS_2022_3229654
crossref_primary_10_3390_electronics9030380
crossref_primary_10_1016_j_neunet_2022_01_011
crossref_primary_10_1016_j_eswa_2024_123231
crossref_primary_10_1109_TNNLS_2024_3458898
crossref_primary_10_1016_j_patcog_2023_109634
crossref_primary_10_1016_j_cviu_2017_05_001
crossref_primary_10_1111_cgf_14683
crossref_primary_10_1016_j_jag_2024_103672
crossref_primary_10_1111_cgf_14687
crossref_primary_10_1007_s42979_022_01322_7
crossref_primary_10_1109_TMM_2022_3142413
crossref_primary_10_1016_j_eswa_2022_118998
crossref_primary_10_3390_s22031045
crossref_primary_10_1016_j_patcog_2021_108106
crossref_primary_10_1109_TIP_2023_3266887
crossref_primary_10_1109_TCSVT_2021_3067449
crossref_primary_10_1016_j_neucom_2019_03_035
crossref_primary_10_1109_TCSVT_2020_3030656
crossref_primary_10_1007_s11432_023_4230_2
crossref_primary_10_1016_j_ipm_2020_102432
crossref_primary_10_1007_s13735_022_00246_5
crossref_primary_10_1109_TCSVT_2021_3051277
crossref_primary_10_1098_rsfs_2018_0020
crossref_primary_10_1016_j_cviu_2019_102829
crossref_primary_10_1038_s41551_025_01497_3
crossref_primary_10_1109_TMM_2023_3314153
crossref_primary_10_1109_TPAMI_2022_3178485
crossref_primary_10_1109_ACCESS_2020_2975093
crossref_primary_10_1109_TPAMI_2018_2883466
crossref_primary_10_1007_s00371_018_1566_y
crossref_primary_10_1007_s11263_020_01355_6
crossref_primary_10_3233_IDT_24027
crossref_primary_10_1007_s00530_024_01481_y
crossref_primary_10_1109_JPROC_2024_3525147
crossref_primary_10_1016_j_displa_2022_102210
crossref_primary_10_1109_TCSS_2024_3402270
crossref_primary_10_1109_TPAMI_2023_3285009
crossref_primary_10_1109_TPAMI_2020_2992222
crossref_primary_10_1016_j_patrec_2018_02_013
crossref_primary_10_1016_j_heliyon_2024_e36272
crossref_primary_10_1007_s10489_022_03559_4
crossref_primary_10_1016_j_patcog_2025_112348
crossref_primary_10_1109_TMM_2024_3521729
crossref_primary_10_1109_TIP_2019_2952085
crossref_primary_10_3390_fi16070247
crossref_primary_10_1016_j_patcog_2021_108367
crossref_primary_10_1109_TVCG_2023_3340679
crossref_primary_10_1109_TMM_2019_2951226
crossref_primary_10_1109_TCSVT_2023_3317447
crossref_primary_10_1016_j_patcog_2024_110900
crossref_primary_10_1109_ACCESS_2019_2908035
crossref_primary_10_1109_TPAMI_2024_3442301
crossref_primary_10_1016_j_inffus_2019_08_009
crossref_primary_10_1007_s11042_022_13793_0
crossref_primary_10_1177_18479790221078130
crossref_primary_10_1016_j_inffus_2022_11_011
crossref_primary_10_1007_s11263_025_02365_y
crossref_primary_10_1109_TMM_2022_3169061
crossref_primary_10_1109_ACCESS_2023_3283495
crossref_primary_10_1109_TIP_2023_3263110
crossref_primary_10_1177_0278364919897133
crossref_primary_10_1109_TKDE_2025_3584054
crossref_primary_10_1109_TMM_2022_3169065
crossref_primary_10_1134_S1054661820030256
crossref_primary_10_1007_s00530_024_01394_w
crossref_primary_10_1007_s11263_025_02572_7
crossref_primary_10_1109_TPAMI_2019_2947440
crossref_primary_10_1109_TNNLS_2022_3185320
crossref_primary_10_1016_j_cviu_2023_103799
crossref_primary_10_1016_j_neunet_2025_108094
crossref_primary_10_1109_TPAMI_2023_3326851
crossref_primary_10_1093_llc_fqae029
crossref_primary_10_1016_j_inffus_2021_02_006
crossref_primary_10_1109_ACCESS_2019_2912792
crossref_primary_10_1007_s11633_022_1408_2
crossref_primary_10_1109_TVCG_2021_3114683
crossref_primary_10_1109_TIP_2022_3181511
crossref_primary_10_1145_3725352
crossref_primary_10_3390_app10010391
crossref_primary_10_1109_MIS_2023_3265176
crossref_primary_10_32604_cmc_2023_038177
crossref_primary_10_1016_j_artint_2024_104147
crossref_primary_10_3390_sym12091504
crossref_primary_10_1109_LRA_2022_3145964
crossref_primary_10_1109_TPAMI_2019_2922396
crossref_primary_10_1109_TPAMI_2023_3274139
crossref_primary_10_1007_s00521_022_07617_3
crossref_primary_10_3390_drones8090421
crossref_primary_10_1007_s11042_024_20220_z
crossref_primary_10_1016_j_patcog_2023_109848
crossref_primary_10_1109_TPAMI_2024_3368158
crossref_primary_10_1145_3366710
crossref_primary_10_1109_TPAMI_2022_3170302
crossref_primary_10_1007_s11063_022_10796_8
crossref_primary_10_1016_j_cviu_2017_06_005
crossref_primary_10_1109_JIOT_2024_3396285
crossref_primary_10_1109_TMM_2024_3521708
crossref_primary_10_1016_j_eswa_2025_127926
crossref_primary_10_1109_TIP_2020_2967584
crossref_primary_10_1016_j_inffus_2024_102302
crossref_primary_10_1007_s11042_022_13279_z
crossref_primary_10_1016_j_datak_2021_101975
crossref_primary_10_1007_s10489_025_06372_x
crossref_primary_10_1109_TPAMI_2024_3409772
crossref_primary_10_1016_j_cviu_2021_103187
crossref_primary_10_1145_2897824_2925939
crossref_primary_10_1109_TSC_2024_3407588
crossref_primary_10_1007_s11042_023_17685_9
crossref_primary_10_1109_TMM_2022_3155928
crossref_primary_10_1007_s10489_020_02115_2
crossref_primary_10_1109_ACCESS_2023_3347988
crossref_primary_10_1007_s11760_024_03013_7
crossref_primary_10_1109_TCSS_2023_3270164
crossref_primary_10_1007_s11432_024_4231_5
crossref_primary_10_1109_TCSVT_2024_3425513
crossref_primary_10_1002_cpe_6866
crossref_primary_10_1109_ACCESS_2021_3069041
crossref_primary_10_1109_TGRS_2023_3280546
crossref_primary_10_1016_j_neucom_2020_07_091
crossref_primary_10_1109_TPAMI_2017_2754246
crossref_primary_10_1007_s11042_022_12776_5
crossref_primary_10_1109_TKDE_2023_3267036
crossref_primary_10_1162_tacl_a_00385
Cites_doi 10.1109/ICCV.2015.279
10.1109/TPAMI.2011.158
10.1007/978-3-319-10605-2_27
10.1145/2812802
10.1109/CVPR.2014.319
10.1007/978-0-387-35602-0_8
10.1145/219717.219748
10.1145/1526709.1526724
10.1109/CVPR.2012.6248095
10.3115/1220575.1220666
10.1109/ICCV.2013.178
10.4324/9780203781036
10.1109/CVPR.2008.4587462
10.1109/CVPR.2015.7298878
10.3115/v1/W14-3348
10.1007/978-3-642-33715-4_54
10.1159/000276535
10.1109/CVPR.2009.5206772
10.1109/CVPR.2014.37
10.1109/CVPR.2015.7298752
10.1109/CVPR.2015.7298856
10.1109/CVPR.2014.81
10.1109/CVPR.2015.7298932
10.1016/0004-3702(84)90038-9
10.1109/TPAMI.2011.155
10.1109/CVPR.2015.7298935
10.1109/CVPR.2013.387
10.1007/978-3-540-74198-5_14
10.1109/CVPR.2015.7298990
10.3115/v1/D14-1110
10.3115/1219840.1219893
10.1145/2858036.2858115
10.1007/978-3-642-15561-1_2
10.1109/CVPR.2015.7298713
10.1109/CVPR.2009.5206848
10.1145/2702123.2702508
10.1109/ICCV.2015.292
10.1109/CVPR.2015.7299087
10.5244/C.29.52
10.18653/v1/W15-2812
10.1109/CVPR.2010.5539970
10.1109/CVPR.2015.7298594
10.1109/ICCV.2015.9
10.1109/ICCV.2015.169
10.1016/j.cviu.2005.09.012
10.1109/CVPR.2013.12
10.3115/1218955.1219009
10.3115/v1/W14-2404
10.1007/978-3-319-10593-2_27
10.1109/CVPR.2015.7298744
10.1109/TPAMI.2008.128
10.1007/s11263-015-0816-y
10.1073/pnas.1422953112
10.1109/CVPR.2010.5540235
10.3115/1225403.1225421
10.1109/5254.708428
10.1613/jair.3994
10.3115/1613715.1613751
10.1007/s11263-013-0695-z
10.1162/tacl_a_00166
10.1007/978-3-319-10602-1_48
10.1007/s11263-005-4635-4
10.1007/s11263-007-0090-8
10.1109/TPAMI.2009.83
10.1109/CVPR.2009.5206594
10.1007/s11263-009-0275-4
10.4018/jswis.2012070103
10.1609/aimag.v31i3.2303
10.1109/CVPR.2011.5995711
10.1007/978-3-642-15561-1_11
10.1007/978-3-540-88682-2_3
10.1162/neco.1997.9.8.1735
10.1109/CVPR.2015.7298754
ContentType Journal Article
Copyright The Author(s) 2017
COPYRIGHT 2017 Springer
International Journal of Computer Vision is a copyright of Springer, 2017.
Copyright_xml – notice: The Author(s) 2017
– notice: COPYRIGHT 2017 Springer
– notice: International Journal of Computer Vision is a copyright of Springer, 2017.
DBID C6C
AAYXX
CITATION
ISR
3V.
7SC
7WY
7WZ
7XB
87Z
8AL
8FD
8FE
8FG
8FK
8FL
ABUWG
AFKRA
ARAPS
AZQEC
BENPR
BEZIV
BGLVJ
CCPQU
DWQXO
FRNLG
F~G
GNUQQ
HCIFZ
JQ2
K60
K6~
K7-
L.-
L7M
L~C
L~D
M0C
M0N
P5Z
P62
PHGZM
PHGZT
PKEHL
PQBIZ
PQBZA
PQEST
PQGLB
PQQKQ
PQUKI
PYYUZ
Q9U
DOI 10.1007/s11263-016-0981-7
DatabaseName Springer Nature OA Free Journals
CrossRef
Gale In Context: Science
ProQuest Central (Corporate)
Computer and Information Systems Abstracts
ProQuest ABI/INFORM Collection
ABI/INFORM Global (PDF only)
ProQuest Central (purchase pre-March 2016)
ABI/INFORM Collection
Computing Database (Alumni Edition)
Technology Research Database
ProQuest SciTech Collection
ProQuest Technology Collection
ProQuest Central (Alumni) (purchase pre-March 2016)
ABI/INFORM Collection (Alumni Edition)
ProQuest Central (Alumni)
ProQuest Central UK/Ireland
Advanced Technologies & Computer Science Collection
ProQuest Central Essentials
ProQuest Central
Business Premium Collection
Technology collection
ProQuest One
ProQuest Central Korea
Business Premium Collection (Alumni)
ABI/INFORM Global (Corporate)
ProQuest Central Student
SciTech Premium Collection
ProQuest Computer Science Collection
ProQuest Business Collection (Alumni Edition)
ProQuest Business Collection
Computer Science Database
ABI/INFORM Professional Advanced
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
ABI/INFORM Global (OCUL)
Computing Database
Advanced Technologies & Aerospace Database
ProQuest Advanced Technologies & Aerospace Collection
ProQuest Central Premium
ProQuest One Academic
ProQuest One Academic Middle East (New)
ProQuest One Business (UW System Shared)
ProQuest One Business (Alumni)
ProQuest One Academic Eastern Edition (DO NOT USE)
ProQuest One Applied & Life Sciences
ProQuest One Academic (retired)
ProQuest One Academic UKI Edition
ABI/INFORM Collection China
ProQuest Central Basic
DatabaseTitle CrossRef
ABI/INFORM Global (Corporate)
ProQuest Business Collection (Alumni Edition)
ProQuest One Business
Computer Science Database
ProQuest Central Student
Technology Collection
Technology Research Database
Computer and Information Systems Abstracts – Academic
ProQuest One Academic Middle East (New)
ProQuest Advanced Technologies & Aerospace Collection
ProQuest Central Essentials
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
ProQuest Central (Alumni Edition)
SciTech Premium Collection
ProQuest One Community College
ABI/INFORM Complete
ProQuest Central
ABI/INFORM Professional Advanced
ProQuest One Applied & Life Sciences
ProQuest Central Korea
ProQuest Central (New)
Advanced Technologies Database with Aerospace
ABI/INFORM Complete (Alumni Edition)
Advanced Technologies & Aerospace Collection
Business Premium Collection
ABI/INFORM Global
ProQuest Computing
ABI/INFORM Global (Alumni Edition)
ProQuest Central Basic
ProQuest Computing (Alumni Edition)
ProQuest One Academic Eastern Edition
ABI/INFORM China
ProQuest Technology Collection
ProQuest SciTech Collection
ProQuest Business Collection
Computer and Information Systems Abstracts Professional
Advanced Technologies & Aerospace Database
ProQuest One Academic UKI Edition
ProQuest One Business (Alumni)
ProQuest One Academic
ProQuest Central (Alumni)
ProQuest One Academic (New)
Business Premium Collection (Alumni)
DatabaseTitleList
ABI/INFORM Global (Corporate)

Computer and Information Systems Abstracts
Database_xml – sequence: 1
  dbid: BENPR
  name: ProQuest Central
  url: https://www.proquest.com/central
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Applied Sciences
Computer Science
EISSN 1573-1405
EndPage 73
ExternalDocumentID 4322015771
A550951135
10_1007_s11263_016_0981_7
Genre Feature
GeographicLocations United States--US
GeographicLocations_xml – name: United States--US
GrantInformation_xml – fundername: Stanford University Computer Science Department
– fundername: Brown Institute for Media Innovation (US)
– fundername: Yahoo Inc.
– fundername: Toyota USA
  funderid: http://dx.doi.org/10.13039/100004362
– fundername: Office of Naval Research
  funderid: http://dx.doi.org/10.13039/100000006
GroupedDBID -4Z
-59
-5G
-BR
-EM
-Y2
-~C
.4S
.86
.DC
.VR
06D
0R~
0VY
199
1N0
1SB
2.D
203
28-
29J
2J2
2JN
2JY
2KG
2KM
2LR
2P1
2VQ
2~H
30V
3V.
4.4
406
408
409
40D
40E
5GY
5QI
5VS
67Z
6NX
6TJ
78A
7WY
8FE
8FG
8FL
8TC
8UJ
95-
95.
95~
96X
AAAVM
AABHQ
AACDK
AAHNG
AAIAL
AAJBT
AAJKR
AANZL
AAOBN
AARHV
AARTL
AASML
AATNV
AATVU
AAUYE
AAWCG
AAYIU
AAYQN
AAYTO
AAYZH
ABAKF
ABBBX
ABBXA
ABDBF
ABDZT
ABECU
ABFTD
ABFTV
ABHLI
ABHQN
ABJNI
ABJOX
ABKCH
ABKTR
ABMNI
ABMQK
ABNWP
ABQBU
ABQSL
ABSXP
ABTEG
ABTHY
ABTKH
ABTMW
ABULA
ABUWG
ABWNU
ABXPI
ACAOD
ACBXY
ACDTI
ACGFO
ACGFS
ACHSB
ACHXU
ACIHN
ACKNC
ACMDZ
ACMLO
ACOKC
ACOMO
ACPIV
ACREN
ACUHS
ACZOJ
ADHHG
ADHIR
ADIMF
ADINQ
ADKNI
ADKPE
ADMLS
ADRFC
ADTPH
ADURQ
ADYFF
ADYOE
ADZKW
AEAQA
AEBTG
AEFIE
AEFQL
AEGAL
AEGNC
AEJHL
AEJRE
AEKMD
AEMSY
AENEX
AEOHA
AEPYU
AESKC
AETLH
AEVLU
AEXYK
AFBBN
AFEXP
AFGCZ
AFKRA
AFLOW
AFQWF
AFWTZ
AFYQB
AFZKB
AGAYW
AGDGC
AGGDS
AGJBK
AGMZJ
AGQEE
AGQMX
AGRTI
AGWIL
AGWZB
AGYKE
AHAVH
AHBYD
AHKAY
AHSBF
AHYZX
AIAKS
AIGIU
AIIXL
AILAN
AITGF
AJBLW
AJRNO
AJZVZ
ALMA_UNASSIGNED_HOLDINGS
ALWAN
AMKLP
AMTXH
AMXSW
AMYLF
AMYQR
AOCGG
ARAPS
ARCSS
ARMRJ
ASPBG
AVWKF
AXYYD
AYJHY
AZFZN
AZQEC
B-.
B0M
BA0
BBWZM
BDATZ
BENPR
BEZIV
BGLVJ
BGNMA
BPHCQ
BSONS
C6C
CAG
CCPQU
COF
CS3
CSCUP
DDRTE
DL5
DNIVK
DPUIP
DU5
DWQXO
EAD
EAP
EAS
EBLON
EBS
EDO
EIOEI
EJD
EMK
EPL
ESBYG
ESX
F5P
FEDTE
FERAY
FFXSO
FIGPU
FINBP
FNLPD
FRNLG
FRRFC
FSGXE
FWDCC
GGCAI
GGRSB
GJIRD
GNUQQ
GNWQR
GQ6
GQ7
GQ8
GROUPED_ABI_INFORM_COMPLETE
GXS
H13
HCIFZ
HF~
HG5
HG6
HMJXF
HQYDN
HRMNR
HVGLF
HZ~
I-F
I09
IAO
IHE
IJ-
IKXTQ
ISR
ITC
ITM
IWAJR
IXC
IZIGR
IZQ
I~X
I~Y
I~Z
J-C
J0Z
JBSCW
JCJTX
JZLTJ
K60
K6V
K6~
K7-
KDC
KOV
KOW
LAK
LLZTM
M0C
M0N
M4Y
MA-
N2Q
N9A
NB0
NDZJH
NPVJJ
NQJWS
NU0
O9-
O93
O9G
O9I
O9J
OAM
OVD
P19
P2P
P62
P9O
PF0
PQBIZ
PQBZA
PQQKQ
PROAC
PT4
PT5
QF4
QM1
QN7
QO4
QOK
QOS
R4E
R89
R9I
RHV
RNI
RNS
ROL
RPX
RSV
RZC
RZE
RZK
S16
S1Z
S26
S27
S28
S3B
SAP
SCJ
SCLPG
SCO
SDH
SDM
SHX
SISQX
SJYHP
SNE
SNPRN
SNX
SOHCF
SOJ
SPISZ
SRMVM
SSLCW
STPWE
SZN
T13
T16
TAE
TEORI
TSG
TSK
TSV
TUC
TUS
U2A
UG4
UOJIU
UTJUX
UZXMN
VC2
VFIZW
W23
W48
WK8
YLTOR
Z45
Z7R
Z7S
Z7V
Z7W
Z7X
Z7Y
Z7Z
Z83
Z86
Z88
Z8M
Z8N
Z8P
Z8Q
Z8R
Z8S
Z8T
Z8W
Z92
ZMTXR
~8M
~EX
AAPKM
AAYXX
ABBRH
ABDBE
ABFSG
ABRTQ
ACSTC
ADHKG
ADKFA
AEZWR
AFDZB
AFFHD
AFHIU
AFOHR
AGQPQ
AHPBZ
AHWEU
AIXLP
ATHPR
AYFIA
CITATION
ICD
PHGZM
PHGZT
PQGLB
7SC
7XB
8AL
8FD
8FK
JQ2
L.-
L7M
L~C
L~D
PKEHL
PQEST
PQUKI
Q9U
PUEGO
ID FETCH-LOGICAL-c531t-d5c3db2f443ec3111ee4b52d3906dd0cc9a627abe8b0b3e8c88929745f5ea4a23
IEDL.DBID RSV
ISICitedReferencesCount 3442
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=000400276400003&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 0920-5691
IngestDate Thu Sep 04 19:58:07 EDT 2025
Tue Nov 04 22:22:54 EST 2025
Sat Nov 29 10:25:22 EST 2025
Wed Nov 26 10:08:05 EST 2025
Sat Nov 29 06:42:27 EST 2025
Tue Nov 18 21:52:22 EST 2025
Fri Feb 21 02:26:39 EST 2025
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords Crowdsourcing
Computer vision
Relationships
Scene graph
Language
Dataset
Objects
Attributes
Question answering
Image
Knowledge
Language English
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c531t-d5c3db2f443ec3111ee4b52d3906dd0cc9a627abe8b0b3e8c88929745f5ea4a23
Notes SourceType-Scholarly Journals-1
ObjectType-Feature-1
content type line 14
ObjectType-Article-1
ObjectType-Feature-2
content type line 23
ORCID 0000-0001-8784-2531
OpenAccessLink https://link.springer.com/10.1007/s11263-016-0981-7
PQID 1892132984
PQPubID 1456341
PageCount 42
ParticipantIDs proquest_miscellaneous_1904244064
proquest_journals_1892132984
gale_infotracacademiconefile_A550951135
gale_incontextgauss_ISR_A550951135
crossref_citationtrail_10_1007_s11263_016_0981_7
crossref_primary_10_1007_s11263_016_0981_7
springer_journals_10_1007_s11263_016_0981_7
PublicationCentury 2000
PublicationDate 2017-05-01
PublicationDateYYYYMMDD 2017-05-01
PublicationDate_xml – month: 05
  year: 2017
  text: 2017-05-01
  day: 01
PublicationDecade 2010
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle International journal of computer vision
PublicationTitleAbbrev Int J Comput Vis
PublicationYear 2017
Publisher Springer US
Springer
Springer Nature B.V
Publisher_xml – name: Springer US
– name: Springer
– name: Springer Nature B.V
References Ronchi, M. R., & Perona, P. (2015). Describing common human visual actions in images. In X. Xie, M. W. Jones, & G. K. L. Tam (Eds.), Proceedings of the British machine vision conference (BMVC 2015) (pp. 52.1–52.12). BMVA Press.
Farhadi, A., Hejrati, M., Sadeghi, M. A., Young, P., Rashtchian, C., Hockenmaier, J., et al. (2010). Every picture tells a story: Generating sentences from images. In Computer vision–ECCV 2010 (pp. 15–29). Springer.
Russell, B. C., Torralba, A., Murphy, K. P., & Freeman, W. T. (2008). LabelMe: A database and web-based tool for image annotation. International Journal of Computer Vision, 77(1–3), 157–173. doi:10.1007/s11263-007-0090-8
Manning, C. D., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S. J., & McClosky, D. (2014). The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations (pp. 55–60).
Baker, C. F., Fillmore, C. J., & Lowe, J. B. (1998). The Berkeley framenet project. In Proceedings of the 36th annual meeting of the association for computational linguistics and 17th international conference on computational linguistics—Volume 1, ACL’98 (pp. 86–90). Stroudsburg, PA: Association for Computational Linguistics.
Karpathy, A., & Fei-Fei, L. (2015). Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3128–3137).
Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics (pp. 311–318). Association for Computational Linguistics.
Sadeghi, F., Divvala, S. K., & Farhadi, A. (2015). Viske: Visual knowledge extraction and question answering by visual verification of relation phrases. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1456–1464).
Mihalcea, R., Chklovski, T. A., & Kilgarriff, A. (2004). The senseval-3 English lexical sample task. Association for Computational Linguistics, UNT Digital Library.
Ren, M., Kiros, R., & Zemel, R. (2015a). Image question answering: A visual semantic embedding model and a new dataset. arXiv:1505.02074.
Bird, S. (2006). NLTK: The natural language toolkit. In Proceedings of the COLING/ACL on interactive presentation sessions (pp. 69–72). Association for Computational Linguistics.
Choi, W., Chao, Y.-W., Pantofaru, C., & Savarese, S. (2013). Understanding indoor scenes using 3D geometric phrases. In 2013 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 33–40). IEEE.
Zhu, J., Nie, Z., Liu, X., Zhang, B., & Wen, J.-R. (2009). Statsnowball: A statistical approach to extracting entity relationships. In Proceedings of the 18th international conference on world wide web (pp. 101–110). ACM.
Yang, Y., Baker, S., Kannan, A., & Ramanan, D. (2012). Recognizing proxemics in personal photos. In 2012 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 3522–3529). IEEE.
Hayes, P. J. (1978). The naive physics manifesto. Geneva: Institut pour les études sémantiques et cognitives/Université de Genève.
Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
Gupta, A., & Davis, L. S. (2008). Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers. In Computer vision–ECCV 2008 (pp. 16–29). Springer.
Ma, L., Lu, Z., & Li, H. (2015). Learning to answer questions from image using convolutional neural network. arXiv:1506.00333.
Krishna, R., Hata, K., Chen, S., Kravitz, J., Shamma, D. A., Fei-Fei, L., et al. (2016). Embracing error to enable rapid crowdsourcing. In CHI’16-SIGCHI conference on human factors in computing system.
Malisiewicz, T., Efros, A., et al. (2008). Recognition by association via learning per-exemplar distances. In IEEE conference on computer vision and pattern recognition, 2008 (CVPR 2008) (pp. 1–8). IEEE.
Kiros, R., Salakhutdinov, R., & Zemel, R. (2014). Multimodal neural language models. In Proceedings of the 31st international conference on machine learning (ICML-14) (pp. 595–603).
Chen, X., Fang, H., Lin, T.-Y., Vedantam, R., Gupta, S., Dollar, P., et al. (2015). Microsoft COCO captions: Data collection and evaluation server. arXiv:1504.00325.
Isola, P., Lim, J. J., & Adelson, E. H. (2015). Discovering states and transformations in image collections. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1383–1391).
Wah, C., Branson, S., Welinder, P., Perona, P., & Belongie, S. (2011). The Caltech-UCSD birds-200-2011 dataset. Technical Report CNS-TR-2011-001, California Institute of Technology.
Young, P., Lai, A., Hodosh, M., & Hockenmaier, J. (2014). From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2, 67–78.
Chen, X., Shrivastava, A., & Gupta, A. (2013). Neil: Extracting visual knowledge from web data. In 2013 IEEE international conference on computer vision (ICCV) (pp. 1409–1416). IEEE.
Ren, S., He, K., Girshick, R., & Sun, J. (2015b). Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems (pp. 91–99).
Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., Lally, A., Murdock, J. W., Nyberg, E., & Prager, J. (2010). Building Watson: An overview of the DeepQA project. AI Magazine, 31(3), 59–79.
Zeng, D., Liu, K., Lai, S., Zhou, G., & Zhao, J. (2014). Relation classification via convolutional deep neural network. In Proceedings of COLING (pp. 2335–2344).
Forbus, K. D. (1984). Qualitative process theory. Artificial Intelligence, 24(1), 85–168. doi:10.1016/0004-3702(84)90038-9
Mao, J., Xu, W., Yang, Y., Wang, J., & Yuille, A. L. (2014). Explain images with multimodal recurrent neural networks. arXiv:1410.1090.
Lebret, R., Pinheiro, P. O., & Collobert, R. (2015). Phrase-based image captioning. arXiv:1502.03671.
Denkowski, M., & Lavie, A. (2014). Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation. Citeseer.
Griffin, G., Holub, A., & Perona, P. (2007). Caltech-256 object category dataset. Technical Report 7694.
Ramanathan, V., Li, C., Deng, J., Han, W., Li, Z., Gu, K., et al. (2015). Learning semantic relationships for better action retrieval in images. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1100–1109).
Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE international conference on computer vision (pp. 1440–1448).
Ordonez, V., Kulkarni, G., & Berg, T. L. (2011). Im2text: Describing images using 1 million captioned photographs. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, & K. Weinberger (Eds.), Advances in neural information processing systems (Vol. 24, pp. 1143–1151). Red Hook: Curran Associates, Inc.
Chang, A. X., Savva, M., & Manning, C. D. (2014). Semantic parsing for text to 3D scene generation. In ACL 2014 (p. 17).
Fei-Fei, L., Fergus, R., & Perona, P. (2007). Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1), 59–70. doi:10.1016/j.cviu.2005.09.012
Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A. C., Salakhutdinov, R., Zemel, R. S., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. CoRR. arXiv:1502.03044.
Culotta, A., & Sorensen, J. (2004). Dependency tree kernels for relation extraction. In Proceedings of the 42nd annual meeting on association for computational linguistics (p. 423). Association for Computational Linguistics.
Chen, X., Liu, Z., & Sun, M. (2014). A unified model for word sense representation and disambiguation. In EMNLP (pp. 1025–1035). Citeseer.
Johnson, J., Krishna, R., Stark, M., Li, L.-J., Shamma, D. A., Bernstein, M., et al. (2015). Image retrieval using scene graphs. In IEEE conference on computer vision and pattern recognition (CVPR).
Vedantam, R., Lawrence Zitnick, C., & Parikh, D. (2015a). Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4566–4575).
Vedantam, R., Lin, X., Batra, T., Lawrence Zitnick, C., & Parikh, D. (2015b). Learning common sense through visual abstraction. In Proceedings of the IEEE international conference on computer vision (pp. 2542–2550).
Zhou, G., Zhang, M., Ji, D. H., & Zhu, Q. (2007). Tree kernel-based relation extraction with context-sensitive structured parse tree information. In EMNLP-CoNLL 2007 (p. 728).
Zitnick, C. L., & Parikh, D. (2013). Bringing semantics into focus using visual abstraction. In 2013 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 3009–3016). IEEE.
Steinbach, M., Karypis, G., Kumar, V., et al. (2000). A comparison of document clustering techniques. In KDD workshop on text mining, Boston (Vol. 400, pp. 525–526).
Snow, R., O’Connor, B., Jurafsky, D., & Ng, A. Y. (2008). Cheap and fast—But is it good?: Evaluating non-expert annotations for natural language tasks. In Proceedings of the conference on empirical methods in natural language processing (pp. 254–263). Association for Computational Linguistics.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., et al. (2014). Microsoft COCO: Common objects in context. In Computer vision–ECCV 2014 (pp. 740–755). Springer.
Izadinia, H., Sadeghi, F., & Farhadi, A. (2014). Incorporating scene context and object layout into appearance modeling. In 2014 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 232–239). IEEE.
Fang, H., Gupta, S., Iandola, F., Srivastava, R. K., Deng, L., Dollár, P., et al. (2015). From captions to visual concepts and back. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1473–1482).
Patterson, G., Xu, C., Su, H., & Hays, J. (2014). The SUN attribute database: Beyond categories for deeper scene understanding. International Journal of Computer Vision, 108(1–2), 59–81. doi:10.1007/s11263-013-0695-z
981_CR50
981_CR58
981_CR57
981_CR56
981_CR55
981_CR54
981_CR52
981_CR51
D Ferrucci (981_CR26) 2010; 31
981_CR101
981_CR102
981_CR49
981_CR48
981_CR100
981_CR106
981_CR103
981_CR104
M Varma (981_CR95) 2005; 62
981_CR109
981_CR107
981_CR108
981_CR47
981_CR46
981_CR45
BC Russell (981_CR79) 2008; 77
B Thomee (981_CR93) 2016; 59
981_CR44
981_CR43
A Torralba (981_CR94) 2008; 30
981_CR112
981_CR39
981_CR38
981_CR110
981_CR111
M Everingham (981_CR20) 2010; 88
981_CR36
981_CR35
981_CR34
981_CR33
981_CR32
981_CR31
981_CR29
981_CR27
L Fei-Fei (981_CR24) 2007; 106
A Gupta (981_CR37) 2009; 31
981_CR1
981_CR3
981_CR2
981_CR5
981_CR25
981_CR4
981_CR7
981_CR23
981_CR22
981_CR9
981_CR21
981_CR8
981_CR19
981_CR17
981_CR16
981_CR15
981_CR92
981_CR91
981_CR90
981_CR14
981_CR13
J Bruner (981_CR6) 1990; 33
981_CR12
981_CR11
981_CR99
981_CR10
981_CR98
MA Hearst (981_CR40) 1998; 13
981_CR97
981_CR96
S Hochreiter (981_CR41) 1997; 9
P Dollar (981_CR18) 2012; 34
981_CR83
981_CR82
981_CR81
981_CR80
M Hodosh (981_CR42) 2013; 47
981_CR89
981_CR88
981_CR87
981_CR86
981_CR85
981_CR84
P Young (981_CR105) 2014; 2
KD Forbus (981_CR28) 1984; 24
GA Miller (981_CR65) 1995; 38
C Leacock (981_CR53) 1998; 24
981_CR71
G Patterson (981_CR70) 2014; 108
981_CR78
981_CR77
981_CR76
981_CR75
981_CR74
981_CR73
D Geman (981_CR30) 2015; 112
981_CR61
981_CR60
981_CR69
981_CR68
981_CR67
981_CR64
981_CR63
981_CR62
981_CR59
F Niu (981_CR66) 2012; 8
A Prest (981_CR72) 2012; 34
References_xml – reference: Huang, G. B., Mattar, M., Berg, T., & Learned-Miller, E. (2008). Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on faces in ’real-life’ images: Detection, alignment, and recognition.
– reference: Lampert, C. H., Nickisch, H., & Harmeling, S. (2009). Learning to detect unseen object classes by between-class attribute transfer. In IEEE conference on computer vision and pattern recognition, 2009 (CVPR 2009) (pp. 951–958). IEEE.
– reference: Miller, G. A. (1995). WordNet: A lexical database for English. Communications of the ACM, 38(11), 39–41. doi:10.1145/219717.219748
– reference: Fang, H., Gupta, S., Iandola, F., Srivastava, R. K., Deng, L., Dollár, P., et al. (2015). From captions to visual concepts and back. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1473–1482).
– reference: Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., et al. (2014). Microsoft COCO: Common objects in context. In Computer vision–ECCV 2014 (pp. 740–755). Springer.
– reference: Zhou, G., Zhang, M., Ji, D. H., & Zhu, Q. (2007). Tree kernel-based relation extraction with context-sensitive structured parse tree information. In EMNLP-CoNLL 2007 (p. 728).
– reference: Chang, A. X., Savva, M., & Manning, C. D. (2014). Semantic parsing for text to 3D scene generation. In ACL 2014 (p. 17).
– reference: Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780. doi:10.1162/neco.1997.9.8.1735
– reference: Prest, A., Schmid, C., & Ferrari, V. (2012). Weakly supervised learning of interactions between humans and objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3), 601–614. doi:10.1109/TPAMI.2011.158
– reference: Manning, C. D., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S. J., & McClosky, D. (2014). The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations (pp. 55–60).
– reference: Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., Lally, A., Murdock, J. W., Nyberg, E., & Prager, J. (2010). Building Watson: An overview of the DeepQA project. AI Magazine, 31(3), 59–79.
– reference: Mihalcea, R., Chklovski, T. A., & Kilgarriff, A. (2004). The senseval-3 English lexical sample task. Association for Computational Linguistics, UNT Digital Library.
– reference: Forbus, K. D. (1984). Qualitative process theory. Artificial Intelligence, 24(1), 85–168. doi:10.1016/0004-3702(84)90038-9
– reference: Vedantam, R., Lin, X., Batra, T., Lawrence Zitnick, C., & Parikh, D. (2015b). Learning common sense through visual abstraction. In Proceedings of the IEEE international conference on computer vision (pp. 2542–2550).
– reference: Sadeghi, M. A., & Farhadi, A. (2011). Recognition using visual phrases. In 2011 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1745–1752). IEEE.
– reference: Torralba, A., Fergus, R., & Freeman, W. T. (2008). 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11), 1958–1970. doi:10.1109/TPAMI.2008.128
– reference: Baker, C. F., Fillmore, C. J., & Lowe, J. B. (1998). The Berkeley framenet project. In Proceedings of the 36th annual meeting of the association for computational linguistics and 17th international conference on computational linguistics—Volume 1, ACL’98 (pp. 86–90). Stroudsburg, PA: Association for Computational Linguistics.
– reference: Niu, F., Zhang, C., Ré, C., & Shavlik, J. (2012). Elementary: Large-scale knowledge-base construction via machine learning and statistical inference. International Journal on Semantic Web and Information Systems (IJSWIS), 8(3), 42–73. doi:10.4018/jswis.2012070103
– reference: Gao, H., Mao, J., Zhou, J., Huang, Z., Wang, L., & Xu, W. (2015). Are you talking to a machine? Dataset and methods for multilingual image question. In Advances in neural information processing systems (pp. 2296–2304).
– reference: Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics (pp. 311–318). Association for Computational Linguistics.
– reference: Salehi, N., Irani, L. C., & Bernstein, M. S. (2015). We are dynamo: Overcoming stalling and friction in collective action for crowd workers. In Proceedings of the 33rd annual ACM conference on human factors in computing systems (pp. 1621–1630). ACM.
– reference: Ramanathan, V., Li, C., Deng, J., Han, W., Li, Z., Gu, K., et al. (2015). Learning semantic relationships for better action retrieval in images. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1100–1109).
– reference: Dauphin, Y., de Vries, H., & Bengio, Y. (2015). Equilibrated adaptive learning rates for non-convex optimization. In Advances in neural information processing systems (pp. 1504–1512).
– reference: Farhadi, A., Endres, I., Hoiem, D., & Forsyth, D. (2009). Describing objects by their attributes. In IEEE conference on computer vision and pattern recognition, 2009 (CVPR 2009) (pp. 1778–1785). IEEE.
– reference: Johnson, J., Krishna, R., Stark, M., Li, L.-J., Shamma, D. A., Bernstein, M., et al. (2015). Image retrieval using scene graphs. In IEEE conference on computer vision and pattern recognition (CVPR).
– reference: Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. In 2014 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 580–587). IEEE.
– reference: Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2015). Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3156–3164).
– reference: Steinbach, M., Karypis, G., Kumar, V., et al. (2000). A comparison of document clustering techniques. In KDD workshop on text mining, Boston (Vol. 400, pp. 525–526).
– reference: Socher, R., Huval, B., Manning, C. D., & Ng, A. Y. (2012). Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning (pp. 1201–1211). Association for Computational Linguistics.
– reference: Hayes, P. J. (1985). The second naive physics manifesto. Theories of the commonsense world (pp. 1–36).
– reference: Ren, S., He, K., Girshick, R., & Sun, J. (2015b). Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems (pp. 91–99).
– reference: Lu, C., Krishna, R., Bernstein, M., & Fei-Fei, L. (2016). Visual relationship detection using language priors. In European conference on computer vision (ECCV). IEEE.
– reference: Isola, P., Lim, J. J., & Adelson, E. H. (2015). Discovering states and transformations in image collections. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1383–1391).
– reference: Wah, C., Branson, S., Welinder, P., Perona, P., & Belongie, S. (2011). The Caltech-UCSD birds-200-2011 dataset. Technical Report CNS-TR-2011-001, California Institute of Technology.
– reference: Zhu, Y., Fathi, A., & Fei-Fei, L. (2014). Reasoning about object affordances in a knowledge base representation. In European conference on computer vision.
– reference: Bunescu, R. C., & Mooney, R. J. (2005). A shortest path dependency kernel for relation extraction. In Proceedings of the conference on human language technology and empirical methods in natural language processing (pp. 724–731). Association for Computational Linguistics.
– reference: Geman, D., Geman, S., Hallonquist, N., & Younes, L. (2015). Visual Turing test for computer vision systems. Proceedings of the National Academy of Sciences, 112(12), 3618–3623.
– reference: Malisiewicz, T., Efros, A., et al. (2008). Recognition by association via learning per-exemplar distances. In IEEE conference on computer vision and pattern recognition, 2008 (CVPR 2008) (pp. 1–8). IEEE.
– reference: Chen, X., Liu, Z., & Sun, M. (2014). A unified model for word sense representation and disambiguation. In EMNLP (pp. 1025–1035). Citeseer.
– reference: Ferrari, V., & Zisserman, A. (2007). Learning visual attributes. In Advances in neural information processing systems (pp. 433–440).
– reference: Snow, R., O’Connor, B., Jurafsky, D., & Ng, A. Y. (2008). Cheap and fast—But is it good?: Evaluating non-expert annotations for natural language tasks. In Proceedings of the conference on empirical methods in natural language processing (pp. 254–263). Association for Computational Linguistics.
– reference: Pal, A. R., & Saha, D. (2015). Word sense disambiguation: A survey. arXiv:1508.01346.
– reference: Gupta, A., & Davis, L. S. (2008). Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers. In Computer vision–ECCV 2008 (pp. 16–29). Springer.
– reference: GuoDong, Z., Jian, S., Jie, Z., & Min, Z. (2005). Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics (pp. 427–434). Association for Computational Linguistics.
– reference: Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
– reference: Culotta, A., & Sorensen, J. (2004). Dependency tree kernels for relation extraction. In Proceedings of the 42nd annual meeting on association for computational linguistics (p. 423). Association for Computational Linguistics.
– reference: Kiros, R., Salakhutdinov, R., & Zemel, R. (2014). Multimodal neural language models. In Proceedings of the 31st international conference on machine learning (ICML-14) (pp. 595–603).
– reference: Patterson, G., Xu, C., Su, H., & Hays, J. (2014). The SUN attribute database: Beyond categories for deeper scene understanding. International Journal of Computer Vision, 108(1–2), 59–81. doi:10.1007/s11263-013-0695-z
– reference: Chen, X., Fang, H., Lin, T.-Y., Vedantam, R., Gupta, S., Dollar, P., et al. (2015). Microsoft COCO captions: Data collection and evaluation server. arXiv:1504.00325.
– reference: Malinowski, M., Rohrbach, M., & Fritz, M. (2015). Ask your neurons: A neural-based approach to answering questions about images. In Proceedings of the IEEE international conference on computer vision (pp. 1–9).
– reference: Dollar, P., Wojek, C., Schiele, B., & Perona, P. (2012). Pedestrian detection: An evaluation of the state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(4), 743–761. doi:10.1109/TPAMI.2011.155
– reference: Gupta, A., Kembhavi, A., & Davis, L. S. (2009). Observing human–object interactions: Using spatial and functional compatibility for recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(10), 1775–1789. doi:10.1109/TPAMI.2009.83
– reference: Schuler, K. K. (2005). VerbNet: A broad-coverage, comprehensive verb lexicon. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA, USA (AAI3179808).
– reference: Chen, X., & Lawrence Zitnick, C. (2015). Mind’s eye: A recurrent visual representation for image caption generation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2422–2431).
– reference: Ren, M., Kiros, R., & Zemel, R. (2015a). Image question answering: A visual semantic embedding model and a new dataset. arXiv:1505.02074.
– reference: Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015). ImageNet large scale visual recognition challenge. International journal of computer vision (IJCV) (pp. 1–42).
– reference: Perronnin, F., Sánchez, J., & Mensink, T. (2010). Improving the fisher kernel for large-scale image classification. In Computer vision–ECCV 2010 (pp. 143–156). Springer.
– reference: Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., & LeCun, Y. (2013). Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv:1312.6229.
– reference: Yu, L., Park, E., Berg, A. C., & Berg, T. L. (2015). Visual madlibs: Fill in the blank image generation and question answering. arXiv:1506.00278.
– reference: Zhu, J., Nie, Z., Liu, X., Zhang, B., & Wen, J.-R. (2009). Statsnowball: A statistical approach to extracting entity relationships. In Proceedings of the 18th international conference on world wide web (pp. 101–110). ACM.
– reference: Hou, C.-S. J., Noy, N. F., & Musen, M. A. (2002). A template-based approach toward acquisition of logical sentences. In Intelligent information processing (pp. 77–89). Springer.
– reference: Choi, W., Chao, Y.-W., Pantofaru, C., & Savarese, S. (2013). Understanding indoor scenes using 3D geometric phrases. In 2013 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 33–40). IEEE.
– reference: Malinowski, M., & Fritz, M. (2014). A multi-world approach to question answering about real-world scenes based on uncertain input. In Advances in neural information processing systems (pp. 1682–1690).
– reference: Ma, L., Lu, Z., & Li, H. (2015). Learning to answer questions from image using convolutional neural network. arXiv:1506.00333.
– reference: Izadinia, H., Sadeghi, F., & Farhadi, A. (2014). Incorporating scene context and object layout into appearance modeling. In 2014 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 232–239). IEEE.
– reference: Thomee, B., Shamma, D. A., Friedland, G., Elizalde, B., Ni, K., Poland, D., Borth, D., & Li, L.-J. (2016). YFCC100M: The new data in multimedia research. Communications of the ACM, 59(2), 64–73. doi:10.1145/2812802
– reference: Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Zitnick, C. L., et al. (2015). VQA: Visual question answering. In International conference on computer vision (ICCV).
– reference: Leacock, C., Miller, G. A., & Chodorow, M. (1998). Using corpus statistics and WordNet relations for sense identification. Computational Linguistics, 24(1), 147–165.
– reference: Russell, B. C., Torralba, A., Murphy, K. P., & Freeman, W. T. (2008). LabelMe: A database and web-based tool for image annotation. International Journal of Computer Vision, 77(1–3), 157–173. doi:10.1007/s11263-007-0090-8
– reference: Zeng, D., Liu, K., Lai, S., Zhou, G., & Zhao, J. (2014). Relation classification via convolutional deep neural network. In Proceedings of COLING (pp. 2335–2344).
– reference: Betteridge, J., Carlson, A., Hong, S. A., Hruschka, E. R, Jr., Law, E. L., Mitchell, T. M., et al. (2009). Toward never ending language learning. In AAAI spring symposium: Learning by reading and learning to read (pp. 1–2).
– reference: Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE international conference on computer vision (pp. 1440–1448).
– reference: Griffin, G., Holub, A., & Perona, P. (2007). Caltech-256 object category dataset. Technical Report 7694.
– reference: Firestone, C., & Scholl, B. J. (2015). Cognition does not affect perception: Evaluating the evidence for top-down effects. Behavioral and brain sciences (pp. 1–72).
– reference: Schuster, S., Krishna, R., Chang, A., Fei-Fei, L., & Manning, C. D. (2015). Generating semantically precise scene graphs from textual descriptions for improved image retrieval. In Proceedings of the fourth workshop on vision and language (pp. 70–80). Citeseer.
– reference: Hearst, M. A., Dumais, S. T., Osman, E., Platt, J., & Scholkopf, B. (1998). Support vector machines. IEEE Intelligent Systems and their Applications, 13(4), 18–28. doi:10.1109/5254.708428
– reference: Everingham, M., Van Gool, L., Williams, C. K., Winn, J., & Zisserman, A. (2010). The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2), 303–338. doi:10.1007/s11263-009-0275-4
– reference: Farhadi, A., Hejrati, M., Sadeghi, M. A., Young, P., Rashtchian, C., Hockenmaier, J., et al. (2010). Every picture tells a story: Generating sentences from images. In Computer vision–ECCV 2010 (pp. 15–29). Springer.
– reference: Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097–1105).
– reference: Rothe, S., & Schütze, H. (2015). Autoextend: Extending word embeddings to embeddings for synsets and lexemes. arXiv:1507.01127.
– reference: Chen, X., Shrivastava, A., & Gupta, A. (2013). Neil: Extracting visual knowledge from web data. In 2013 IEEE international conference on computer vision (ICCV) (pp. 1409–1416). IEEE.
– reference: Bruner, J. (1990). Culture and human development: A new look. Human Development, 33(6), 344–355. doi:10.1159/000276535
– reference: Donahue, J., Anne Hendricks, L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., et al. (2015). Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2625–2634).
– reference: Fei-Fei, L., Fergus, R., & Perona, P. (2007). Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1), 59–70. doi:10.1016/j.cviu.2005.09.012
– reference: Sadeghi, F., Divvala, S. K., & Farhadi, A. (2015). Viske: Visual knowledge extraction and question answering by visual verification of relation phrases. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1456–1464).
– reference: Ronchi, M. R., & Perona, P. (2015). Describing common human visual actions in images. In X. Xie, M. W. Jones, & G. K. L. Tam (Eds.), Proceedings of the British machine vision conference (BMVC 2015) (pp. 52.1–52.12). BMVA Press.
– reference: Yang, Y., Baker, S., Kannan, A., & Ramanan, D. (2012). Recognizing proxemics in personal photos. In 2012 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 3522–3529). IEEE.
– reference: Yao, B., & Fei-Fei, L. (2010). Modeling mutual context of object and human pose in human–object interaction activities. In 2010 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 17–24). IEEE.
– reference: Denkowski, M., & Lavie, A. (2014). Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation. Citeseer.
– reference: Antol, S., Zitnick, C. L., & Parikh, D. (2014). Zero-shot learning via visual abstraction. In European conference on computer vision (pp. 401–416). Springer.
– reference: Silberman, N., Hoiem, D., Kohli, P., & Fergus, R. (2012). Indoor segmentation and support inference from RGBD images. In ECCV.
– reference: Zitnick, C. L., & Parikh, D. (2013). Bringing semantics into focus using visual abstraction. In 2013 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 3009–3016). IEEE.
– reference: Vedantam, R., Lawrence Zitnick, C., & Parikh, D. (2015a). Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4566–4575).
– reference: Bird, S. (2006). NLTK: The natural language toolkit. In Proceedings of the COLING/ACL on interactive presentation sessions (pp. 69–72). Association for Computational Linguistics.
– reference: Mao, J., Xu, W., Yang, Y., Wang, J., & Yuille, A. L. (2014). Explain images with multimodal recurrent neural networks. arXiv:1410.1090.
– reference: Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A. C., Salakhutdinov, R., Zemel, R. S., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. CoRR. arXiv:1502.03044.
– reference: Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781.
– reference: Xiao, J., Hays, J., Ehinger, K., Oliva, A., Torralba, A., et al. (2010). Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 3485–3492). IEEE.
– reference: Zhu, Y., Zhang, C., Ré, C., & Fei-Fei, L. (2015). Building a large-scale multimodal knowledge base system for answering visual queries. arXiv:1507.05670.
– reference: Karpathy, A., & Fei-Fei, L. (2015). Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3128–3137).
– reference: Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In IEEE conference on computer vision and pattern recognition, 2009 (CVPR 2009) (pp. 248–255). IEEE.
– reference: Yao, B., Yang, X., & Zhu, S.-C. (2007). Introduction to a large-scale general purpose ground truth database: methodology, annotation tool and benchmarks. In Energy minimization methods in computer vision and pattern recognition (pp. 169–183). Springer.
– reference: Young, P., Lai, A., Hodosh, M., & Hockenmaier, J. (2014). From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2, 67–78.
– reference: Lebret, R., Pinheiro, P. O., & Collobert, R. (2015). Phrase-based image captioning. arXiv:1502.03671.
– reference: Ordonez, V., Kulkarni, G., & Berg, T. L. (2011). Im2text: Describing images using 1 million captioned photographs. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, & K. Weinberger (Eds.), Advances in neural information processing systems (Vol. 24, pp. 1143–1151). Red Hook: Curran Associates, Inc.
– reference: Varma, M., & Zisserman, A. (2005). A statistical approach to texture classification from single images. International Journal of Computer Vision, 62(1–2), 61–81. doi:10.1007/s11263-005-4635-4
– reference: Hayes, P. J. (1978). The naive physics manifesto. Geneva: Institut pour les études sémantiques et cognitives/Université de Genève.
– reference: Hodosh, M., Young, P., & Hockenmaier, J. (2013). Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research, 47, 853–899. doi:10.1613/jair.3994
– reference: Goering, C., Rodner, E., Freytag, A., & Denzler, J. (2014). Nonparametric part transfer for fine-grained recognition. In 2014 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 2489–2496). IEEE.
– reference: Schank, R. C., & Abelson, R. P. (2013). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Hove: Psychology Press.
– reference: Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., et al. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1–9).
– reference: Krishna, R., Hata, K., Chen, S., Kravitz, J., Shamma, D. A., Fei-Fei, L., et al. (2016). Embracing error to enable rapid crowdsourcing. In CHI’16-SIGCHI conference on human factors in computing system.