Quantitative Analysis of Human-Model Agreement in Visual Saliency Modeling: A Comparative Study

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 22, No. 1, pp. 55-69
Main Authors: Borji, Ali; Sihite, D. N.; Itti, L.
Format: Journal Article
Language: English
Published: New York, NY: IEEE, 01.01.2013
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 1057-7149 (print); 1941-0042 (electronic)
Online Access: https://ieeexplore.ieee.org/document/6253254
Abstract Visual attention is a process that enables biological and machine vision systems to select the most relevant regions from a scene. Relevance is determined by two components: 1) top-down factors driven by task and 2) bottom-up factors that highlight image regions that are different from their surroundings. The latter are often referred to as "visual saliency." Modeling bottom-up visual saliency has been the subject of numerous research efforts during the past 20 years, with many successful applications in computer vision and robotics. Available models have been tested with different datasets (e.g., synthetic psychological search arrays, natural images or videos) using different evaluation scores (e.g., search slopes, comparison to human eye tracking) and parameter settings. This has made direct comparison of models difficult. Here, we perform an exhaustive comparison of 35 state-of-the-art saliency models over 54 challenging synthetic patterns, three natural image datasets, and two video datasets, using three evaluation scores. We find that although model rankings vary, some models consistently perform better. Analysis of datasets reveals that existing datasets are highly center-biased, which influences some of the evaluation scores. Computational complexity analysis shows that some models are very fast, yet yield competitive eye movement prediction accuracy. Different models often have common easy/difficult stimuli. Furthermore, several concerns in visual saliency modeling, eye movement datasets, and evaluation scores are discussed and insights for future work are provided. Our study allows one to assess the state of the art, helps to organize this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains.
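The abstract refers to evaluation scores based on comparison with human eye tracking, and notes that existing fixation datasets are highly center-biased, which can inflate some of those scores. The short Python sketch below is not taken from the paper; the function names, the particular fixation-based AUC variant, and the Gaussian width are illustrative assumptions (the abstract does not name the three scores used). It shows one common way such a score is computed, together with a trivial center-prior "model", to illustrate why a center-biased dataset can reward a map that simply highlights the image center.

import numpy as np

def auc_fixations(sal_map, fixations, n_random=1000, seed=0):
    # Fixation-based AUC sketch: saliency values at fixated pixels are treated as
    # positives, values at uniformly sampled pixels as negatives; the score is the
    # probability that a fixated pixel outranks a random one (Mann-Whitney U / AUC).
    rng = np.random.default_rng(seed)
    sal = (sal_map - sal_map.min()) / (sal_map.max() - sal_map.min() + 1e-12)
    pos = np.array([sal[y, x] for (y, x) in fixations])
    ys = rng.integers(0, sal.shape[0], n_random)
    xs = rng.integers(0, sal.shape[1], n_random)
    neg = sal[ys, xs]
    scores = np.concatenate([pos, neg])
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    u = ranks[labels == 1].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

def center_prior(shape, sigma_frac=0.25):
    # A saliency "model" that ignores the image entirely: an isotropic Gaussian
    # centered on the frame, the usual center-bias baseline.
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = sigma_frac * min(h, w)
    return np.exp(-(((yy - (h - 1) / 2) ** 2 + (xx - (w - 1) / 2) ** 2) / (2 * sigma ** 2)))

if __name__ == "__main__":
    # Synthetic, center-biased fixations: even against an arbitrary saliency map,
    # the image-independent center prior scores well.
    h, w = 120, 160
    rng = np.random.default_rng(1)
    fixations = [(int(np.clip(rng.normal(h / 2, h / 8), 0, h - 1)),
                  int(np.clip(rng.normal(w / 2, w / 8), 0, w - 1))) for _ in range(50)]
    random_model = rng.random((h, w))  # stand-in for an arbitrary model's saliency map
    print("random map AUC  :", round(auc_fixations(random_model, fixations), 3))
    print("center prior AUC:", round(auc_fixations(center_prior((h, w)), fixations), 3))

Comparing models against such a center prior, or using shuffled variants of the score, is one common way the center-bias issue discussed in the abstract is handled in this literature.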
Authors:
– Ali Borji (aliborji@gmail.com), Dept. of Computer Science, University of Southern California, Los Angeles, CA, USA
– D. N. Sihite (sihite@usc.edu), Dept. of Computer Science, University of Southern California, Los Angeles, CA, USA
– L. Itti (itti@usc.edu), Dept. of Computer Science, University of Southern California, Los Angeles, CA, USA
BackLink:
– View record in Pascal Francis: http://pascal-francis.inist.fr/vibad/index.php?action=getRecordDetail&idt=26853905
– View this record in MEDLINE/PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22868572
BookMark eNqN0c1rFDEYBvAgLfZD74IgARG8zJq8-Zp4Wxa1hYpKq9fhnZlMSZnJbJMZYf970921hR5KTwnJ70ngfU7IQRiDI-QNZwvOmf10df5zAYzDAoAzA-YFOeZW8oIxCQd5z5QpDJf2iJykdMMYl4rrl-QIoNSlMnBMql8zhslPOPm_ji4D9pvkEx07ejYPGIrvY-t6uryOzg0uTNQH-senGXt6ib13odnQLfHh-jNd0tU4rDHuHruc5nbzihx22Cf3er-ekt9fv1ytzoqLH9_OV8uLopFGT4VzDl2tW4YKmGg7RCPzBaJusa473SqZT2ptBcMGoWzQGoslWAuq5rITp-Tj7t11HG9nl6Zq8KlxfY_BjXOqOJRCK7BCP4MaoUswUmT6_hG9GeeYh7RVICXTzGT1bq_menBttY5-wLip_k85gw97gKnBvosYGp8eXFbCMpUd27kmjilF190Tzqq7wqtceHVXeLUvPEf0o0izLXMMU0TfPxV8uwv6PPv7fzQoAUqKf0o4tdM
CODEN IIPRE4
ContentType Journal Article
Copyright 2014 INIST-CNRS
Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Jan 2013
DOI 10.1109/TIP.2012.2210727
Discipline Applied Sciences
Engineering
Government
EISSN 1941-0042
EndPage 69
ExternalDocumentID 2874203331
22868572
26853905
10_1109_TIP_2012_2210727
6253254
Genre orig-research
Research Support, U.S. Gov't, Non-P.H.S
Comparative Study
Research Support, Non-U.S. Gov't
Journal Article
ISICitedReferencesCount 459
ISSN 1057-7149
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords Computer vision
State of the art
Target tracking
model comparison
Artificial vision
Vision system
Pattern recognition
Object recognition
Computational complexity
Modeling
Eye movement
Robotics
Accuracy
eye movement prediction
Bottom-up attention
Quality control
Object detection
Open market
Visual saliency
Visual attention
Target detection
Comparative study
Quantitative analysis
Image evaluation
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/USG.html
CC BY 4.0
PMID 22868572
PQID 1272440607
PQPubID 85429
PageCount 15
PublicationDate 2013-Jan.
PublicationDateYYYYMMDD 2013-01-01
PublicationPlace New York, NY
PublicationTitle IEEE transactions on image processing
PublicationTitleAbbrev TIP
PublicationTitleAlternate IEEE Trans Image Process
PublicationYear 2013
Publisher IEEE
StartPage 55
SubjectTerms Analytical models
Applied sciences
Area Under Curve
Arrays
Artificial intelligence
Attention - physiology
Bottom-up attention
Computational Biology
Computational modeling
Computer science; control theory; systems
Databases, Factual
Detection, estimation, filtering, equalization, prediction
Exact sciences and technology
eye movement prediction
Eye movements
Eye Movements - physiology
Government
Humans
Image processing
Image Processing, Computer-Assisted - methods
Information, signal and communications theory
Mathematical models
model comparison
Models, Statistical
Organizing
Pattern recognition
Pattern recognition. Digital image processing. Computational geometry
Photic Stimulation
Predictive models
Searching
Signal and communications theory
Signal processing
Signal, noise
State of the art
Studies
Telecommunications and information theory
Videos
Vision systems
Visual
visual attention
visual saliency
Visualization
Title Quantitative Analysis of Human-Model Agreement in Visual Saliency Modeling: A Comparative Study
URI https://ieeexplore.ieee.org/document/6253254
https://www.ncbi.nlm.nih.gov/pubmed/22868572
https://www.proquest.com/docview/1272440607
https://www.proquest.com/docview/1273682743
https://www.proquest.com/docview/1283652936
Volume 22