Five points to check when comparing visual perception in humans and machines

Bibliographic Details
Published in: Journal of Vision (Charlottesville, Va.), Vol. 21, No. 3, p. 16
Main Authors: Funke, Christina M., Borowski, Judy, Stosio, Karolina, Brendel, Wieland, Wallis, Thomas S. A., Bethge, Matthias
Format: Journal Article
Language: English
Published: The Association for Research in Vision and Ophthalmology, United States, 01.03.2021
Subjects: Artificial Intelligence; Humans; Image Processing, Computer-Assisted - methods; Learning - physiology; Pattern Recognition, Automated - methods; Pattern Recognition, Visual - physiology; Problem Solving; Recognition, Psychology; Visual Perception - physiology
ISSN: 1534-7362
Online Access: https://doi.org/10.1167/jov.21.3.16
Abstract With the rise of machines to human-level performance in complex recognition tasks, a growing amount of work is directed toward comparing information processing in humans and machines. These studies are an exciting chance to learn about one system by studying the other. Here, we propose ideas on how to design, conduct, and interpret experiments such that they adequately support the investigation of mechanisms when comparing human and machine perception. We demonstrate and apply these ideas through three case studies. The first case study shows how human bias can affect the interpretation of results and that several analytic tools can help to overcome this human reference point. In the second case study, we highlight the difference between necessary and sufficient mechanisms in visual reasoning tasks. Thereby, we show that contrary to previous suggestions, feedback mechanisms might not be necessary for the tasks in question. The third case study highlights the importance of aligning experimental conditions. We find that a previously observed difference in object recognition does not hold when adapting the experiment to make conditions more equitable between humans and machines. In presenting a checklist for comparative studies of visual reasoning in humans and machines, we hope to highlight how to overcome potential pitfalls in design and inference.
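The second case study turns on a sufficiency argument: if a purely feedforward model reaches a preset performance criterion on a visual reasoning task, feedback cannot be necessary for that task, whereas the failure of one feedforward model proves nothing about necessity. The sketch below illustrates only that logic and is not taken from the paper: the toy same-different stimuli, the use of scikit-learn's MLPClassifier as the feedforward model, and the 0.9 criterion are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): does a purely feedforward model
# reach criterion on a toy same-different task? If yes, feedback is not necessary.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_same_different(n, patch_bits=4):
    """Pairs of random binary patches; label 1 if the two patches are identical."""
    left = rng.integers(0, 2, size=(n, patch_bits))
    same = rng.integers(0, 2, size=n)          # roughly half "same", half "different"
    right = left.copy()
    for i in np.flatnonzero(same == 0):        # "different" pairs differ in one bit
        right[i, rng.integers(0, patch_bits)] ^= 1
    X = np.hstack([left, right]).astype(float)
    return X, same

X_train, y_train = make_same_different(4000)
X_test, y_test = make_same_different(1000)

# A purely feedforward network (one hidden layer, no recurrence or feedback).
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

CRITERION = 0.9  # illustrative performance criterion, not the authors' threshold
print(f"feedforward test accuracy: {accuracy:.3f}")
if accuracy >= CRITERION:
    print("A feedforward mechanism suffices here, so feedback is not necessary.")
else:
    print("No conclusion: one feedforward model failing does not show feedback is necessary.")
```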
Author Funke, Christina M.
Borowski, Judy
Stosio, Karolina
Brendel, Wieland
Wallis, Thomas S. A.
Bethge, Matthias
Author_xml – sequence: 1
  givenname: Christina M.
  surname: Funke
  fullname: Funke, Christina M.
  organization: University of Tübingen, Tübingen, Germany, christina.funke@bethgelab.org
– sequence: 2
  givenname: Judy
  surname: Borowski
  fullname: Borowski, Judy
  organization: University of Tübingen, Tübingen, Germany, judy.borowski@bethgelab.org
– sequence: 3
  givenname: Karolina
  surname: Stosio
  fullname: Stosio, Karolina
  organization: University of Tübingen, Tübingen, Germany, Bernstein Center for Computational Neuroscience, Tübingen and Berlin, Germany, Volkswagen Group Machine Learning Research Lab, Munich, Germany, ka.stosio@gmail.com
– sequence: 4
  givenname: Wieland
  surname: Brendel
  fullname: Brendel, Wieland
  organization: University of Tübingen, Tübingen, Germany, Bernstein Center for Computational Neuroscience, Tübingen and Berlin, Germany, Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany, wieland.brendel@bethgelab.org
– sequence: 5
  givenname: Thomas S. A.
  surname: Wallis
  fullname: Wallis, Thomas S. A.
  organization: University of Tübingen, Tübingen, Germany, Present address: Amazon.com, Tübingen, tsawallis@gmail.com
– sequence: 6
  givenname: Matthias
  surname: Bethge
  fullname: Bethge, Matthias
  organization: University of Tübingen, Tübingen, Germany, Bernstein Center for Computational Neuroscience, Tübingen and Berlin, Germany, Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany, matthias@bethgelab.org
BackLink https://www.ncbi.nlm.nih.gov/pubmed/33724362 (View this record in MEDLINE/PubMed)
ContentType Journal Article
Copyright Copyright 2021 The Authors 2021
DOI 10.1167/jov.21.3.16
Discipline Medicine
DocumentTitleAlternate Funke et al
EISSN 1534-7362
ExternalDocumentID PMC7980041
33724362
10_1167_jov_21_3_16
Genre Research Support, Non-U.S. Gov't
Journal Article
Comparative Study
ISICitedReferencesCount 40
ISSN 1534-7362
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 3
Language English
License http://creativecommons.org/licenses/by/4.0
This work is licensed under a Creative Commons Attribution 4.0 International License.
Notes CMF and JB are both first authors on this work.
WB, TSAW and MB are joint senior authors.
OpenAccessLink http://dx.doi.org/10.1167/jov.21.3.16
PMID 33724362
PQID 2501850812
PQPubID 23479
PublicationCentury 2000
PublicationDate 2021-03-01
PublicationDecade 2020
PublicationPlace United States
PublicationTitle Journal of vision (Charlottesville, Va.)
PublicationTitleAlternate J Vis
PublicationYear 2021
Publisher The Association for Research in Vision and Ophthalmology
StartPage 16
SubjectTerms Artificial Intelligence
Humans
Image Processing, Computer-Assisted - methods
Learning - physiology
Pattern Recognition, Automated - methods
Pattern Recognition, Visual - physiology
Problem Solving
Recognition, Psychology
Visual Perception - physiology
Title Five points to check when comparing visual perception in humans and machines
URI https://www.ncbi.nlm.nih.gov/pubmed/33724362
https://www.proquest.com/docview/2501850812
https://pubmed.ncbi.nlm.nih.gov/PMC7980041
Volume 21