Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms

Detailed bibliography
Published in: Proceedings of the National Academy of Sciences - PNAS, Volume 115, Issue 24, p. 6171
Main authors: Phillips, P Jonathon, Yates, Amy N, Hu, Ying, Hahn, Carina A, Noyes, Eilidh, Jackson, Kelsey, Cavazos, Jacqueline G, Jeckeln, Géraldine, Ranjan, Rajeev, Sankaranarayanan, Swami, Chen, Jun-Cheng, Castillo, Carlos D, Chellappa, Rama, White, David, O'Toole, Alice J
Format: Journal Article
Language: English
Published: United States, June 12, 2018
ISSN: 1091-6490
Abstract: Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible.
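The fusion step in the abstract is concrete enough to illustrate with code: average the rating-based identity judgments of several examiners for each face pair, then score the averaged ratings with a standard accuracy measure. The sketch below is a minimal illustration under stated assumptions: the 7-point rating scale (+3 "same person, high confidence" down to -3 "different persons, high confidence"), the three simulated examiners, and the use of scikit-learn's roc_auc_score are illustrative choices, not the study's actual materials or data.

# Minimal sketch of judgment fusion: average examiners' rating-based
# identity judgments per face pair, then measure accuracy with AUC.
# The rating scale and simulated data below are assumptions for
# illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# 10 same-identity pairs (label 1) and 10 different-identity pairs (label 0).
labels = np.array([1] * 10 + [0] * 10)

# Three simulated examiners: ratings centered on +2 for same-identity pairs
# and -2 for different-identity pairs, with independent noise, clipped and
# rounded to the assumed -3..+3 scale.
ratings = np.clip(
    (labels * 4 - 2) + rng.normal(0, 2, size=(3, labels.size)), -3, 3
).round()

# Wisdom-of-crowds fusion: a simple average of the examiners' ratings.
fused = ratings.mean(axis=0)

for i, examiner in enumerate(ratings, start=1):
    print(f"examiner {i}: AUC = {roc_auc_score(labels, examiner):.2f}")
print(f"fused:      AUC = {roc_auc_score(labels, fused):.2f}")

On simulated data like this, the fused ratings typically match or exceed each individual examiner's AUC, mirroring the abstract's observation that fusion boosts lower-performing individuals and reduces variability.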
Authors and affiliations:
1. P Jonathon Phillips (jonathon@nist.gov; ORCID 0000-0001-6284-5197) – Information Access Division, National Institute of Standards and Technology, Gaithersburg, MD 20899
2. Amy N Yates – Information Access Division, National Institute of Standards and Technology, Gaithersburg, MD 20899
3. Ying Hu – School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080
4. Carina A Hahn – School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080
5. Eilidh Noyes – School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080
6. Kelsey Jackson – School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080
7. Jacqueline G Cavazos – School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080
8. Géraldine Jeckeln – School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080
9. Rajeev Ranjan – Department of Electrical and Computer Engineering, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20854
10. Swami Sankaranarayanan – Department of Electrical and Computer Engineering, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20854
11. Jun-Cheng Chen – University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20854
12. Carlos D Castillo – University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20854
13. Rama Chellappa – Department of Electrical and Computer Engineering, University of Maryland Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20854
14. David White – School of Psychology, The University of New South Wales, Sydney, NSW 2052, Australia
15. Alice J O'Toole – School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX 75080
Copyright © 2018 the Author(s). Published by PNAS.
DOI 10.1073/pnas.1721355115
Discipline Sciences (General)
EISSN 1091-6490
Genre Research Support, U.S. Gov't, Non-P.H.S.; Research Support, Non-U.S. Gov't; Journal Article
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 24
Keywords face identification; forensic science; wisdom-of-crowds; face recognition algorithm; machine learning technology
OpenAccessLink https://www.pnas.org/doi/10.1073/pnas.1721355115
PMID 29844174
PublicationDate 2018-06-12
PublicationPlace United States
PublicationTitle Proceedings of the National Academy of Sciences - PNAS
PublicationTitleAlternate Proc Natl Acad Sci U S A
PublicationYear 2018
References 31097803 - Nat Hum Behav. 2018 Jul;2(7):444
StartPage 6171
SubjectTerms Algorithms; Biometric Identification - methods; Face - anatomy & histology; Forensic Sciences - methods; Humans; Machine Learning; Reproducibility of Results
URI https://www.ncbi.nlm.nih.gov/pubmed/29844174
https://www.proquest.com/docview/2047252953
Volume 115