Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models
Saved in:
| Published in: | The American Psychologist, Vol. 78, No. 1, p. 36 |
|---|---|
| Main authors: | Landers, Richard N; Behrend, Tara S |
| Medium: | Journal Article |
| Language: | English |
| Publication details: | United States, 01.01.2023 |
| Subject: | Artificial Intelligence; Humans |
| ISSN: | 1935-990X |
| Abstract | Researchers, governments, ethics watchdogs, and the public are increasingly voicing concerns about unfairness and bias in artificial intelligence (AI)-based decision tools. Psychology's more-than-a-century of research on the measurement of psychological traits and the prediction of human behavior can benefit such conversations, yet psychological researchers often find themselves excluded due to mismatches in terminology, values, and goals across disciplines. In the present paper, we begin to build a shared interdisciplinary understanding of AI fairness and bias by first presenting three major lenses, which vary in focus and prototypicality by discipline, from which to consider relevant issues: (a) individual attitudes, (b) legality, ethicality, and morality, and (c) embedded meanings within technical domains. Using these lenses, we next present psychological audits as a standardized approach for evaluating the fairness and bias of AI systems that make predictions about humans across disciplinary perspectives. We present 12 crucial components to audits across three categories: (a) components related to AI models in terms of their source data, design, development, features, processes, and outputs, (b) components related to how information about models and their applications are presented, discussed, and understood from the perspectives of those employing the algorithm, those affected by decisions made using its predictions, and third-party observers, and (c) meta-components that must be considered across all other auditing components, including cultural context, respect for persons, and the integrity of individual research designs used to support all model developer claims. (PsycInfo Database Record (c) 2023 APA, all rights reserved). |
|---|---|
| Author | Landers, Richard N; Behrend, Tara S |
| Author details | Landers, Richard N (ORCID 0000-0001-5611-2923), Department of Psychology; Behrend, Tara S (ORCID 0000-0002-7943-5298), Department of Psychological Sciences |
| ContentType | Journal Article |
| DOI | 10.1037/amp0000972 |
| Discipline | Psychology |
| EISSN | 1935-990X |
| ExternalDocumentID | 35157476 |
| Genre | Journal Article |
| ISICitedReferencesCount | 103 |
| ISSN | 1935-990X |
| IsDoiOpenAccess | false |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Language | English |
| ORCID | 0000-0001-5611-2923 0000-0002-7943-5298 |
| OpenAccessLink | https://doi.org/10.1037/amp0000972 |
| PMID | 35157476 |
| PublicationDate | 2023-01-01 |
| PublicationPlace | United States |
| PublicationTitle | The American psychologist |
| PublicationTitleAlternate | Am Psychol |
| PublicationYear | 2023 |
| StartPage | 36 |
| SubjectTerms | Artificial Intelligence; Humans |
| Title | Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models |
| URI | https://www.ncbi.nlm.nih.gov/pubmed/35157476 https://www.proquest.com/docview/2629059878 |
| Volume | 78 |