Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms
| Published in: | AI & society, Volume 38, Issue 2, pp. 549–563 |
|---|---|
| Main authors: | Giovanola, Benedetta; Tiribelli, Simona |
| Format: | Journal Article |
| Language: | English |
| Published: | London: Springer London, 01.04.2023 |
| ISSN: | 0951-5666, 1435-5655 |
| Abstract | The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently explored. Our paper aims to fill this gap and address the AI ethics principle of fairness from a conceptual standpoint, drawing insights from accounts of fairness elaborated in moral philosophy and using them to conceptualise fairness as an ethical value and to redefine fairness in HMLA accordingly. To achieve our goal, following a first section aimed at clarifying the background, methodology and structure of the paper, in the second section, we provide an overview of the discussion of the AI ethics principle of fairness in HMLA and show that the concept of fairness underlying this debate is framed in purely distributive terms and overlaps with non-discrimination, which is defined in turn as the absence of biases. After showing that this framing is inadequate, in the third section, we pursue an ethical inquiry into the concept of fairness and argue that fairness ought to be conceived of as an ethical value. Following a clarification of the relationship between fairness and non-discrimination, we show that the two do not overlap and that fairness requires much more than just non-discrimination. Moreover, we highlight that fairness not only has a distributive but also a socio-relational dimension. Finally, we pinpoint the constitutive components of fairness. In doing so, we base our arguments on a renewed reflection on the concept of respect, which goes beyond the idea of equal respect to include respect for individual persons. 
In the fourth section, we analyse the implications of our conceptual redefinition of fairness as an ethical value in the discussion of fairness in HMLA. Here, we claim that fairness requires more than non-discrimination and the absence of biases as well as more than just distribution; it needs to ensure that HMLA respects persons both as persons and as particular individuals. Finally, in the fifth section, we sketch some broader implications and show how our inquiry can contribute to making HMLA and, more generally, AI promote the social good and a fairer society. |
|---|---|
| Audience | Academic |
| Author | Tiribelli, Simona; Giovanola, Benedetta |
| Author details | 1. Giovanola, Benedetta (benedetta.giovanola@unimc.it): Department of Political Sciences, Communication, and International Relations, University of Macerata; Department of Philosophy, Tufts University. 2. Tiribelli, Simona: Department of Political Sciences, Communication, and International Relations, University of Macerata; Institute for Technology and Global Health, PathCheck Foundation |
| ContentType | Journal Article |
| Copyright | The Authors 2023; corrected publication 2023. The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”); notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
| DOI | 10.1007/s00146-022-01455-6 |
| Discipline | Computer Science Philosophy |
| EISSN | 1435-5655 |
| EndPage | 563 |
| ExternalDocumentID | PMC9123626; PMID 35615443 |
| Genre | Journal Article |
| ISSN | 0951-5666 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 2 |
| Keywords | Discrimination; Ethics of algorithms; Healthcare machine-learning algorithms; Bias; Fairness; Respect |
| Language | English |
| License | The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
| OpenAccessLink | https://link.springer.com/10.1007/s00146-022-01455-6 |
| PMID | 35615443 |
| PageCount | 15 |
| PublicationDateYYYYMMDD | 2023-04-01 |
| PublicationPlace | London |
| PublicationSubtitle | Journal of Knowledge, Culture and Communication |
| PublicationTitle | AI & society |
| PublicationTitleAbbrev | AI & Soc |
| PublicationTitleAlternate | AI Soc |
| PublicationYear | 2023 |
| Publisher | Springer London; Springer Nature B.V |
In: Hildebrandt M, Gutwirth S (eds) Profiling the European Citizen. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-6914-7_2 PariserEThe filter bubble2011New YorkPenguin ShahHAlgorithmic accountabilityPhilos Trans R Soc Math Phys Eng Sci201837621282017036210.1098/rsta.2017.0362 TranBXVuGTHaGHVuongQHHoMTVuongTTHoRCMGlobal evolution of research in artificial intelligence in health and medicine: a bibliometric studyJ Clin Med201910.3390/jcm8030360 Lobosco K (2013) Facebook friends could change your credit score. CNN Business. https://money.cnn.com/2013/08/26/technology/social/facebook-credit-score/index.html. . Retrieved March 11, 2021 WongPDemocratizing algorithmic fairnessPhilos Technol201910.1007/s13347-019-00355-w HaySIGeorgeDBMoyesCLBrownsteinJSBig data opportunities for global infectious disease surveillancePLoS Med201310410.1371/journal.pmed.1001413 ChouldechovaAFair prediction with disparate impact: a study of bias in recidivism prediction instrumentsBig Data20175215316310.1089/big.2016.0047 Turner LeeNDetecting racial bias in algorithms and machine learningJ Inf Commun Ethics Soc201816325226010.1108/JICES-06-2018-0056 Pleiss G, Raghavan M, Wu F, Kleinberg J, Weinberger KQ (2017) On fairness and calibration. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17). Curran Associates Inc., Red Hook, NY, USA, pp 5684–5693. 
WolffJFairness respect, and the egalitarian ethosPhilos Public Affairs19982729712210 BX Tran (1455_CR106) 2019 1455_CR2 1455_CR1 V Gulshan (1455_CR53) 2016; 316 IG Cohen (1455_CR21) 2014; 33 M Fricker (1455_CR41) 2007 K Lippert-Rasmussen (1455_CR71) 2013 T Grote (1455_CR52) 2020; 46 1455_CR7 1455_CR73 1455_CR72 Z Obermeyer (1455_CR83) 2019; 366 1455_CR5 S Moreau (1455_CR77) 2010; 38 T Khaitan (1455_CR68) 2015 BD Mittelstadt (1455_CR75) 2016 R Dworkin (1455_CR33) 2000 A Tsamados (1455_CR107) 2021 S Shapiro (1455_CR100) 2020; 21 B Norgeot (1455_CR82) 2019; 25 H Brighouse (1455_CR17) 2010 1455_CR86 MD McCradden (1455_CR74) 2020; 2 1455_CR89 N Fleming (1455_CR39) 2018; 557 1455_CR84 N Barakat (1455_CR6) 2010; 14 A Jobin (1455_CR63) 2019; 1 V Eubanks (1455_CR37) 2018 J Morley (1455_CR78) 2020; 260 J Kleinberg (1455_CR67) 2017 R Noggle (1455_CR80) 1999; 29 1455_CR112 AG Ferguson (1455_CR38) 2017 M Seng Ah Lee (1455_CR98) 2020 H Shah (1455_CR99) 2018; 376 A Buhmann (1455_CR15) 2019 1455_CR119 D Hellman (1455_CR56) 2013 1455_CR54 EJ Topol (1455_CR105) 2019; 25 1455_CR59 1455_CR58 1455_CR57 EB Laidlaw (1455_CR70) 2008; 17 C Garattini (1455_CR45) 2019; 32 B Williams (1455_CR115) 1981 1455_CR51 1455_CR50 P Noor (1455_CR81) 2020; 368 1455_CR66 G Harerimana (1455_CR55) 2018; 6 1455_CR64 F Pasquale (1455_CR88) 2015 S Newell (1455_CR76) 2015; 24 A Chouldechova (1455_CR22) 2017; 5 1455_CR62 B Chin-Yee (1455_CR20) 2019; 62 A Romei (1455_CR94) 2014; 29 S Coll (1455_CR23) 2013; 13 D Shin (1455_CR102) 2019; 98 T Shelby (1455_CR101) 2016 1455_CR32 1455_CR31 1455_CR34 SI Hay (1455_CR60) 2013; 10 N Diakopoulos (1455_CR30) 2017; 5 A Esteva (1455_CR36) 2019; 25 P Wong (1455_CR118) 2019 S Barocas (1455_CR8) 2016 1455_CR44 1455_CR42 SU Noble (1455_CR79) 2018 SD Baum (1455_CR10) 2016 1455_CR47 C Barton (1455_CR9) 2019; 109 1455_CR46 A Rajkomar (1455_CR90) 2018; 169 1455_CR40 J Rawls (1455_CR91) 1971 E Kelly (1455_CR65) 2017; 128 S Umbrello (1455_CR110) 2020; 26 C O’Neil (1455_CR85) 2016 J 
Wolff (1455_CR116) 1998; 27 G Hinton (1455_CR61) 2018; 320 1455_CR11 DS Char (1455_CR19) 2018; 378 E Bozdag (1455_CR14) 2013; 15 1455_CR97 J Waldron (1455_CR114) 2017 1455_CR96 1455_CR13 S Umbrello (1455_CR111) 2021; 1 1455_CR95 R Berk (1455_CR12) 2018 B Giovanola (1455_CR48) 2021 1455_CR92 S Robbins (1455_CR93) 2019; 29 J Burrell (1455_CR16) 2016 I Carter (1455_CR18) 2011; 121 E Anderson (1455_CR4) 1999; 109 N Turner Lee (1455_CR109) 2018; 16 1455_CR103 1455_CR104 DA Vyas (1455_CR113) 2020; 383 1455_CR108 B Giovanola (1455_CR49) 2021 WJ Kuo (1455_CR69) 2001; 66 N Daniels (1455_CR26) 1985 Ó Álvarez-Machancoses (1455_CR3) 2019; 14 1455_CR25 1455_CR24 S Darwall (1455_CR28) 1977; 88 E Pariser (1455_CR87) 2011 B Eidelson (1455_CR35) 2015 J Wolff (1455_CR117) 2010; 14 B Friedman (1455_CR43) 2017; 11 1455_CR29 1455_CR27 |
publication-title: Televis New Med doi: 10.1177/1527476420919691 – ident: 1455_CR1 doi: 10.1145/3351095.3372871 – ident: 1455_CR89 – volume: 109 start-page: 79 year: 2019 ident: 1455_CR9 publication-title: Comput Biol Med doi: 10.1016/j.compbiomed.2019.04.027 – ident: 1455_CR96 doi: 10.4159/9780674977440 – year: 2021 ident: 1455_CR107 publication-title: AI Soc doi: 10.1007/s00146-021-01154-8 – ident: 1455_CR34 doi: 10.2139/ssrn.2972855 – ident: 1455_CR42 – year: 2016 ident: 1455_CR8 publication-title: SSRN Electron J doi: 10.2139/ssrn.2477899 – volume-title: Sovereign virtue: the theory and practice of equality year: 2000 ident: 1455_CR33 – volume: 320 start-page: 1101 issue: 11 year: 2018 ident: 1455_CR61 publication-title: JAMA doi: 10.1001/jama.2018.11100 – volume: 38 start-page: 143 issue: 2 year: 2010 ident: 1455_CR77 publication-title: Philos Public Aff doi: 10.1111/j.1088-4963.2010.01181.x – ident: 1455_CR103 – ident: 1455_CR7 – volume: 24 start-page: 3 issue: 1 year: 2015 ident: 1455_CR76 publication-title: J Strateg Inf Syst doi: 10.1016/j.jsis.2015.02.001 – volume: 366 start-page: 447 year: 2019 ident: 1455_CR83 publication-title: Science doi: 10.1126/science.aax2342 – ident: 1455_CR97 doi: 10.1145/3287560.3287598 – ident: 1455_CR84 – ident: 1455_CR31 – ident: 1455_CR57 doi: 10.1007/978-1-4020-6914-7_2 – ident: 1455_CR25 – volume: 14 start-page: 335 issue: 3/4 year: 2010 ident: 1455_CR117 publication-title: J Ethics doi: 10.1007/s10892-010-9085-8 – volume: 88 start-page: 36 year: 1977 ident: 1455_CR28 publication-title: Ethics doi: 10.1086/292054 – ident: 1455_CR64 doi: 10.1109/ICDMW.2012.101 – year: 2019 ident: 1455_CR106 publication-title: J Clin Med doi: 10.3390/jcm8030360 |
| StartPage | 549 |
| SubjectTerms | Algorithms; Artificial Intelligence; Bias; Computer Science; Control; Data mining; Engineering Economics; Ethics; Health care; Learning algorithms; Logistics; Machine learning; Marketing; Mechatronics; Medical ethics; Methodology of the Social Sciences; Organization; Original Article; Performing Arts; Philosophy; Principles; Robotics |
| Title | Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms |
| URI | https://link.springer.com/article/10.1007/s00146-022-01455-6 https://www.ncbi.nlm.nih.gov/pubmed/35615443 https://www.proquest.com/docview/2807780930 https://www.proquest.com/docview/2670064943 https://pubmed.ncbi.nlm.nih.gov/PMC9123626 |
| Volume | 38 |
| Citation | Giovanola, Benedetta; Tiribelli, Simona: Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. AI & society 38(2), pp. 549–563. Springer London, 2023-04-01. ISSN 0951-5666, eISSN 1435-5655. DOI: 10.1007/s00146-022-01455-6 |