Investigating for bias in healthcare algorithms: a sex-stratified analysis of supervised machine learning models in liver disease prediction
| Published in: | BMJ health & care informatics, Volume 29, Issue 1, p. e100457 |
|---|---|
| Main authors: | Straw, Isabel; Wu, Honghan |
| Format: | Journal Article |
| Language: | English |
| Published: | England: BMJ Publishing Group Ltd, 01.04.2022 |
| Subjects: | Public health informatics; Health Equity; Artificial intelligence; Machine Learning |
| ISSN: | 2632-1009 |
| Online access: | Get full text |
| Abstract | Objectives: The Indian Liver Patient Dataset (ILPD) is used extensively to create algorithms that predict liver disease. Given the existing research describing demographic inequities in liver disease diagnosis and management, these algorithms require scrutiny for potential biases. We address this overlooked issue by investigating ILPD models for sex bias. Methods: Following our literature review of ILPD papers, the models reported in existing studies are recreated and then interrogated for bias. We define four experiments, training on sex-unbalanced/balanced data, with and without feature selection. We build random forests (RFs), support vector machines (SVMs), Gaussian Naïve Bayes and logistic regression (LR) classifiers, running experiments 100 times and reporting average results with SD. Results: We reproduce published models achieving accuracies of >70% (LR 71.31% (2.37 SD) – SVM 79.40% (2.50 SD)) and demonstrate a previously unobserved performance disparity. Across all classifiers, females suffer from a higher false negative rate (FNR). Presently, RF and LR classifiers are reported as the most effective models, yet in our experiments they demonstrate the greatest FNR disparity (RF −21.02%; LR −24.07%). Discussion: We demonstrate a sex disparity that exists in published ILPD classifiers. In practice, the higher FNR for females would manifest as increased rates of missed diagnosis for female patients and a consequent lack of appropriate care. Our study demonstrates that evaluating biases in the initial stages of machine learning can provide insights into inequalities in current clinical practice, reveal pathophysiological differences between males and females, and mitigate the digitisation of inequalities into algorithmic systems. Conclusion: Our findings are important to medical data scientists, clinicians and policy-makers involved in the implementation of medical artificial intelligence systems. An awareness of the potential biases of these systems is essential in preventing the digital exacerbation of healthcare inequalities. |
|---|---|
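The abstract describes comparing classifiers by their false negative rate (FNR) in each sex group and reporting the male-minus-female gap. A minimal sketch of that disparity metric, with hypothetical function names and illustrative toy data rather than ILPD values or the authors' code:

```python
# Hedged sketch: sex-stratified false negative rate (FNR) disparity.
# All labels below are toy data, not results from the ILPD study.

def fnr(y_true, y_pred):
    """FNR = FN / (FN + TP), computed over positive (diseased) cases only."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return fn / (fn + tp) if (fn + tp) else 0.0

def fnr_disparity(y_true, y_pred, sex):
    """Male FNR minus female FNR; a negative value means females have the
    higher miss rate, the direction of disparity reported in the paper."""
    male = [(t, p) for t, p, s in zip(y_true, y_pred, sex) if s == "M"]
    female = [(t, p) for t, p, s in zip(y_true, y_pred, sex) if s == "F"]
    fnr_m = fnr([t for t, _ in male], [p for _, p in male])
    fnr_f = fnr([t for t, _ in female], [p for _, p in female])
    return fnr_m - fnr_f

# Toy example in which females' positive cases are missed more often.
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0]
sex    = ["M", "M", "M", "F", "F", "F", "M", "F"]
print(round(fnr_disparity(y_true, y_pred, sex), 4))  # → -0.6667
```

Averaging this quantity over repeated train/test splits (the paper reports 100 runs) would give a mean disparity with SD, analogous to the −21.02% (RF) and −24.07% (LR) figures quoted above.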
| Author | Straw, Isabel; Wu, Honghan |
| AuthorAffiliation | Institute of Health Informatics, University College London, London, UK |
| ContentType | Journal Article |
| Copyright | Author(s) (or their employer(s)) 2022. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ. Also distributed under CC BY 4.0 in accordance with the Creative Commons Attribution 4.0 Unported license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/. |
| DOI | 10.1136/bmjhci-2021-100457 |
| EISSN | 2632-1009 |
| ExternalDocumentID | oai_doaj_org_article_92a14de84b2f4a2591e8899b703e892b; PMC9039354; PMID 35470133; 10_1136_bmjhci_2021_100457 |
| Genre | Journal Article; Review |
| GeographicLocations | United Kingdom--UK |
| GrantInformation_xml | UK Research and Innovation, grant EP/S021612/1 (http://dx.doi.org/10.13039/100014013); Medical Research Council, grants MR/S004149/2 and MC_PC_18029 |
| ISICitedReferencesCount | 46 |
| ISSN | 2632-1009 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Keywords | Public health informatics; BMJ Health Informatics; Health Equity; Artificial intelligence; Machine Learning |
| Language | English |
| License | This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/. Author(s) (or their employer(s)) 2022. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ. This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/. |
| ORCID | 0000-0003-0003-3550 |
| OpenAccessLink | https://doaj.org/article/92a14de84b2f4a2591e8899b703e892b |
| PMID | 35470133 |
| PublicationCentury | 2000 |
| PublicationDate | 2022-04-01 |
| PublicationDateYYYYMMDD | 2022-04-01 |
| PublicationDecade | 2020 |
| PublicationPlace | England |
| PublicationTitle | BMJ health & care informatics |
| PublicationTitleAbbrev | BMJ Health Care Inform |
| PublicationTitleAlternate | BMJ Health Care Inform |
| PublicationYear | 2022 |
| Publisher | BMJ Publishing Group Ltd |
| References | Li, Wang, Yang (R27) 2018; 14 Cirillo, Catuara-Solarz, Morey (R15) 2020; 3 Suthahar, Meems, Ho (R12) 2020; 22 Straw, Callison-Burch (R17) 2020; 15 Krieger, Fee (R8) 1994; 24 Grimm, Haslacher, Kampitsch (R11) 2009; 39 Hoffmann, Tarzian (R9) 2001; 29 Hamberg (R10) 2008; 4 Gulia, Praveen Rani (R20) 2014; 5 Mathur, Schaubel, Gong (R4) 2011; 11 Jin, Kim, Kim (R23) 2014; 6 Vatsalya, Liaquat, Ghosh (R3) 2016; 9 Sidey-Gibbons, Sidey-Gibbons (R14) 2019; 19 Eneanya, Boulware, Tsai (R29) 2022; 18 Morgan, Sherlock (R2) 1977; 1 Stepien, Fedirko, Duarte-Salles (R13) 2016; 40 Guy, Peters (R26) 2013; 9 Blachier, Leleu, Peck-Radosavljevic (R1) 2013; 58 Powe (R30) 2020; 324 Straw (R7) 2020; 110 Vyas, Eisenstein, Jones (R28) 2020; 383 2025072913053808000_29.1.e100457.21 Hamberg (2025072913053808000_29.1.e100457.10) 2008; 4 2025072913053808000_29.1.e100457.22 Gulia (2025072913053808000_29.1.e100457.20) 2014; 5 2025072913053808000_29.1.e100457.29 2025072913053808000_29.1.e100457.25 2025072913053808000_29.1.e100457.24 2025072913053808000_29.1.e100457.6 Stepien (2025072913053808000_29.1.e100457.13) 2016; 40 Straw (2025072913053808000_29.1.e100457.17) 2020; 15 Vatsalya (2025072913053808000_29.1.e100457.3) 2016; 9 Krieger (2025072913053808000_29.1.e100457.8) 1994; 24 Li (2025072913053808000_29.1.e100457.27) 2018; 14 2025072913053808000_29.1.e100457.11 Straw (2025072913053808000_29.1.e100457.7) 2020; 110 Hoffmann (2025072913053808000_29.1.e100457.9) 2001; 29 2025072913053808000_29.1.e100457.30 2025072913053808000_29.1.e100457.18 Jin (2025072913053808000_29.1.e100457.23) 2014; 6 2025072913053808000_29.1.e100457.19 2025072913053808000_29.1.e100457.14 2025072913053808000_29.1.e100457.16 2025072913053808000_29.1.e100457.5 2025072913053808000_29.1.e100457.2 Vyas (2025072913053808000_29.1.e100457.28) 2020; 383 2025072913053808000_29.1.e100457.1 Suthahar (2025072913053808000_29.1.e100457.12) 2020; 22 Guy (2025072913053808000_29.1.e100457.26) 2013; 9 Cirillo 
(2025072913053808000_29.1.e100457.15) 2020; 3 Mathur (2025072913053808000_29.1.e100457.4) 2011; 11 |
| SecondaryResourceType | review_article |
| Snippet | Objectives: The Indian Liver Patient Dataset (ILPD) is used extensively to create algorithms that predict liver disease. Given the existing research describing... |
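The disparity metric the abstract reports (a higher false negative rate for female patients, e.g. RF −21.02%, LR −24.07%) can be illustrated with a minimal sketch. This is not the authors' code; the record layout, helper names, and toy predictions below are hypothetical, and only show how a sex-stratified FNR gap is computed from labels and predictions.

```python
# Minimal sketch (not the authors' implementation): sex-stratified
# false negative rate (FNR) gap. Records are (sex, y_true, y_pred)
# tuples with 1 = liver disease present, 0 = absent.

def fnr(pairs):
    """FNR = FN / (FN + TP) over (y_true, y_pred) pairs."""
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    return fn / (fn + tp) if (fn + tp) else 0.0

def fnr_gap(records):
    """Male FNR minus female FNR; negative means females are
    missed (false-negatived) more often than males."""
    by_sex = {"M": [], "F": []}
    for sex, t, p in records:
        by_sex[sex].append((t, p))
    return fnr(by_sex["M"]) - fnr(by_sex["F"])

# Toy, hypothetical predictions: females with disease are missed
# twice as often as males here, giving a negative gap.
toy = [("M", 1, 1), ("M", 1, 1), ("M", 1, 0), ("M", 0, 0),
       ("F", 1, 0), ("F", 1, 0), ("F", 1, 1), ("F", 0, 0)]
print(round(fnr_gap(toy), 4))  # → -0.3333
```

In the study's setting the same computation would be applied to each classifier's test-set predictions, averaged over the 100 repeated runs.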
| SourceID | doaj pubmedcentral proquest pubmed crossref bmj |
| SourceType | Open Website; Open Access Repository; Aggregation Database; Index Database; Enrichment Source; Publisher |
| StartPage | e100457 |
| SubjectTerms | Algorithms; Artificial Intelligence; Bayes Theorem; Bias; Biomarkers; BMJ Health Informatics; Delivery of Health Care; Female; Gender differences; Health Equity; Health informatics; Humans; Liver Diseases; Machine Learning; Male; Original Research; Public health informatics; Supervised Machine Learning |
| Title | Investigating for bias in healthcare algorithms: a sex-stratified analysis of supervised machine learning models in liver disease prediction |
| URI | https://informatics.bmj.com/content/29/1/e100457.full https://www.ncbi.nlm.nih.gov/pubmed/35470133 https://www.proquest.com/docview/2666218707 https://www.proquest.com/docview/2655562017 https://pubmed.ncbi.nlm.nih.gov/PMC9039354 https://doaj.org/article/92a14de84b2f4a2591e8899b703e892b |
| Volume | 29 |
| openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Investigating+for+bias+in+healthcare+algorithms%3A+a+sex-stratified+analysis+of+supervised+machine+learning+models+in+liver+disease+prediction&rft.jtitle=BMJ+health+%26+care+informatics&rft.au=Straw%2C+Isabel&rft.au=Wu%2C+Honghan&rft.date=2022-04-01&rft.issn=2632-1009&rft.eissn=2632-1009&rft.volume=29&rft.issue=1&rft_id=info:doi/10.1136%2Fbmjhci-2021-100457&rft.externalDBID=NO_FULL_TEXT |