Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial
| Published in: | JAMA Network Open Vol. 7; no. 10; p. e2440969 |
|---|---|
| Main Authors: | Goh, Ethan; Gallo, Robert; Hom, Jason; Strong, Eric; Weng, Yingjie; Kerman, Hannah; Cool, Joséphine A; Kanjee, Zahir; Parsons, Andrew S; Ahuja, Neera; Horvitz, Eric; Yang, Daniel; Milstein, Arnold; Olson, Andrew P. J; Rodman, Adam; Chen, Jonathan H |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: American Medical Association, 01.10.2024 |
| Subjects: | |
| ISSN: | 2574-3805 |
| Abstract | IMPORTANCE: Large language models (LLMs) have shown promise in their performance on both multiple-choice and open-ended medical reasoning examinations, but it remains unknown whether the use of such tools improves physician diagnostic reasoning. OBJECTIVE: To assess the effect of an LLM on physicians’ diagnostic reasoning compared with conventional resources. DESIGN, SETTING, AND PARTICIPANTS: A single-blind randomized clinical trial was conducted from November 29 to December 29, 2023. Using remote video conferencing and in-person participation across multiple academic medical institutions, physicians with training in family medicine, internal medicine, or emergency medicine were recruited. INTERVENTION: Participants were randomized to either access the LLM in addition to conventional diagnostic resources or conventional resources only, stratified by career stage. Participants were allocated 60 minutes to review up to 6 clinical vignettes. MAIN OUTCOMES AND MEASURES: The primary outcome was performance on a standardized rubric of diagnostic performance based on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps, validated and graded via blinded expert consensus. Secondary outcomes included time spent per case (in seconds) and final diagnosis accuracy. All analyses followed the intention-to-treat principle. A secondary exploratory analysis evaluated the standalone performance of the LLM by comparing the primary outcomes between the LLM alone group and the conventional resource group. RESULTS: Fifty physicians (26 attendings, 24 residents; median years in practice, 3 [IQR, 2-8]) participated virtually as well as at 1 in-person site. The median diagnostic reasoning score per case was 76% (IQR, 66%-87%) for the LLM group and 74% (IQR, 63%-84%) for the conventional resources-only group, with an adjusted difference of 2 percentage points (95% CI, −4 to 8 percentage points; P = .60). The median time spent per case for the LLM group was 519 (IQR, 371-668) seconds, compared with 565 (IQR, 456-788) seconds for the conventional resources group, with a time difference of −82 (95% CI, −195 to 31; P = .20) seconds. The LLM alone scored 16 percentage points (95% CI, 2-30 percentage points; P = .03) higher than the conventional resources group. CONCLUSIONS AND RELEVANCE: In this trial, the availability of an LLM to physicians as a diagnostic aid did not significantly improve clinical reasoning compared with conventional resources. The LLM alone demonstrated higher performance than both physician groups, indicating the need for technology and workforce development to realize the potential of physician-artificial intelligence collaboration in clinical practice. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT06157944 |
|---|---|
| AbstractList | This randomized clinical trial evaluates the diagnostic performance of physicians with use of a large language model compared with conventional resources. |
| Author | Hom, Jason; Ahuja, Neera; Kanjee, Zahir; Parsons, Andrew S; Horvitz, Eric; Cool, Joséphine A; Strong, Eric; Yang, Daniel; Goh, Ethan; Rodman, Adam; Weng, Yingjie; Kerman, Hannah; Gallo, Robert; Chen, Jonathan H; Milstein, Arnold; Olson, Andrew P. J |
| AuthorAffiliation | 1 Stanford Center for Biomedical Informatics Research, Stanford University, Stanford, California; 2 Stanford Clinical Excellence Research Center, Stanford University, Stanford, California; 3 Center for Innovation to Implementation, VA Palo Alto Health Care System, Palo Alto, California; 4 Department of Hospital Medicine, Stanford University School of Medicine, Stanford, California; 5 Quantitative Sciences Unit, Stanford University School of Medicine, Stanford, California; 6 Department of Hospital Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts; 7 Department of Hospital Medicine, Harvard Medical School, Boston, Massachusetts; 8 Department of Hospital Medicine, School of Medicine, University of Virginia, Charlottesville; 9 Microsoft Corp, Redmond, Washington; 10 Stanford Institute for Human-Centered Artificial Intelligence, Stanford, California; 11 Department of Hospital Medicine, Kaiser Permanente, Oakland, California; 12 Department of Hospital Medicine, University of Minnesota Medical School, Minneapolis; 13 Division of Hospital Medicine, Stanford University, Stanford, California |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/39466245 (View this record in MEDLINE/PubMed) |
| ContentType | Journal Article |
| Copyright | 2024. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. Copyright 2024 Goh E et al. |
| DOI | 10.1001/jamanetworkopen.2024.40969 |
| DatabaseName | JAMA Network (Open Access) CrossRef Medline MEDLINE MEDLINE (Ovid) MEDLINE MEDLINE PubMed ProQuest Health & Medical Complete (Alumni) MEDLINE - Academic PubMed Central (Full Participant titles) |
| DatabaseTitle | CrossRef MEDLINE Medline Complete MEDLINE with Full Text PubMed MEDLINE (Ovid) ProQuest Health & Medical Complete (Alumni) MEDLINE - Academic |
| DatabaseTitleList | MEDLINE - Academic ProQuest Health & Medical Complete (Alumni) MEDLINE |
| DocumentTitleAlternate | Large Language Model Influence on Diagnostic Reasoning |
| EISSN | 2574-3805 |
| ExternalDocumentID | PMC11519755 39466245 10_1001_jamanetworkopen_2024_40969 2825395 |
| Genre | Randomized Controlled Trial; Research Support, Non-U.S. Gov't; Journal Article; Research Support, N.I.H., Extramural |
| ISICitedReferencesCount | 188 |
| ISSN | 2574-3805 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 10 |
| Language | English |
| License | This is an open access article distributed under the terms of the CC-BY License. |
| OpenAccessLink | http://dx.doi.org/10.1001/jamanetworkopen.2024.40969 |
| PMID | 39466245 |
| PQID | 3143267771 |
| PQPubID | 5319538 |
| PublicationDate | 2024-10-01 |
| PublicationPlace | United States |
| PublicationTitle | JAMA Network Open |
| PublicationTitleAlternate | JAMA Netw Open |
| PublicationYear | 2024 |
| Publisher | American Medical Association |
| StartPage | e2440969 |
| SubjectTerms | Adult; Clinical Competence - statistics & numerical data; Clinical Reasoning; Clinical trials; Female; Health Informatics; Humans; Language; Large language models; Male; Multiple choice; Online Only; Original Investigation; Physicians - psychology; Physicians - statistics & numerical data; Single-Blind Method |
| Title | Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial |
| URI | http://dx.doi.org/10.1001/jamanetworkopen.2024.40969 https://www.ncbi.nlm.nih.gov/pubmed/39466245 https://www.proquest.com/docview/3143267771 https://www.proquest.com/docview/3121283835 https://pubmed.ncbi.nlm.nih.gov/PMC11519755 |
| Volume | 7 |