The doc versus the bot: A pilot study to assess the quality and accuracy of physician and chatbot responses to clinical questions in gynecologic oncology


Bibliographic Details
Published in: Gynecologic oncology reports, Vol. 55, p. 101477
Main Authors: Anastasio, Mary Katherine, Peters, Pamela, Foote, Jonathan, Melamed, Alexander, Modesitt, Susan C., Musa, Fernanda, Rossi, Emma, Albright, Benjamin B., Havrilesky, Laura J., Moss, Haley A.
Format: Journal Article
Language: English
Published: Netherlands: Elsevier Inc, 01.10.2024
Subjects:
ISSN: 2352-5789
Online Access: Full text
Description
Abstract:
• Physicians provided higher quality responses to common clinical questions in gynecologic oncology compared to chatbots.
• Chatbots provided longer, more inaccurate, and lower quality responses to gynecologic oncology clinical questions.
• Patients should be cautioned against non-approved/non-validated artificial intelligence platforms for medical advice.
Artificial intelligence (AI) applications to medical care are currently under investigation. We aimed to evaluate and compare the quality and accuracy of physician and chatbot responses to common clinical questions in gynecologic oncology. In this cross-sectional pilot study, ten questions about the knowledge and management of gynecologic cancers were selected. Each question was answered by a recruited gynecologic oncologist, the ChatGPT (Generative Pre-trained Transformer) AI platform, and the Bard by Google AI platform. Five recruited gynecologic oncologists who were blinded to the study design were allowed 15 min to respond to each of two questions. Chatbot responses were generated by inserting each question into a fresh session in September 2023. Qualifiers and language identifying the response source were removed. Three gynecologic oncology providers who were blinded to the response source independently reviewed and rated response quality on a 5-point Likert scale, evaluated each response for accuracy, and selected the best response for each question. Overall, physician responses were judged best in 76.7 % of evaluations, versus ChatGPT (10.0 %) and Bard (13.3 %; p < 0.001). The average quality of responses was 4.2/5.0 for physicians, 3.0/5.0 for ChatGPT, and 2.8/5.0 for Bard (p < 0.001 for both pairwise t-tests and for ANOVA). Physicians provided a higher proportion of accurate responses (86.7 %) compared to ChatGPT (60 %) and Bard (43 %; p < 0.001 for both). Physicians provided higher quality responses to gynecologic oncology clinical questions than chatbots. Patients should be cautioned against non-validated AI platforms for medical advice; larger studies on the use of AI for medical advice are needed.
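For readers who want to see how a comparison like the one described in the abstract could be run, the following is a minimal, hypothetical Python sketch (using numpy and scipy, which are not mentioned in the article). It applies a one-way ANOVA and pairwise t-tests to Likert quality scores and a chi-square test to accuracy counts; the chi-square choice and every number below are placeholders assumed for illustration only, not the authors' data, code, or analysis plan.

# Hypothetical sketch, not the study's code: comparing quality ratings and
# accuracy counts from three response sources (physician, ChatGPT, Bard).
import numpy as np
from scipy import stats

# Placeholder 5-point Likert quality ratings (illustrative values only).
physician_scores = np.array([5, 4, 4, 5, 4, 4, 5, 3, 4, 4])
chatgpt_scores   = np.array([3, 3, 4, 2, 3, 3, 4, 2, 3, 3])
bard_scores      = np.array([3, 2, 3, 3, 2, 3, 3, 2, 3, 4])

# One-way ANOVA across the three sources, as the abstract reports.
f_stat, anova_p = stats.f_oneway(physician_scores, chatgpt_scores, bard_scores)

# Pairwise t-tests of physician ratings against each chatbot.
_, p_vs_chatgpt = stats.ttest_ind(physician_scores, chatgpt_scores)
_, p_vs_bard = stats.ttest_ind(physician_scores, bard_scores)

# Accuracy as counts of accurate vs. inaccurate responses per source
# (placeholder counts), compared with a chi-square test of independence;
# the abstract does not state which test was actually used for accuracy.
accuracy_table = np.array([
    [26, 4],   # physician: accurate, inaccurate
    [18, 12],  # ChatGPT
    [13, 17],  # Bard
])
chi2, acc_p, _, _ = stats.chi2_contingency(accuracy_table)

print(f"ANOVA p = {anova_p:.4f}; pairwise t-test p = {p_vs_chatgpt:.4f}, {p_vs_bard:.4f}")
print(f"Accuracy chi-square p = {acc_p:.4f}")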
DOI: 10.1016/j.gore.2024.101477