Performance of ChatGPT on a Radiology Board-style Examination: Insights into Current Strengths and Limitations

Bibliographic Details
Published in: Radiology, Vol. 307, No. 5, p. e230582
Authors: Bhayana, Rajesh; Krishna, Satheesh; Bleakney, Robert R.
Format: Journal Article
Language: English
Published: United States, June 1, 2023
ISSN: 1527-1315
Description
Abstract:

Background: ChatGPT is a powerful artificial intelligence large language model with great potential as a tool in medical practice and education, but its performance in radiology remains unclear.

Purpose: To assess the performance of ChatGPT on radiology board-style examination questions without images and to explore its strengths and limitations.

Materials and Methods: In this exploratory prospective study performed from February 25 to March 3, 2023, 150 multiple-choice questions designed to match the style, content, and difficulty of the Canadian Royal College and American Board of Radiology examinations were grouped by question type (lower-order [recall, understanding] and higher-order [apply, analyze, synthesize] thinking) and topic (physics, clinical). The higher-order thinking questions were further subclassified by type (description of imaging findings, clinical management, application of concepts, calculation and classification, disease associations). ChatGPT performance was evaluated overall, by question type, and by topic. Confidence of language in responses was assessed. Univariable analysis was performed.

Results: ChatGPT answered 69% of questions correctly (104 of 150). The model performed better on questions requiring lower-order thinking (84%, 51 of 61) than on those requiring higher-order thinking (60%, 53 of 89) (P = .002). Compared with lower-order questions, the model performed worse on questions involving description of imaging findings (61%, 28 of 46; P = .04), calculation and classification (25%, two of eight; P = .01), and application of concepts (30%, three of 10; P = .01). ChatGPT performed as well on higher-order clinical management questions (89%, 16 of 18) as on lower-order questions (P = .88). It performed worse on physics questions (40%, six of 15) than on clinical questions (73%, 98 of 135) (P = .02). ChatGPT used confident language consistently, even when incorrect (100%, 46 of 46).

Conclusion: Despite no radiology-specific pretraining, ChatGPT nearly passed a radiology board-style examination without images; it performed well on lower-order thinking questions and clinical management questions but struggled with higher-order thinking questions involving description of imaging findings, calculation and classification, and application of concepts. © RSNA, 2023. See also the editorial by Lourenco et al and the article by Bhayana et al in this issue.
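The abstract states only that univariable analysis was performed, without naming the test. As a minimal sketch, the Python snippet below reproduces the headline comparison (lower-order thinking, 51 of 61 correct, vs. higher-order thinking, 53 of 89 correct) using a Pearson chi-square test; the specific test and the use of scipy are assumptions made here for illustration, not the study's documented method.

    # A minimal sketch, assuming a Pearson chi-square test; counts are taken
    # from the Results: lower-order 51/61 correct, higher-order 53/89 correct.
    from scipy.stats import chi2_contingency

    # Rows: lower-order, higher-order thinking; columns: correct, incorrect
    table = [
        [51, 61 - 51],
        [53, 89 - 53],
    ]

    # correction=False yields the plain Pearson statistic; with Yates'
    # continuity correction the P value shifts slightly (~.003 vs ~.002).
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, P = {p:.3f}")  # close to the reported P = .002

The same counts-in, P-value-out pattern applies to the other comparisons in the Results, for example physics (6 of 15 correct) versus clinical (98 of 135 correct) questions.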
DOI: 10.1148/radiol.230582