One Year On: Assessing Progress of Multimodal Large Language Model Performance on RSNA 2024 Case of the Day Questions
| Published in: | Radiology Vol. 316, No. 2, e250617 |
|---|---|
| Main authors: | , , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | United States, 1 August 2025 |
| Keywords: | |
| ISSN: | 1527-1315 |
| Online access: | Further information |
| Abstract: | Background With the growing use of multimodal large language models (LLMs), numerous vision-enabled models have been developed and made available to the public. Purpose To assess and quantify the advancements of multimodal LLMs in interpreting radiologic quiz cases by examining both image and textual content over the course of 1 year, and to compare model performance with that of radiologists. Materials and Methods For this retrospective study, 95 questions from Case of the Day at the RSNA 2024 Annual Meeting were collected. Seventy-six questions from the 2023 meeting were included as a baseline for comparison. The test accuracies of prominent multimodal LLMs (including OpenAI's ChatGPT, Google's Gemini, and Meta's open-source Llama 3.2 models) were evaluated and compared with each other and with the accuracies of two senior radiologists. The McNemar test was used to assess statistical significance. Results The newly released models OpenAI o1 and GPT-4o achieved scores on the 2024 questions of 59% (56 of 95; 95% CI: 48, 69) and 54% (51 of 95; 95% CI: 43, 64), respectively, whereas Gemini 1.5 Pro (Google) achieved a score of 36% (34 of 95; 95% CI: 26, 46), and Llama 3.2-90B-Vision (Meta) achieved a score of 33% (31 of 95; 95% CI: 23, 43). For the questions from 2023, OpenAI o1 and GPT-4o scored 62% (47 of 76; 95% CI: 50, 73) and 54% (41 of 76; 95% CI: 42, 65), respectively. GPT-4 (from 2023), the only publicly available vision-language model from OpenAI last year, achieved 43% (33 of 76; 95% CI: 32, 55). The accuracy of OpenAI o1 on the 2024 questions (59%) was comparable to that of two radiologists, who scored 58% (55 of 95; 95% CI: 47, 68; P = .99) and 66% (63 of 95; 95% CI: 56, 76; P = .99). Conclusion In 1 year, multimodal LLMs demonstrated substantial advancements, with the latest models from OpenAI outperforming those from Google and Meta. Notably, there was no evidence of a statistically significant difference between the accuracy of OpenAI o1 and the accuracies of two expert radiologists. © RSNA, 2025 See also the editorial by Suh and Suh in this issue. |
|---|---|
| DOI: | 10.1148/radiol.250617 |
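
The Materials and Methods in the abstract above describe reporting per-model accuracy with 95% CIs and comparing paired model and radiologist answers with the McNemar test. The following Python sketch is only an illustration of that kind of analysis under stated assumptions: the per-question correctness vectors are randomly generated placeholders (not study data), and the Wilson interval method is an assumption, since the abstract does not state which CI method was used. It is not the authors' code.

```python
# Illustrative sketch: accuracy with a 95% CI and a McNemar test on paired
# per-question correctness for a model and a radiologist.
# The correctness vectors below are hypothetical placeholders, not study data.
import numpy as np
from statsmodels.stats.proportion import proportion_confint
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n_questions = 95                                 # 2024 Case of the Day items
model_correct = rng.random(n_questions) < 0.59   # hypothetical ~59% accuracy
reader_correct = rng.random(n_questions) < 0.58  # hypothetical ~58% accuracy

# Accuracy and 95% CI for the model (Wilson interval; an assumption here).
acc = model_correct.mean()
ci_lo, ci_hi = proportion_confint(
    model_correct.sum(), n_questions, alpha=0.05, method="wilson"
)
print(f"model accuracy: {acc:.2f} (95% CI: {ci_lo:.2f}, {ci_hi:.2f})")

# McNemar test on the paired 2x2 table of (model correct, reader correct);
# the exact binomial form is appropriate when discordant counts are small.
table = np.array([
    [np.sum(model_correct & reader_correct),  np.sum(model_correct & ~reader_correct)],
    [np.sum(~model_correct & reader_correct), np.sum(~model_correct & ~reader_correct)],
])
result = mcnemar(table, exact=True)
print(f"McNemar P value: {result.pvalue:.2f}")
```

Because the McNemar test conditions only on the discordant pairs (questions one rater got right and the other got wrong), it is suited to this paired design, where each model and each radiologist answered the same set of questions.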