Encoder-decoder models for chest X-ray report generation perform no better than unconditioned baselines

Detailed Bibliography
Published in: PLoS ONE, Vol. 16, No. 11, p. e0259639
Main Authors: Babar, Zaheer; van Laarhoven, Twan; Marchiori, Elena
Format: Journal Article
Language: English
Published: United States: Public Library of Science, 29 Nov 2021
ISSN: 1932-6203
Description
Summary: High-quality radiology reporting of chest X-ray images is of core importance for high-quality patient diagnosis and care. Automatically generated reports can assist radiologists by reducing their workload and may even prevent errors. Machine Learning (ML) models for this task take an X-ray image as input and output a sequence of words. In this work, we show that ML models for this task based on the popular encoder-decoder approach, like 'Show, Attend and Tell' (SA&T), have similar or worse performance than models that do not use the input image, called unconditioned baselines. An unconditioned model achieved a diagnostic accuracy of 0.91 on the IU chest X-ray dataset, significantly outperforming SA&T (0.877) and other popular ML models (p-value < 0.001). This unconditioned model also outperformed SA&T and similar ML methods on the BLEU-4 and METEOR metrics. Moreover, an unconditioned version of SA&T, obtained by permuting the reports generated from images of the test set, achieved a diagnostic accuracy of 0.862, comparable to that of SA&T (p-value ≥ 0.05).
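The permutation baseline described in the summary can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function name `diagnostic_accuracy` and the toy labels are assumptions, standing in for labels extracted from real generated and ground-truth reports.

```python
import random

def diagnostic_accuracy(pred_labels, true_labels):
    """Fraction of images whose predicted label matches the ground truth."""
    correct = sum(p == t for p, t in zip(pred_labels, true_labels))
    return correct / len(true_labels)

# Toy stand-ins for diagnostic labels extracted from reports.
generated = ["normal", "effusion", "normal", "cardiomegaly", "normal"]
reference = ["normal", "effusion", "normal", "normal", "effusion"]

# Accuracy of the image-conditioned model's generated reports.
conditioned_acc = diagnostic_accuracy(generated, reference)

# Unconditioned baseline: permute the generated reports so each image is
# scored against a report produced for a *different* image, destroying any
# image-report correspondence while keeping the report distribution fixed.
rng = random.Random(0)
permuted = generated[:]
rng.shuffle(permuted)
unconditioned_acc = diagnostic_accuracy(permuted, reference)
```

If a model truly uses the image, `conditioned_acc` should clearly exceed `unconditioned_acc`; the paper's finding is that for SA&T-style models on IU chest X-rays, the gap is small or absent.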
Competing Interests: The authors have no competing interests.
DOI: 10.1371/journal.pone.0259639