Multi-task vision transformer using low-level chest X-ray feature corpus for COVID-19 diagnosis and severity quantification



Detailed bibliography
Published in: Medical Image Analysis, Vol. 75, p. 102299
Main authors: Park, Sangjoon; Kim, Gwanghyun; Oh, Yujin; Seo, Joon Beom; Lee, Sang Min; Kim, Jin Hwan; Moon, Sungjun; Lim, Jae-Kwang; Ye, Jong Chul
Format: Journal Article
Language: English
Published: Netherlands: Elsevier B.V., 01.01.2022
ISSN: 1361-8415, 1361-8423
Description
Summary:
• A backbone pre-trained on a large, pre-built CXR dataset is less prone to overfitting.
• A multi-task Vision Transformer leveraging low-level features from the backbone was devised.
• The proposed multi-task model can perform both classification and severity prediction simultaneously.
• The proposed model showed superb generalizability on various external datasets in both tasks.

Developing a robust algorithm to diagnose and quantify the severity of the novel coronavirus disease 2019 (COVID-19) from chest X-rays (CXR) requires a large number of well-curated COVID-19 datasets, which are difficult to collect under the global COVID-19 pandemic. On the other hand, CXR data with other findings are abundant. This situation is ideally suited for the Vision Transformer (ViT) architecture, where a large amount of unlabeled data can be exploited through structural modeling by the self-attention mechanism. However, using the existing ViT directly may not be optimal, as the feature embedding by direct patch flattening or a ResNet backbone in the standard ViT is not designed for CXR. To address this problem, we propose a novel multi-task ViT that leverages a low-level CXR feature corpus obtained from a backbone network that extracts common CXR findings. Specifically, the backbone network is first trained with large public datasets to detect common abnormal findings such as consolidation, opacity, and edema. The embedded features from the backbone network are then used as a corpus for a versatile Transformer model for both the diagnosis and the severity quantification of COVID-19. We evaluate our model on external test datasets from entirely different institutions to assess its generalization capability. The experimental results confirm that our model achieves state-of-the-art performance in both the diagnosis and severity quantification tasks, with outstanding generalization capability, which is a sine qua non of widespread deployment.
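The abstract describes the architecture only at a high level: a backbone extracts low-level CXR-finding features, and a Transformer over those feature tokens serves two heads, diagnosis classification and severity quantification. The sketch below is a minimal PyTorch illustration of that general idea, not the authors' implementation; the class names (CXRBackbone, MultiTaskViT), layer choices, dimensions, and number of output classes are all illustrative assumptions.

# Minimal sketch of the multi-task ViT idea from the abstract; all names and
# dimensions are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class CXRBackbone(nn.Module):
    """Stand-in for a backbone pre-trained to detect common CXR findings."""
    def __init__(self, out_channels: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, out_channels, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((16, 16)),  # fixed 16x16 grid -> 256 feature tokens
        )

    def forward(self, x):
        return self.features(x)  # (B, C, 16, 16) low-level feature map

class MultiTaskViT(nn.Module):
    """Transformer encoder over backbone feature tokens with two task heads."""
    def __init__(self, feat_dim: int = 256, embed_dim: int = 384,
                 num_classes: int = 3, depth: int = 6, heads: int = 6):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)          # project features to tokens
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 16 * 16 + 1, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.diagnosis_head = nn.Linear(embed_dim, num_classes)  # e.g. normal / other / COVID-19
        self.severity_head = nn.Linear(embed_dim, 1)             # scalar severity score

    def forward(self, feat_map):
        B = feat_map.shape[0]
        tokens = self.proj(feat_map.flatten(2).transpose(1, 2))  # (B, 256, embed_dim)
        tokens = torch.cat([self.cls_token.expand(B, -1, -1), tokens], dim=1)
        tokens = self.encoder(tokens + self.pos_embed)
        cls = tokens[:, 0]                                       # shared class token
        return self.diagnosis_head(cls), self.severity_head(cls)

if __name__ == "__main__":
    backbone, model = CXRBackbone(), MultiTaskViT()
    x = torch.randn(2, 1, 512, 512)                  # dummy grayscale CXRs
    logits, severity = model(backbone(x))
    print(logits.shape, severity.shape)              # torch.Size([2, 3]) torch.Size([2, 1])

In this sketch, both heads read the same class token, which is one simple way to share a representation across the classification and severity tasks; the paper's actual head and token design may differ.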
Sangjoon Park and Gwanghyun Kim are co-first authors.
DOI: 10.1016/j.media.2021.102299