Multi-task vision transformer using low-level chest X-ray feature corpus for COVID-19 diagnosis and severity quantification



Bibliographic Details
Published in: Medical Image Analysis, Vol. 75, p. 102299
Main Authors: Park, Sangjoon; Kim, Gwanghyun; Oh, Yujin; Seo, Joon Beom; Lee, Sang Min; Kim, Jin Hwan; Moon, Sungjun; Lim, Jae-Kwang; Ye, Jong Chul
Format: Journal Article
Language: English
Published: Netherlands: Elsevier B.V., 01.01.2022
ISSN: 1361-8415, 1361-8423
Description
Summary:
•A backbone pre-trained on a large public CXR dataset is less prone to overfitting.
•A multi-task Vision Transformer leveraging low-level features from the backbone was devised.
•The proposed multi-task model performs classification and severity prediction simultaneously.
•The proposed model showed superb generalizability on various external datasets in both tasks.
Developing a robust algorithm to diagnose and quantify the severity of the novel coronavirus disease 2019 (COVID-19) from chest X-rays (CXR) requires a large number of well-curated COVID-19 datasets, which are difficult to collect under the global COVID-19 pandemic. On the other hand, CXR data with other findings are abundant. This situation is ideally suited to the Vision Transformer (ViT) architecture, in which large amounts of unlabeled data can be exploited through structural modeling by the self-attention mechanism. However, using an existing ViT directly may not be optimal, as the feature embedding by direct patch flattening or a ResNet backbone in the standard ViT is not designed for CXR. To address this problem, we propose a novel multi-task ViT that leverages a low-level CXR feature corpus obtained from a backbone network that extracts common CXR findings. Specifically, the backbone network is first trained on large public datasets to detect common abnormal findings such as consolidation, opacity, and edema. The embedded features from the backbone network are then used as a corpus for a versatile Transformer model for both the diagnosis and the severity quantification of COVID-19. We evaluate our model on external test datasets from entirely different institutions to assess its generalization capability. The experimental results confirm that our model achieves state-of-the-art performance in both diagnosis and severity quantification tasks with outstanding generalization capability, which is a sine qua non of widespread deployment.
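As a rough illustration of the approach described in the abstract (not the authors' released code), the following PyTorch sketch assumes a pretrained CNN backbone that yields a (batch, channels, 16, 16) low-level CXR feature map; the class name, layer sizes, and the three-way diagnosis head are illustrative assumptions only. The feature map is projected into tokens, processed by a Transformer encoder, and fed to two heads, one for COVID-19 classification and one for severity quantification.

import torch
import torch.nn as nn

class MultiTaskCXRViT(nn.Module):
    """Minimal sketch of a multi-task Transformer over low-level CXR backbone features."""
    def __init__(self, backbone_channels=1024, embed_dim=256,
                 num_tokens=16 * 16, num_layers=4, num_heads=8, num_classes=3):
        super().__init__()
        # 1x1 convolution projects the backbone feature map into token embeddings.
        self.proj = nn.Conv2d(backbone_channels, embed_dim, kernel_size=1)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # Learnable positional embedding for a fixed 16x16 feature grid (assumption).
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens + 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Two task-specific heads share the same Transformer body (multi-task setting).
        self.diagnosis_head = nn.Linear(embed_dim, num_classes)  # e.g. normal / other / COVID-19
        self.severity_head = nn.Linear(embed_dim, 1)             # scalar severity score

    def forward(self, feat_map):
        # feat_map: (B, C, H, W) low-level features from a pretrained CXR backbone.
        tokens = self.proj(feat_map).flatten(2).transpose(1, 2)   # (B, H*W, D)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        x = torch.cat([cls, tokens], dim=1) + self.pos_embed
        x = self.encoder(x)
        pooled = x[:, 0]                                          # [CLS] representation
        return self.diagnosis_head(pooled), self.severity_head(pooled)

# Example with a dummy backbone feature map (2 images, 1024 channels, 16x16 grid):
model = MultiTaskCXRViT()
logits, severity = model(torch.randn(2, 1024, 16, 16))
print(logits.shape, severity.shape)  # torch.Size([2, 3]) torch.Size([2, 1])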
Sangjoon Park and Gwanghyun Kim are co-first authors.
DOI: 10.1016/j.media.2021.102299