Self Pre-Training with Adaptive Mask Autoencoders for Variable-Contrast 3D Medical Imaging

Detailed Bibliography
Published in: Proceedings (International Symposium on Biomedical Imaging), pp. 1-5
Main authors: Das, Badhan Kumar; Zhao, Gengyan; Liu, Han; Re, Thomas J.; Comaniciu, Dorin; Gibson, Eli; Maier, Andreas
Format: Conference paper
Language: English
Publication details: IEEE, 14 April 2025
ISSN: 1945-8452
Description
Summary: The Masked Autoencoder (MAE) has recently demonstrated effectiveness in pre-training Vision Transformers (ViT) for analyzing natural images. By reconstructing complete images from partially masked inputs, the ViT encoder gathers contextual information to predict the missing regions. This capability to aggregate context is especially important in medical imaging, where anatomical structures are functionally and mechanically linked to surrounding regions. However, current methods do not account for variations in the number of input images, which is typically the case in real-world Magnetic Resonance (MR) studies. To address this limitation, we propose a 3D Adaptive Masked Autoencoder (AMAE) architecture that accommodates a variable number of 3D input contrasts per subject. A magnetic resonance imaging (MRI) dataset of 45,364 subjects was used for pre-training, and subsets of 1,648 training, 193 validation, and 215 test subjects were used for fine-tuning. The results demonstrate that self pre-training of this adaptive masked autoencoder can enhance infarct segmentation performance by 2.8%-3.7% for ViT-based segmentation models.
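
The abstract states the mechanism but not the implementation: patches from however many 3D contrasts a study contains are embedded into one token sequence, most tokens are masked, and a ViT is trained to reconstruct the masked patches. The following is a minimal, hypothetical PyTorch sketch of that idea; the names (AMAESketch, patchify_3d), the patch size, the masking ratio, and the simple linear decoder are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn


def patchify_3d(volume: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """Split a (C, D, H, W) volume into flattened non-overlapping cubic patches."""
    c, d, h, w = volume.shape
    x = volume.reshape(c, d // patch, patch, h // patch, patch, w // patch, patch)
    x = x.permute(0, 1, 3, 5, 2, 4, 6).reshape(-1, patch ** 3)
    return x  # (num_patches, patch**3)


class AMAESketch(nn.Module):
    """Toy adaptive MAE: embeds patches from however many contrasts are given,
    masks a random subset, encodes only the visible tokens, and reconstructs
    the masked patches. Positional/contrast embeddings are omitted for brevity."""

    def __init__(self, patch: int = 16, dim: int = 256, mask_ratio: float = 0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch ** 3, dim)      # shared patch embedding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=4
        )
        self.decoder = nn.Linear(dim, patch ** 3)    # per-patch reconstruction head
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, contrasts: list) -> torch.Tensor:
        # The token sequence simply grows with the number of contrasts,
        # so no fixed contrast count is baked into the model.
        patches = torch.cat([patchify_3d(v) for v in contrasts], dim=0)
        tokens = self.embed(patches).unsqueeze(0)    # (1, N, dim)

        n = tokens.shape[1]
        n_keep = max(1, int(n * (1 - self.mask_ratio)))
        perm = torch.randperm(n)
        keep, masked = perm[:n_keep], perm[n_keep:]

        encoded = self.encoder(tokens[:, keep])      # encode visible tokens only

        # Re-insert mask tokens at masked positions, then predict raw voxels.
        full = self.mask_token.expand(1, n, -1).clone()
        full[:, keep] = encoded
        pred = self.decoder(full)                    # (1, N, patch**3)

        # As in MAE, the reconstruction loss is computed on masked patches only.
        return nn.functional.mse_loss(pred[0, masked], patches[masked])


if __name__ == "__main__":
    model = AMAESketch()
    study = [torch.randn(1, 32, 32, 32) for _ in range(3)]  # e.g. 3 contrasts
    print(model(study).item())

Calling the sketch on a dummy study with three contrasts (and equally on two or four) returns the reconstruction loss used as the pre-training objective; a complete implementation would also need positional and contrast-identity embeddings, which are omitted here for brevity.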
DOI: 10.1109/ISBI60581.2025.10981097