Multi-Modal masked autoencoder and parallel Mamba for 3D brain tumor segmentation

Bibliographic details
Published in: Pattern Recognition Letters, Vol. 199, pp. 40–46
Main authors: Huang, Yaya; Liu, Litong; Zhang, Tianzhen; Wang, Sisi; Ting, Chee-Ming
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.01.2026
ISSN: 0167-8655
Description
Summary: Accurate segmentation of brain tumors from multimodal MRI is essential for diagnosis and treatment planning. However, most existing approaches process only a single data modality and therefore fail to exploit the complementary information across modalities. To overcome this limitation, a novel framework called MFMamba is proposed, integrating modality-aware masked autoencoder pretraining, a gated fusion strategy, and a Mamba-based backbone for efficient long-range modeling. In this design, one modality is fully masked while the others are partially masked, forcing the network to reconstruct the missing data through cross-modal learning. The gated fusion module then selectively incorporates generative priors into task-specific features, enhancing the multimodal representations. Experimental results on the BraTS 2023 dataset show that MFMamba achieves Dice scores of 93.77% for Whole Tumor and 92.69% for Tumor Core, corresponding to 1.6–2.1% improvements over state-of-the-art baselines. The gains are statistically significant (p < 0.05), indicating the framework's ability to deliver more precise tumor boundary delineation. Overall, the results suggest that modality-aware fusion can enhance segmentation quality while maintaining computational efficiency, underscoring its potential for clinical image analysis. The implementation is publicly available at https://github.com/ministerhuang/MFMamba.

Highlights:
• A modality-aware MAE strategy is introduced for effective cross-modal pretraining.
• Gated fusion selectively combines generative priors with modality-specific features.
• A Mamba-based segmentation network captures long-range dependencies efficiently.
• The proposed method achieves superior Dice scores on the BraTS 2023 benchmark.
• Results show consistent improvements in Whole Tumor and Tumor Core segmentation.
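As a rough illustration of the modality-aware masking described in the summary, the sketch below (in PyTorch) fully masks one randomly chosen modality per sample and partially masks the rest, so reconstruction must rely on cross-modal context. The tensor layout, voxel-wise masking (a real MAE would mask patches), and the partial_ratio value are assumptions, not details from the paper.

import torch

def modality_aware_mask(x: torch.Tensor, partial_ratio: float = 0.6):
    """x: (B, M, D, H, W) volume with M modalities (e.g. T1, T1ce, T2, FLAIR).

    Returns the masked volume and a boolean mask (True = masked voxel).
    """
    B, M, D, H, W = x.shape
    mask = torch.zeros_like(x, dtype=torch.bool)
    for b in range(B):
        full = torch.randint(0, M, (1,)).item()   # modality masked entirely
        mask[b, full] = True
        for m in range(M):
            if m == full:
                continue
            # partially mask the remaining modalities voxel-wise
            mask[b, m] = torch.rand(D, H, W) < partial_ratio
    return x.masked_fill(mask, 0.0), mask

vol = torch.randn(2, 4, 16, 16, 16)           # toy 4-modality volume
masked, mask = modality_aware_mask(vol)
print(mask.float().mean(dim=(2, 3, 4)))       # per-modality mask ratios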
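The gated fusion the summary describes could take roughly the following form: a learned sigmoid gate decides, per channel and voxel, how much of the generative (MAE) prior to blend into the task-specific feature map. The concatenate-then-gate design and channel sizes are assumptions rather than the paper's implementation.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # gate computed jointly from both feature streams
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, task_feat: torch.Tensor, prior_feat: torch.Tensor):
        g = self.gate(torch.cat([task_feat, prior_feat], dim=1))
        # selectively inject the generative prior into task features
        return task_feat + g * prior_feat

fusion = GatedFusion(32)
out = fusion(torch.randn(1, 32, 8, 8, 8), torch.randn(1, 32, 8, 8, 8))
print(out.shape)  # torch.Size([1, 32, 8, 8, 8])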
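For intuition about the Mamba-based backbone, the toy loop below shows the per-channel state-space recurrence h_t = A·h_{t-1} + B·x_t, y_t = C·h_t that Mamba layers compute with hardware-efficient parallel scans over flattened voxel sequences. Real Mamba blocks make A, B, C input-dependent (selective) and fuse the scan into one kernel; this sequential sketch is for illustration only.

import torch

def diagonal_ssm_scan(x: torch.Tensor, a: torch.Tensor,
                      b: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """x: (L, C) flattened voxel sequence; a, b, c: (C,) per-channel params."""
    L, C = x.shape
    h = torch.zeros(C)
    ys = []
    for t in range(L):
        h = a * h + b * x[t]   # recurrent state update
        ys.append(c * h)       # readout
    return torch.stack(ys)

y = diagonal_ssm_scan(torch.randn(64, 8), torch.rand(8) * 0.9,
                      torch.ones(8), torch.ones(8))
print(y.shape)  # torch.Size([64, 8])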
DOI: 10.1016/j.patrec.2025.10.020