PFEMed: Few-shot medical image classification using prior guided feature enhancement



Bibliographic details
Published in: Pattern Recognition, Vol. 134, Article 109108
Authors: Dai, Zhiyong; Yi, Jianjun; Yan, Lei; Xu, Qingwen; Hu, Liang; Zhang, Qi; Li, Jiahui; Wang, Guoqiang
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.02.2023
ISSN:0031-3203, 1873-5142
Description
Summary:
• A novel dual-encoder architecture is introduced to extract feature representations.
• To our knowledge, we are the first to investigate the proposed VAE model.
• We present a novel method to initialize the priors estimated in the VAE module.
• The proposed approach helps the medical industry utilize knowledge from public datasets.

Deep learning-based methods have recently demonstrated outstanding performance on general image classification tasks. Because optimizing these methods depends on large amounts of labeled data, their application to medical image classification is limited. To address this issue, we propose PFEMed, a novel few-shot classification method for medical images. To extract both general and specific features from medical images, the method employs a dual-encoder structure: one encoder with fixed weights pre-trained on public image classification datasets, and another encoder trained on the target medical dataset. In addition, we introduce a novel prior-guided Variational Autoencoder (VAE) module to enhance the robustness of the target feature, which is the concatenation of the general and specific features. We then match the target features extracted from the support and query medical image samples and predict the category of each query example. Extensive experiments on several publicly available medical image datasets show that our method outperforms current state-of-the-art few-shot methods by a wide margin, in particular outperforming MetaMed on the Pap smear dataset by over 2.63%.
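The dual-encoder and feature-matching pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the two encoders are stand-in random linear maps (in the paper, a frozen pre-trained encoder and a trainable target-domain encoder), the prior-guided VAE enhancement step is omitted, and the matching step is approximated by nearest class prototype under cosine similarity.

```python
import numpy as np

# Stand-ins for PFEMed's dual encoders (assumed shapes, illustration only):
# a frozen encoder pre-trained on public data (general features) and a
# trainable encoder fit on the target medical dataset (specific features).
rng = np.random.default_rng(0)
W_general = rng.normal(size=(64, 32))   # frozen, pre-trained weights (assumed)
W_specific = rng.normal(size=(64, 32))  # target-domain weights (assumed)

def encode(x):
    """Concatenate general and specific features into the target feature."""
    return np.concatenate([x @ W_general, x @ W_specific], axis=-1)

def classify(query, support, support_labels):
    """Assign each query to the class of the nearest support prototype
    (cosine similarity), mimicking the feature-matching step."""
    def l2norm(z):
        return z / np.linalg.norm(z, axis=-1, keepdims=True)
    q, s = l2norm(encode(query)), l2norm(encode(support))
    classes = np.unique(support_labels)
    # Class prototypes: mean target feature over each class's support samples.
    protos = l2norm(np.stack([s[support_labels == c].mean(axis=0)
                              for c in classes]))
    return classes[np.argmax(q @ protos.T, axis=-1)]

# Toy 2-way, 2-shot episode with random "images" flattened to 64-d vectors.
support = rng.normal(size=(4, 64))
labels = np.array([0, 0, 1, 1])
query = rng.normal(size=(3, 64))
print(classify(query, support, labels))
```

In the paper, the VAE module would refine the concatenated target feature before this matching step; the sketch skips straight from concatenation to prototype comparison.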
DOI:10.1016/j.patcog.2022.109108