Small-data image classification via drop-in variational autoencoder


Detailed bibliography
Published in: Signal, Image and Video Processing, Vol. 19, No. 9, p. 766
Main authors: Mahdian, Babak; Nedbal, Radim
Format: Journal Article
Language: English
Published: London: Springer London (Springer Nature B.V.), 01.09.2025
ISSN: 1863-1703, 1863-1711
Online access: Get full text
Description
Summary: It is unclear whether generative approaches can match state-of-the-art supervised-classification performance in high-dimensional feature spaces on extremely small datasets. In this paper, we propose a drop-in variational autoencoder (VAE) for supervised learning with an extremely small training set (i.e., n = 1, ..., 5 images per class). Drop-in classifiers are a common alternative when traditional few-shot learning approaches cannot be used. Classification is defined as a posterior probability density function and approximated via the variational principle. We perform experiments on a large variety of deep feature representations extracted from different layers of popular convolutional neural network (CNN) architectures. We also benchmark against modern classifiers, including the Neural Tangent Kernel (NTK), a Support Vector Machine (SVM) with the NTK kernel, and the Neural Network Gaussian Process (NNGP). The results indicate that the drop-in VAE classifier outperforms all compared classifiers in the extremely small data regime.
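The core idea in the abstract — score a test feature's class posterior via a class-conditional generative model fitted to a handful of deep features — can be sketched in miniature. This is an illustrative toy, not the authors' method: a diagonal Gaussian log-density stands in for the per-class VAE's variational likelihood (ELBO), random vectors stand in for CNN features, and all names and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for deep CNN features: 3 classes, n = 5 "images" per
# class, 64-dimensional feature vectors with class-dependent means.
n_classes, n_per_class, dim = 3, 5, 64
means = rng.normal(0.0, 2.0, size=(n_classes, dim))
train = {c: means[c] + rng.normal(0.0, 0.5, size=(n_per_class, dim))
         for c in range(n_classes)}

def fit_gaussian(x, eps=1e-2):
    """Fit a diagonal Gaussian to the few samples of one class."""
    mu = x.mean(axis=0)
    var = x.var(axis=0) + eps  # variance floor guards against tiny n
    return mu, var

def log_likelihood(x, mu, var):
    """Diagonal-Gaussian log-density; stand-in for a per-class ELBO score."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

models = {c: fit_gaussian(train[c]) for c in range(n_classes)}

def classify(x):
    # With a uniform class prior, the maximum-posterior class is the one
    # with the largest (approximate) class-conditional log-likelihood.
    scores = {c: log_likelihood(x, *models[c]) for c in models}
    return max(scores, key=scores.get)

test_point = means[1] + rng.normal(0.0, 0.5, size=dim)  # fresh class-1 sample
print(classify(test_point))
```

In the paper's setting, each diagonal Gaussian would be replaced by a small VAE trained on that class's features, with the ELBO serving as the approximate log-likelihood in `classify`.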
DOI: 10.1007/s11760-025-04376-1