Sample reconstruction with deep autoencoder for one sample per person face recognition

Detailed bibliography
Published in: IET Computer Vision, Vol. 11, No. 6, pp. 471-478
Main authors: Zhang, Yan; Peng, Hua
Format: Journal Article
Language: English
Published: The Institution of Engineering and Technology; Wiley, 01.09.2017
ISSN: 1751-9632, 1751-9640
Description
Summary: One sample per person (OSPP) face recognition is a challenging problem in the face recognition community. The lack of samples is the main reason most algorithms fail in OSPP. In this study, the authors propose a new algorithm that uses a deep autoencoder to generalise the intra-class variations of multi-sample subjects to single-sample subjects and reconstruct new samples. In the proposed algorithm, a generalised deep autoencoder is first trained with all images in the gallery; a class-specific deep autoencoder (CDA) is then fine-tuned for each single-sample subject with its single sample. Samples of the multi-sample subject that is most similar to the single-sample subject are input to the corresponding CDA to reconstruct new samples. For classification, minimum L2 distance, principal component analysis, a sparse representation-based classifier and softmax regression are used. Experiments on the Extended Yale Face Database B, the AR database and the CMU PIE database are provided to show the validity of the proposed algorithm.
DOI: 10.1049/iet-cvi.2016.0322
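
The summary describes a three-stage pipeline: train a generalised deep autoencoder on the whole gallery, fine-tune it into a class-specific deep autoencoder (CDA) for each single-sample subject, then reconstruct new samples by passing images of the most similar multi-sample subject through that CDA. The sketch below is a minimal illustration of such a pipeline, not the authors' implementation: PyTorch, the fully connected architecture and layer sizes, the epoch counts and learning rates, the pixel-space L2 similarity used to pick the closest multi-sample subject, and the random toy data are all assumptions made for illustration.

```python
# Illustrative sketch only: architecture, training schedule and the pixel-space
# L2 similarity used to select the closest multi-sample subject are assumptions,
# not the paper's exact settings.
import copy
import torch
import torch.nn as nn
import torch.optim as optim


class DeepAutoencoder(nn.Module):
    """Fully connected deep autoencoder over flattened face images."""

    def __init__(self, dim=32 * 32, hidden=(512, 128, 64)):
        super().__init__()
        enc, d = [], dim
        for h in hidden:
            enc += [nn.Linear(d, h), nn.ReLU()]
            d = h
        dec = []
        for h in reversed((dim,) + hidden[:-1]):
            dec += [nn.Linear(d, h), nn.ReLU() if h != dim else nn.Sigmoid()]
            d = h
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train(model, images, epochs, lr=1e-3):
    """Minimise reconstruction MSE on a batch of flattened images."""
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), images)
        loss.backward()
        opt.step()
    return model


# --- Stage 1: generalised deep autoencoder trained on the whole gallery ---
gallery = torch.rand(200, 32 * 32)           # all gallery images (toy data)
gda = train(DeepAutoencoder(), gallery, epochs=50)

# --- Stage 2: class-specific deep autoencoder (CDA) per single-sample subject ---
single_sample = torch.rand(1, 32 * 32)       # the subject's only image
cda = train(copy.deepcopy(gda), single_sample, epochs=20, lr=1e-4)

# --- Stage 3: reconstruct new samples for the single-sample subject ---
# Pick the multi-sample subject whose images are closest (L2 in pixel space)
# to the single sample, then push its images through the CDA.
multi_subjects = {sid: torch.rand(5, 32 * 32) for sid in range(10)}
closest = min(multi_subjects,
              key=lambda sid: torch.cdist(single_sample,
                                          multi_subjects[sid]).min().item())
with torch.no_grad():
    new_samples = cda(multi_subjects[closest])   # synthetic training samples
print(new_samples.shape)
```

The reconstructed samples would then be added to the single-sample subject's gallery before applying a classifier such as minimum L2 distance or softmax regression, as listed in the summary; those classification steps are omitted here.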