Sample reconstruction with deep autoencoder for one sample per person face recognition

Bibliographic Details
Published in: IET Computer Vision, Vol. 11, No. 6, pp. 471-478
Main Authors: Zhang, Yan; Peng, Hua
Format: Journal Article
Language:English
Published: The Institution of Engineering and Technology 01.09.2017
Wiley
ISSN: 1751-9632, 1751-9640
Description
Summary: One sample per person (OSPP) face recognition is a challenging problem in the face recognition community. The lack of samples is the main reason most algorithms fail in OSPP. In this study, the authors propose a new algorithm that generalises intra-class variations of multi-sample subjects to single-sample subjects by means of a deep autoencoder and reconstructs new samples. In the proposed algorithm, a generalised deep autoencoder is first trained with all images in the gallery; a class-specific deep autoencoder (CDA) is then fine-tuned for each single-sample subject with its single sample. Samples of the multi-sample subject that is most similar to the single-sample subject are input to the corresponding CDA to reconstruct new samples. For classification, minimum L2 distance, principal component analysis, a sparse representation-based classifier and softmax regression are used. Experiments on the Extended Yale Face Database B, the AR database and the CMU PIE database demonstrate the validity of the proposed algorithm.
DOI: 10.1049/iet-cvi.2016.0322
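The summary above walks through the reconstruction pipeline at a high level: a generalised deep autoencoder trained on the whole gallery, a class-specific deep autoencoder (CDA) fine-tuned on each single sample, and new samples obtained by passing the most similar multi-sample subject's images through that CDA. The following is a minimal sketch of that reconstruction stage in PyTorch; every concrete choice in it (fully connected layers, sigmoid activations, layer widths, Adam optimisation, epoch counts, mean-L2 subject matching) is an assumption made for illustration and is not taken from the paper.

```python
# Sketch of the pipeline in the summary: generalised autoencoder -> per-subject
# fine-tuned CDA -> reconstruction from the nearest multi-sample subject.
# Architecture and training hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn


def build_autoencoder(dim: int) -> nn.Module:
    # Symmetric fully connected encoder/decoder; depth and widths are assumed.
    return nn.Sequential(
        nn.Linear(dim, 512), nn.Sigmoid(),
        nn.Linear(512, 128), nn.Sigmoid(),   # bottleneck
        nn.Linear(128, 512), nn.Sigmoid(),
        nn.Linear(512, dim), nn.Sigmoid(),
    )


def train(model: nn.Module, x: torch.Tensor, epochs: int, lr: float) -> None:
    # Plain reconstruction training: minimise the mean squared error x -> x.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        opt.step()


def reconstruct_for_single_sample_subject(
    gallery: torch.Tensor,                  # all gallery images, one row per image
    single_sample: torch.Tensor,            # the one image of the single-sample subject
    multi_sample_sets: list[torch.Tensor],  # images grouped per multi-sample subject
) -> torch.Tensor:
    dim = gallery.shape[1]

    # 1) Generalised deep autoencoder trained on every gallery image.
    gda = build_autoencoder(dim)
    train(gda, gallery, epochs=200, lr=1e-3)

    # 2) Class-specific deep autoencoder (CDA): copy the generalised model and
    #    fine-tune it on the single sample only.
    cda = copy.deepcopy(gda)
    train(cda, single_sample.unsqueeze(0), epochs=50, lr=1e-4)

    # 3) Pick the multi-sample subject most similar to the single sample
    #    (here: smallest mean L2 distance between its images and the sample).
    dists = [torch.cdist(single_sample.unsqueeze(0), s).mean() for s in multi_sample_sets]
    nearest = multi_sample_sets[int(torch.argmin(torch.stack(dists)))]

    # 4) Feed the nearest subject's images through the CDA; the outputs serve
    #    as reconstructed samples for the single-sample subject.
    with torch.no_grad():
        return cda(nearest)
```

In this reading, the returned reconstructions would be added to the single-sample subject's gallery entry before applying one of the classifiers named in the summary (minimum L2 distance, principal component analysis, a sparse representation-based classifier, or softmax regression).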