A Two-Phase Test Sample Sparse Representation Method for Use With Face Recognition

Detailed bibliography
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 21, No. 9, pp. 1255-1262
Main authors: Yong Xu, D. Zhang, Jian Yang, Jing-Yu Yang
Format: Journal Article
Language: English
Published: New York, NY: IEEE, 1 September 2011
ISSN: 1051-8215, 1558-2205
Description
Summary: In this paper, we propose a two-phase test sample representation method for face recognition. The first phase represents the test sample as a linear combination of all the training samples and exploits the representation ability of each training sample to determine M "nearest neighbors" for the test sample. The second phase represents the test sample as a linear combination of the M determined nearest neighbors and uses the representation result to perform classification. The method rests on the following assumption: the test sample and some of its neighbors are probably from the same class. The first phase therefore detects the training samples that are far from the test sample, and we assume that these samples have no effect on the ultimate classification decision, which helps to classify the test sample accurately. We also give a probabilistic interpretation of the proposed method. A number of face recognition experiments show that our method performs very well.
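
To make the two phases concrete, the following is a minimal NumPy sketch of the procedure as the summary describes it. It is not the authors' implementation: the function name tptsr_classify, the ridge term mu (added for numerical stability), the default M, and the per-sample deviation measure ||y - a_i x_i|| are assumptions made for illustration.

    import numpy as np

    def tptsr_classify(X, labels, y, M=10, mu=0.01):
        """Two-phase test sample representation classifier (sketch).

        X      : (d, n) array; each column is one vectorized training face
        labels : (n,) integer array; class label of each training sample
        y      : (d,) array; the vectorized test sample
        M      : number of "nearest neighbors" kept after phase 1
        mu     : small ridge term for numerical stability (assumed)
        """
        d, n = X.shape

        # Phase 1: represent y as a linear combination of ALL training
        # samples, y ~ X a, via regularized least squares.
        a = np.linalg.solve(X.T @ X + mu * np.eye(n), X.T @ y)

        # Deviation of each sample's contribution a_i * x_i from y; the M
        # samples with the smallest deviations act as "nearest neighbors".
        deviations = np.linalg.norm(y[:, None] - X * a, axis=0)
        nn = np.argsort(deviations)[:M]

        # Phase 2: represent y using only the M neighbors, y ~ X_M b.
        X_M = X[:, nn]
        b = np.linalg.solve(X_M.T @ X_M + mu * np.eye(M), X_M.T @ y)

        # Classify: the class whose neighbors' combined contribution
        # reconstructs y with the smallest residual wins.
        best_class, best_res = None, np.inf
        for c in np.unique(labels[nn]):
            mask = labels[nn] == c
            residual = np.linalg.norm(y - X_M[:, mask] @ b[mask])
            if residual < best_res:
                best_class, best_res = c, residual
        return best_class

In this sketch a training sample that poorly represents y is simply dropped before phase 2, which mirrors the summary's assumption that distant training samples should have no effect on the final decision; a typical call would look like tptsr_classify(X, labels, y, M=30).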
DOI: 10.1109/TCSVT.2011.2138790