Face recognition via Deep Stacked Denoising Sparse Autoencoders (DSDSA)


Bibliographic Details
Published in: Applied Mathematics and Computation, Vol. 355, pp. 325–342
Main Authors: Görgel, Pelin; Simsek, Ahmet
Format: Journal Article
Language: English
Published: Elsevier Inc., 15 August 2019
Subjects:
ISSN: 0096-3003, 1873-5649
Online Access: Full text
Description
Summary: Face recognition remains an active research topic owing to many sources of variation, including differences in pose, illumination, expression, occlusion, and scene. Recently, deep learning methods have achieved remarkable results in image representation and recognition. Such methods automatically extract salient features from images, reducing dimensionality and yielding a more useful representation of the raw data. In this paper, the proposed face recognition system, called Deep Stacked Denoising Sparse Autoencoders (DSDSA), combines deep neural networks, sparse autoencoders, and a denoising task. An autoencoder is a neural network that learns an approximation of the identity function; constraints placed on it force it to learn fine representations of the inputs. Autoencoders are particularly capable of interpreting input data and thereby produce more meaningful results, and they have been applied successfully in many object recognition fields. For the classification task, two classifiers are used: a multi-class SVM and a Softmax classifier. Experimental results on well-known face databases, including ORL, Yale, Caltech, and a subset of PubFig, show that the proposed system yields promising performance and achieves comparable accuracy.
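
The summary describes the building block but gives no implementation details. Purely as an illustration, and not the authors' exact DSDSA architecture, one denoising sparse autoencoder layer of the kind described could be sketched in PyTorch as below; the class name, layer sizes, noise level, and sparsity weight are invented for the example, and an L1 penalty stands in for the KL-divergence sparsity penalty commonly used with sparse autoencoders.

    # Minimal sketch of ONE denoising sparse autoencoder layer. DSDSA
    # stacks several such layers; all sizes and weights here are
    # illustrative assumptions, not values taken from the paper.
    import torch
    import torch.nn as nn

    class DenoisingSparseAE(nn.Module):  # hypothetical name
        def __init__(self, n_in=1024, n_hidden=256,
                     noise_std=0.3, sparsity_weight=1e-3):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
            self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())
            self.noise_std = noise_std
            self.sparsity_weight = sparsity_weight

        def forward(self, x):
            # Denoising: corrupt the input, but reconstruct the CLEAN input.
            x_noisy = x + self.noise_std * torch.randn_like(x)
            h = self.encoder(x_noisy)
            return self.decoder(h), h

        def loss(self, x):
            x_hat, h = self(x)
            recon = nn.functional.mse_loss(x_hat, x)
            # Sparsity: L1 penalty on the hidden activations (a common
            # stand-in for a KL-divergence sparsity constraint).
            return recon + self.sparsity_weight * h.abs().mean()

    # Layer-wise pretraining on random stand-in data (64 "face vectors").
    model = DenoisingSparseAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 1024)
    for _ in range(5):
        opt.zero_grad()
        model.loss(x).backward()
        opt.step()

In the stacked scheme the title implies, each subsequent layer would typically be trained on the previous layer's hidden codes, and the final codes would feed the Softmax classifier or the multi-class SVM mentioned in the summary.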
DOI: 10.1016/j.amc.2019.02.071