A robust feature extraction with optimized DBN-SMO for facial expression recognition


Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 79, No. 29-30, pp. 21487-21512
Main Authors: Vedantham, Ramachandran; Reddy, Edara Sreenivasa
Format: Journal Article
Language: English
Published: New York: Springer US, 01.08.2020
Springer Nature B.V.
Subjects:
ISSN: 1380-7501, 1573-7721
Online Access: Full text
Description
Summary: Facial expression is the most common means used to convey human emotions. Because faces differ from one individual to another with ethnicity and age, automatic facial expression analysis and recognition is a difficult task. To address this difficulty, this paper proposes robust feature extraction with optimized DBN-SMO for facial expression recognition (FER). Initially, a pre-processing stage is performed; then the texture descriptors Local Phase Quantization (LPQ), Weber Local Descriptor (WLD), and Local Binary Pattern (LBP) are used to extract features. In addition, Discrete Cosine Transform (DCT) features are extracted to improve the recognition rate and reduce the computational cost. After that, Principal Component Analysis (PCA) is used for dimension reduction. Finally, a Deep Belief Network (DBN) with the Spider Monkey Optimization (SMO) algorithm is used to classify the basic expressions for FER. Here, the SMO algorithm is used to optimize the bias factors and initial connection weights that control the efficiency of the DBN. The proposed work is implemented in the MATLAB environment. Experiments were performed on the Karolinska Directed Emotional Faces (KDEF), Man-Machine Interaction (MMI), Extended Cohn-Kanade (CK+), Extended Denver Intensity of Spontaneous Facial Actions (DISFA+), and Oulu-Chinese Academy of Science Institute of Automation (Oulu-CASIA) datasets, yielding classification accuracies of 97.93%, 95.42%, 97.58%, 95.76%, and 92.38%, respectively, higher than other current methods for seven-class emotion recognition.
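The feature-extraction and dimension-reduction stages described in the abstract can be illustrated with a minimal sketch. This is not the authors' MATLAB implementation: it shows only the LBP texture descriptor (one of the three descriptors named above) followed by a PCA projection, and every function name here is hypothetical, introduced solely for this example.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: encode each interior pixel by
    thresholding its 8 neighbors against the center value."""
    img = img.astype(np.int16)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    # neighbor offsets (dy, dx), one bit per neighbor
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalized histogram of LBP codes: the texture feature vector."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal axes,
    computed via SVD of the mean-centered data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T
```

In the paper's pipeline, vectors like these (concatenated with LPQ, WLD, and DCT features) would then be fed to the DBN classifier whose weights and biases are tuned by SMO.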
DOI:10.1007/s11042-020-08901-x