Facial Expression Recognition based on Convolutional Neural Network with Sparse Representation

Bibliographic Details
Published in: 2022 8th International Conference on Systems and Informatics (ICSAI), pp. 1-6
Main authors: Liu, Xuan; Ma, Jiachen; Wang, Qiang
Format: Conference paper
Language: English
Published: IEEE, 10 December 2022
Description
Summary: Facial Expression Recognition (FER) in the wild using Convolutional Neural Networks (CNNs) has been a challenge for years because of significant intra-class variances and inter-class similarities. At the same time, facial expression recognition in the wild is vital for human-computer interaction and has numerous applications. Enhancing the ability to extract discriminative features is one approach to solving this issue. In this work, a sparse transform is used to improve a CNN's feature extraction ability without adding to the network's computational load. We use a sparse representation layer, built with the Haar wavelet transform or the shearlet transform, placed before the convolutional layers of a standard CNN. With the proposed sparse representation layers, we introduce a VGGNet and an AlexNet architecture and conduct experiments on the FER2013 dataset without the use of additional training data. The experimental results demonstrate that the wavelet transform's sparse representation layer can improve FER performance without imposing an excessive computational burden. We achieved a testing accuracy of 73.25% on the FER2013 dataset using VGGNet paired with a sparse representation layer built with the wavelet transform, which is among the best results for a single network.
DOI: 10.1109/ICSAI57119.2022.10005481
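
The summary above describes prepending a fixed, wavelet-based sparse representation layer to the convolutional layers of a standard CNN. The following is a minimal PyTorch sketch of that idea, not the authors' code: the HaarSparseLayer and WaveletCNN names, the small toy backbone, and all channel sizes are illustrative assumptions (the paper pairs the layer with VGGNet and AlexNet backbones).

```python
# Minimal sketch of a fixed Haar wavelet "sparse representation layer"
# placed in front of an ordinary CNN. All names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarSparseLayer(nn.Module):
    """One-level 2D Haar DWT implemented as a stride-2 convolution with
    fixed kernels. For a C-channel HxW input it returns 4*C channels of
    size H/2 x W/2: the LL, LH, HL, HH subbands of each input channel."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        h = 0.5  # normalization factor for the 2x2 Haar filters
        ll = torch.tensor([[h, h], [h, h]])     # approximation
        lh = torch.tensor([[h, h], [-h, -h]])   # horizontal detail
        hl = torch.tensor([[h, -h], [h, -h]])   # vertical detail
        hh = torch.tensor([[h, -h], [-h, h]])   # diagonal detail
        kernels = torch.stack([ll, lh, hl, hh]).unsqueeze(1)  # (4,1,2,2)
        # One set of 4 filters per input channel (grouped convolution).
        weight = kernels.repeat(in_channels, 1, 1, 1)          # (4C,1,2,2)
        self.register_buffer("weight", weight)  # fixed, not a parameter
        self.in_channels = in_channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fixed weights, no gradient updates: the layer adds no parameters.
        return F.conv2d(x, self.weight, stride=2, groups=self.in_channels)

class WaveletCNN(nn.Module):
    """Toy FER classifier: Haar layer followed by a small conv stack."""
    def __init__(self, num_classes: int = 7):  # FER2013 has 7 expressions
        super().__init__()
        self.sparse = HaarSparseLayer(in_channels=1)   # 48x48 -> 4 x 24x24
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 6 * 6, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(self.sparse(x))
        return self.classifier(x.flatten(1))

# FER2013 images are 48x48 grayscale; one forward pass as a smoke test.
model = WaveletCNN()
logits = model(torch.randn(8, 1, 48, 48))
print(logits.shape)  # torch.Size([8, 7])
```

Because the Haar kernels are registered as a fixed buffer rather than as trainable parameters, the layer contributes no learnable weights, which is consistent with the summary's claim that the sparse transform improves feature extraction without adding to the network's computational load.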