Locality-constrained linear coding based bi-layer model for multi-view facial expression recognition



Published in: Neurocomputing (Amsterdam), Volume 239, pp. 143-152
Main authors: Wu, Jianlong; Lin, Zhouchen; Zheng, Wenming; Zha, Hongbin
Format: Journal Article
Language: English
Published: Elsevier B.V., 24 May 2017
ISSN: 0925-2312, 1872-8286
Description
Summary:
• Apply locality-constrained linear coding to multi-view facial expression recognition.
• A novel locality-constrained linear coding based bi-layer model (LLCBL) is proposed.
• LLCBL preserves both the relationship between views and the characteristics of every view.
• View-dependent models are constructed in LLCBL to eliminate the influence of pose.
• LLCBL achieves superior performance.

Multi-view facial expression recognition is a challenging and active research area in computer vision. In this paper, we propose a simple yet effective method, called the locality-constrained linear coding based bi-layer (LLCBL) model, to learn a discriminative representation for multi-view facial expression recognition. To address the issue of large pose variations, locality-constrained linear coding is adopted to construct an overall bag-of-features model, which is then used to extract overall features and to estimate pose in the first layer. In the second layer, we establish one specific view-dependent model for each view. Once the pose of the facial image is known, we use the corresponding view-dependent model in the second layer to extract further features. By combining all the features from these two layers, we obtain a unified representation of the image. To evaluate the proposed approach, we conduct extensive experiments on both the BU-3DFE and Multi-PIE databases. Experimental results show that our approach outperforms state-of-the-art methods.
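The coding step at the heart of both layers can be illustrated with a minimal sketch. The snippet below is not the authors' code; it is an assumed NumPy implementation of the standard approximated locality-constrained linear coding step (k-nearest-neighbour codeword selection followed by a small constrained least-squares solve), with the function name `llc_code`, the regularizer `beta`, and all parameter values chosen here for illustration only:

```python
import numpy as np

def llc_code(x, B, k=5, beta=1e-4):
    """Approximate locality-constrained linear coding of one descriptor.

    x : (d,) local feature descriptor.
    B : (M, d) codebook of M codewords.
    Returns a code c of length M with c.sum() == 1, nonzero only
    on the k codewords nearest to x.
    """
    # 1. Locality constraint: keep only the k nearest codewords.
    d2 = np.sum((B - x) ** 2, axis=1)
    idx = np.argsort(d2)[:k]
    # 2. Solve min ||x - B[idx].T @ w||^2 subject to w.sum() == 1
    #    via the local covariance of the shifted codewords.
    z = B[idx] - x                        # shift so x is the origin
    C = z @ z.T                           # k x k local covariance
    C += beta * np.trace(C) * np.eye(k)   # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                          # enforce the sum-to-one constraint
    # 3. Scatter the local weights back into the full-length code.
    c = np.zeros(B.shape[0])
    c[idx] = w
    return c
```

Pooling such codes over an image yields the bag-of-features representation the paper builds on: in the first layer the codes feed overall feature extraction and pose estimation, and in the second layer the same coding is applied with a view-specific codebook.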
DOI: 10.1016/j.neucom.2017.02.012