Multi‐view facial action unit detection via deep feature enhancement

Detailed bibliography
Published in: Electronics Letters, Volume 57, Issue 25, pp. 970-972
Main authors: Tang, Chuangao; Lu, Cheng; Zheng, Wenming; Zong, Yuan; Li, Sunan
Format: Journal Article
Language: English
Published: Stevenage: John Wiley & Sons, Inc., 01.12.2021
ISSN: 0013-5194, 1350-911X
Description
Summary: Multi‐view facial action unit (AU) analysis has been a challenging research topic due to multiple disturbing variables, including subject identity biases, varying facial action unit intensities, facial occlusions and non‐frontal head poses. A deep feature enhancement (DFE) framework is presented to tackle some of these coupled, complex disturbing variables for multi‐view facial action unit detection. The authors' DFE framework is a novel end‐to‐end three‐stage feature learning model that takes subject identity biases, dynamic facial changes and head pose into consideration. It contains three feature enhancement modules: coarse‐grained local and holistic spatial feature learning (LHSF), spatio‐temporal feature learning (STF) and head‐pose feature disentanglement (FD). Experimental results show that the proposed method achieved state‐of‐the‐art recognition performance on the FERA2017 dataset. The code is released at http://aip.seu.edu.cn/cgtang/.
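
The abstract names the three DFE modules but not their internals. Below is a minimal, illustrative PyTorch sketch of such a three-stage pipeline (LHSF -> STF -> FD). The layer sizes, the GRU temporal model, the two-branch disentanglement head and the output dimensions (10 AUs, 9 head poses) are all assumptions made for illustration; this is not the authors' released implementation, which is available at http://aip.seu.edu.cn/cgtang/.

# Hypothetical sketch of a three-stage pipeline matching the module names in
# the abstract (LHSF -> STF -> FD). All internals are illustrative assumptions,
# NOT the authors' released code.
import torch
import torch.nn as nn


class LHSF(nn.Module):
    """Coarse-grained local and holistic spatial feature learning (assumed small CNN backbone)."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )

    def forward(self, frames):                       # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))  # per-frame spatial features
        return feats.view(b, t, -1)                  # (B, T, out_dim)


class STF(nn.Module):
    """Spatio-temporal feature learning over the frame sequence (assumed GRU)."""
    def __init__(self, dim=256):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, seq):
        out, _ = self.rnn(seq)
        return out[:, -1]                            # last-step summary, (B, dim)


class FD(nn.Module):
    """Head-pose feature disentanglement: split features into an AU branch and a
    pose branch, so pose supervision can push pose information out of the AU
    features (illustrative two-branch design)."""
    def __init__(self, dim=256, num_aus=10, num_poses=9):
        super().__init__()
        self.au_branch = nn.Linear(dim, dim)
        self.pose_branch = nn.Linear(dim, dim)
        self.au_head = nn.Linear(dim, num_aus)       # multi-label AU logits
        self.pose_head = nn.Linear(dim, num_poses)   # head-pose classification logits

    def forward(self, feat):
        return self.au_head(self.au_branch(feat)), self.pose_head(self.pose_branch(feat))


class DFESketch(nn.Module):
    """End-to-end chain of the three stages described in the abstract."""
    def __init__(self):
        super().__init__()
        self.lhsf, self.stf, self.fd = LHSF(), STF(), FD()

    def forward(self, frames):
        return self.fd(self.stf(self.lhsf(frames)))


if __name__ == "__main__":
    model = DFESketch()
    clip = torch.randn(2, 8, 3, 112, 112)            # 2 clips of 8 frames each
    au_logits, pose_logits = model(clip)
    print(au_logits.shape, pose_logits.shape)        # (2, 10) and (2, 9)

In a training sketch like this, the AU head would be optimized with a multi-label loss (e.g. binary cross-entropy) while the pose head provides an auxiliary signal for disentangling head-pose variation from the AU features.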
Bibliography: Chuangao Tang and Cheng Lu contributed equally to this article.
DOI: 10.1049/ell2.12322