Multitask Autoencoder Model for Recovering Human Poses


Detailed Bibliography
Published in: IEEE Transactions on Industrial Electronics (1982), Vol. 65, No. 6, pp. 5060-5068
Main Authors: Yu, Jun; Hong, Chaoqun; Rui, Yong; Tao, Dacheng
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2018
ISSN: 0278-0046, 1557-9948
Description
Summary: Human pose recovery in videos is usually conducted by matching 2-D image features and retrieving relevant 3-D human poses. In this retrieval process, the mapping between images and poses is critical. Traditional methods model this mapping as either local joint detection or global joint localization, which limits their recovery performance, since these two tasks are actually unified. In this paper, we propose a novel pose recovery framework that simultaneously learns the tasks of joint localization and joint detection. To obtain this framework, multiple manifold learning is used and the shared parameters are calculated. With them, multiple manifold regularizers are integrated, and generalized eigendecomposition is utilized to optimize the parameters. In this way, pose recovery is boosted by both global mapping and local refinement. Experimental results on two popular datasets demonstrate that the recovery error is reduced by 10%-20%, confirming the performance improvement of the proposed method.
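
The summary says that multiple manifold regularizers are integrated and that the shared parameters are optimized by generalized eigendecomposition. The following minimal Python sketch shows one common way such an objective reduces to a generalized eigenproblem; the k-NN Laplacian construction, the function names graph_laplacian and shared_mapping, and the equal task weights in the usage example are illustrative assumptions, not the paper's exact formulation.

import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def graph_laplacian(F, k=5):
    # Unnormalized Laplacian L = D - A of a symmetrized k-NN graph over rows of F.
    A = kneighbors_graph(F, n_neighbors=k, mode="connectivity").toarray()
    A = np.maximum(A, A.T)
    return np.diag(A.sum(axis=1)) - A

def shared_mapping(X, task_outputs, task_weights, dim=10, reg=1e-6):
    # X: (n, d) 2-D image features; task_outputs: one (n, *) target matrix per
    # task (e.g., joint detection scores and joint localization coordinates);
    # task_weights: their relative importance. Combines the per-task Laplacians
    # into a single manifold regularizer and solves A w = lambda B w.
    L = sum(w * graph_laplacian(F) for w, F in zip(task_weights, task_outputs))
    A = X.T @ L @ X                          # integrated manifold regularizer
    B = X.T @ X + reg * np.eye(X.shape[1])   # scatter matrix, kept positive definite
    _, vecs = eigh(A, B)                     # generalized eigendecomposition (ascending)
    return vecs[:, :dim]                     # smallest eigenvalues = smoothest mapping

# Toy usage: 200 frames, 100-D image features, two task-specific targets.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))
detection = rng.standard_normal((200, 14))      # hypothetical per-joint detection scores
localization = rng.standard_normal((200, 42))   # hypothetical 14 joints x 3-D coordinates
W = shared_mapping(X, [detection, localization], [0.5, 0.5])
print(W.shape)  # (100, 10)

Taking the eigenvectors of the smallest generalized eigenvalues minimizes the trace-ratio objective tr(W^T A W) / tr(W^T B W), so the learned mapping is smooth along every task manifold at once, which is one way the two tasks can share parameters as the summary describes.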
DOI: 10.1109/TIE.2017.2739691