Multitask Autoencoder Model for Recovering Human Poses
| Published in: | IEEE transactions on industrial electronics (1982) Vol. 65; no. 6; pp. 5060 - 5068 |
|---|---|
| Main Authors: | , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE, 01.06.2018 |
| ISSN: | 0278-0046, 1557-9948 |
| Online Access: | Get full text |
| Summary: | Human pose recovery in videos is usually conducted by matching 2-D image features and retrieving relevant 3-D human poses. In the retrieval process, the mapping between images and poses is critical. Traditional methods model this mapping as either local joint detection or global joint localization, which limits their recovery performance, since the two tasks are actually unified. In this paper, we propose a novel pose recovery framework that simultaneously learns the tasks of joint localization and joint detection. To obtain this framework, multiple manifold learning is used and the shared parameter is calculated. With these, multiple manifold regularizers are integrated, and generalized eigendecomposition is utilized to optimize the parameters. In this way, pose recovery is boosted by both global mapping and local refinement. Experimental results on two popular datasets demonstrate that the recovery error is reduced by 10%-20%, which confirms the performance improvement of the proposed method. |
|---|---|
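The summary above describes integrating multiple manifold regularizers and solving for shared parameters via generalized eigendecomposition. A minimal sketch of that optimization pattern in Python (the synthetic features, the RBF affinity graph, the mixing weight `alpha`, and the projection size are all illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy feature matrix: 100 samples of 10-D image features (synthetic stand-in).
X = rng.standard_normal((100, 10))

# Graph Laplacian of an affinity graph acts as a manifold regularizer;
# a dense RBF affinity is used here for simplicity.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
L = np.diag(W.sum(1)) - W          # unnormalized graph Laplacian

# Two task-specific regularizers combined with a shared weight (hypothetical mix).
alpha = 0.5
M = alpha * L + (1 - alpha) * np.eye(len(X))

# Generalized eigenproblem: (X^T M X) w = lambda (X^T X + eps I) w.
A = X.T @ M @ X
B = X.T @ X + 1e-6 * np.eye(X.shape[1])
vals, vecs = eigh(A, B)            # eigenvalues returned in ascending order

# The eigenvectors with the smallest eigenvalues give the shared projection
# (the directions smoothest with respect to the combined regularizers).
proj = vecs[:, :3]
print(proj.shape)  # (10, 3)
```

`scipy.linalg.eigh` solves the symmetric generalized eigenproblem directly when given the second matrix, which is why the regularized Gram matrix `B` is kept positive definite with a small ridge term.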
| DOI: | 10.1109/TIE.2017.2739691 |