Progressively Generating Better Initial Guesses Towards Next Stages for High-Quality Human Motion Prediction

Bibliographic Details
Published in: Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online), pp. 6427 - 6436
Main Authors: Ma, Tiezheng; Nie, Yongwei; Long, Chengjiang; Zhang, Qing; Li, Guiqing
Format: Conference paper
Language: English
Published: IEEE, 01.06.2022
ISSN: 1063-6919
Description
Summary: This paper presents a high-quality human motion prediction method that accurately predicts future human poses given observed ones. Our method is based on the observation that a good "initial guess" of the future poses is very helpful in improving the forecasting accuracy. This motivates us to propose a novel two-stage prediction framework, including an init-prediction network that just computes the good guess and then a formal-prediction network that predicts the target future poses based on the guess. More importantly, we extend this idea further and design a multi-stage prediction framework where each stage predicts an initial guess for the next stage, which brings more performance gain. To fulfill the prediction task at each stage, we propose a network comprising Spatial Dense Graph Convolutional Networks (S-DGCN) and Temporal Dense Graph Convolutional Networks (T-DGCN). Alternately executing the two networks helps extract spatiotemporal features over the global receptive field of the whole pose sequence. All of the above design choices together make our method outperform previous approaches by large margins: 6%-7% on Human3.6M, 5%-10% on CMU-MoCap, and 13%-16% on 3DPW. Code is available at https://github.com/705062791/PGBIG.
DOI: 10.1109/CVPR52688.2022.00633
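
The abstract describes a multi-stage framework in which each stage refines an "initial guess" of the pose sequence by alternating a spatial dense GCN (joints as nodes) with a temporal dense GCN (frames as nodes). Below is a minimal PyTorch sketch of that idea only; all class names, tensor shapes, the stage count, and the last-pose-repeat initialization are illustrative assumptions, not the authors' implementation (see the linked GitHub repository for the official code).

```python
# Minimal sketch of a multi-stage "initial guess" refiner with alternating
# spatial/temporal dense GCNs. Names and defaults are illustrative assumptions.
import torch
import torch.nn as nn


class DenseGCN(nn.Module):
    """Graph convolution with a fully learnable (dense) adjacency over n_nodes."""

    def __init__(self, n_nodes, feat_dim):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(n_nodes) + 0.01 * torch.randn(n_nodes, n_nodes))
        self.weight = nn.Linear(feat_dim, feat_dim)

    def forward(self, x):                      # x: (batch, n_nodes, feat_dim)
        return torch.tanh(self.adj @ self.weight(x))


class Stage(nn.Module):
    """One stage: spatial GCN over joints, then temporal GCN over frames,
    producing a residual refinement of the incoming guess."""

    def __init__(self, n_joints, n_frames, coord_dim=3):
        super().__init__()
        self.n_joints, self.n_frames, self.coord_dim = n_joints, n_frames, coord_dim
        self.s_dgcn = DenseGCN(n_joints, n_frames * coord_dim)   # joints as graph nodes
        self.t_dgcn = DenseGCN(n_frames, n_joints * coord_dim)   # frames as graph nodes

    def forward(self, guess):                  # guess: (batch, n_frames, n_joints, coord_dim)
        b = guess.shape[0]
        x = guess.permute(0, 2, 1, 3).reshape(b, self.n_joints, -1)
        x = self.s_dgcn(x)
        x = x.reshape(b, self.n_joints, self.n_frames, self.coord_dim)
        x = x.permute(0, 2, 1, 3).reshape(b, self.n_frames, -1)
        x = self.t_dgcn(x)
        x = x.reshape(b, self.n_frames, self.n_joints, self.coord_dim)
        return guess + x                       # refined guess passed to the next stage


class MultiStagePredictor(nn.Module):
    """Each stage produces a better initial guess for the next stage;
    the last stage's output is the final prediction."""

    def __init__(self, n_joints=22, n_obs=10, n_fut=25, num_stages=4):
        super().__init__()
        self.n_obs, self.n_fut = n_obs, n_fut
        n_frames = n_obs + n_fut
        self.stages = nn.ModuleList([Stage(n_joints, n_frames) for _ in range(num_stages)])

    def forward(self, observed):               # observed: (batch, n_obs, n_joints, 3)
        # Crude initial guess: repeat the last observed pose over all future frames.
        init = observed[:, -1:].repeat(1, self.n_fut, 1, 1)
        guess = torch.cat([observed, init], dim=1)
        for stage in self.stages:
            guess = stage(guess)
        return guess[:, self.n_obs:]           # predicted future poses


if __name__ == "__main__":
    model = MultiStagePredictor()
    past = torch.randn(2, 10, 22, 3)           # 2 sequences, 10 observed frames, 22 joints
    future = model(past)
    print(future.shape)                        # torch.Size([2, 25, 22, 3])
```

The supervision scheme (e.g. whether intermediate guesses are also penalized against the ground truth) is not specified in the abstract, so this sketch only shows the forward pass of the progressive refinement idea.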