SelfPose: 3D Egocentric Pose Estimation From a Headset Mounted Camera
Saved in:
| Title: | SelfPose: 3D Egocentric Pose Estimation From a Headset Mounted Camera |
|---|---|
| Authors: | Denis Tome, Thiemo Alldieck, Patrick Peluse, Gerard Pons-Moll, Lourdes Agapito, Hernan Badino, Fernando de la Torre |
| Source: | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Publication Status: | Preprint |
| Publisher information: | Institute of Electrical and Electronics Engineers (IEEE), 2023. |
| Year of publication: | 2023 |
| Subjects: | FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Two-dimensional displays, 02 engineering and technology, Cameras, 03 medical and health sciences, 0302 clinical medicine, 0202 electrical engineering, electronic engineering, information engineering, Three-dimensional displays, Training, Head, Pose estimation, Visualization |
| Description: | We present a solution to egocentric 3D body pose estimation from monocular images captured by downward-looking fish-eye cameras installed on the rim of a head-mounted VR device. This unusual viewpoint leads to images with a unique visual appearance, severe self-occlusions, and perspective distortions that result in drastic differences in resolution between the lower and upper body. We propose an encoder-decoder architecture with a novel multi-branch decoder designed to account for the varying uncertainty in 2D predictions. The quantitative evaluation, on synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric approaches. To tackle the lack of labelled data we also introduce a large photo-realistic synthetic dataset. xR-EgoPose offers high-quality renderings of people with diverse skin tones, body shapes, and clothing, performing a range of actions. Our experiments show that the high variability in our new synthetic training corpus leads to good generalization to real-world footage and to state-of-the-art results on real-world datasets with ground truth. Moreover, an evaluation on the Human3.6M benchmark shows that the performance of our method is on par with top-performing approaches on the more classic problem of 3D human pose from a third-person viewpoint. 14 pages. arXiv admin note: substantial text overlap with arXiv:1907.10045 |
| Document type: | Article |
| File description: | application/pdf |
| ISSN: | 1939-3539; 0162-8828 |
| DOI: | 10.1109/tpami.2020.3029700 |
| DOI: | 10.48550/arxiv.2011.01519 |
| Access URL: | https://ieeexplore.ieee.org/ielx7/34/4359286/09217955.pdf https://pubmed.ncbi.nlm.nih.gov/33031034 http://arxiv.org/abs/2011.01519 https://dblp.uni-trier.de/db/journals/corr/corr2011.html#abs-2011-01519 http://arxiv.org/pdf/2011.01519.pdf http://ui.adsabs.harvard.edu/abs/2020arXiv201101519T/abstract https://pubmed.ncbi.nlm.nih.gov/33031034/ https://www.ncbi.nlm.nih.gov/pubmed/33031034 https://arxiv.org/pdf/2011.01519.pdf http://hdl.handle.net/21.11116/0000-0007-7008-2 http://hdl.handle.net/21.11116/0000-0007-700A-0 https://discovery-pp.ucl.ac.uk/id/eprint/10113623/ |
| Rights: | CC BY; CC BY-NC-SA |
| Accession number: | edsair.doi.dedup.....ee9fad12aab2a199a7cc7abf72e57ef5 |
| Database: | OpenAIRE |