Adaptable Anatomical Models for Realistic Bone Motion Reconstruction

Detailed Bibliography
Published in: Computer Graphics Forum, Volume 34, Issue 2, pp. 459-471
Main authors: Zhu, Lifeng; Hu, Xiaoyan; Kavan, Ladislav
Format: Journal Article
Language: English
Publication details: Oxford: Blackwell Publishing Ltd, 01.05.2015
ISSN: 0167-7055, 1467-8659
Description
Summary: We present a system to reconstruct subject-specific anatomy models while relying only on exterior measurements represented by point clouds. Our model combines geometry, kinematics, and skin deformations (skinning). This joint model can be adapted to different individuals without breaking its functionality, i.e., the bones and the skin remain well-articulated after the adaptation. We propose an optimization algorithm which learns the subject-specific (anthropometric) parameters from input point clouds captured using commodity depth cameras. The resulting personalized models can be used to reconstruct the motion of human subjects. We validate our approach on upper and lower limbs, using both synthetic data and recordings of three different human subjects. Our reconstructed bone motion is comparable to results obtained by optical motion capture (Vicon) combined with anatomically-based inverse kinematics (OpenSIM). We demonstrate that our adapted models preserve the joint structure better than previous methods such as OpenSIM or Anatomy Transfer.
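The summary names two standard ingredients: a skinned kinematic model, and an optimization that fits its parameters to captured point clouds. The sketch below is a minimal illustration of both in Python/NumPy, not the authors' implementation; all function and variable names (skin_vertices, fit_residual, and the toy data) are illustrative, and the paper's anthropometric parameters would enter upstream, through the rest-pose geometry and bone transforms.

```python
# Minimal sketch: linear blend skinning plus a nearest-point fitting
# residual of the kind a point-cloud model-fitting optimization minimizes.
# Illustrative only; not the paper's actual formulation.
import numpy as np

def skin_vertices(rest_verts, weights, bone_transforms):
    """Linear blend skinning: each vertex is a weighted blend of the
    positions it would take under each bone's rigid transform.

    rest_verts      : (V, 3) rest-pose vertex positions
    weights         : (V, B) per-vertex bone weights, rows sum to 1
    bone_transforms : (B, 4, 4) world transforms of the B bones
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])        # (V, 4)
    # Position of every vertex under every bone: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Blend the per-bone positions with the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]

def fit_residual(skinned, cloud):
    """Point-to-point residual between the skinned surface and a scan:
    for each scan point, distance to its nearest skinned vertex.
    An optimizer would drive these residuals toward zero by adjusting
    the model parameters that produced `skinned`.
    """
    d = np.linalg.norm(cloud[:, None, :] - skinned[None, :, :], axis=2)
    return d.min(axis=1)

# Tiny smoke test: a 4-vertex strip, two bones, identity transforms,
# which must reproduce the rest pose exactly.
rest = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], float)
w = np.array([[1, 0], [0.7, 0.3], [0.3, 0.7], [0, 1]], float)
T = np.stack([np.eye(4), np.eye(4)])
assert np.allclose(skin_vertices(rest, w, T), rest)
```

In this toy setup, posing a bone (e.g. replacing one identity in T with a rotation) deforms the strip smoothly because the interior vertices blend both bones' transforms; a fit would then minimize fit_residual between the posed vertices and a depth scan.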
Bibliography: Supporting Information
Article ID: CGF12575
DOI: 10.1111/cgf.12575