Adaptable Anatomical Models for Realistic Bone Motion Reconstruction


Bibliographic Details
Published in: Computer Graphics Forum, Vol. 34, No. 2, pp. 459-471
Main authors: Zhu, Lifeng; Hu, Xiaoyan; Kavan, Ladislav
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.05.2015
ISSN: 0167-7055, 1467-8659
Online access: Full text
Description
Abstract: We present a system to reconstruct subject-specific anatomy models while relying only on exterior measurements represented by point clouds. Our model combines geometry, kinematics, and skin deformations (skinning). This joint model can be adapted to different individuals without breaking its functionality, i.e., the bones and the skin remain well-articulated after the adaptation. We propose an optimization algorithm which learns the subject-specific (anthropometric) parameters from input point clouds captured using commodity depth cameras. The resulting personalized models can be used to reconstruct motion of human subjects. We validate our approach for upper and lower limbs, using both synthetic data and recordings of three different human subjects. Our reconstructed bone motion is comparable to results obtained by optical motion capture (Vicon) combined with anatomically-based inverse kinematics (OpenSIM). We demonstrate that our adapted models better preserve the joint structure than previous methods such as OpenSIM or Anatomy Transfer.
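
The abstract names two technical ingredients: a skinning model that ties bone motion to the skin surface, and an optimization that fits this model to exterior point clouds. As a rough illustration only, the Python (NumPy) sketch below shows standard linear blend skinning and a one-directional point-to-point residual of the kind such a fitting objective might include; the function names are hypothetical, and linear blend skinning is assumed here for illustration rather than taken from the paper.

import numpy as np

def linear_blend_skinning(rest_vertices, weights, bone_transforms):
    # rest_vertices:   (V, 3) rest-pose skin positions
    # weights:         (V, B) skinning weights; each row sums to 1
    # bone_transforms: (B, 4, 4) homogeneous per-bone transforms
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])          # (V, 4)
    # Transform every vertex by every bone, then blend by the weights.
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)  # (B, V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]

def scan_to_model_residual(deformed_vertices, scan_points):
    # One direction of an ICP-style point-to-point term: for each point
    # of the exterior depth scan, the distance to the nearest skin vertex.
    d = np.linalg.norm(scan_points[:, None, :]
                       - deformed_vertices[None, :, :], axis=-1)
    return d.min(axis=1).sum()

A subject-adaptation step in the spirit of the abstract would minimize such a residual jointly over pose (the bone transforms) and anthropometric shape parameters while keeping the bones and skin well-articulated; the actual joint model, objective, and constraints are given in the paper itself.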
Bibliography: Supporting Information
Article ID: CGF12575
DOI: 10.1111/cgf.12575