Pose‐to‐Motion: Cross‐Domain Motion Retargeting with Pose Prior

Detailed description

Bibliographic details
Published in: Computer Graphics Forum, Vol. 43, Issue 8
Main authors: Qingqing Zhao, Peizhuo Li, Wang Yifan, Olga Sorkine‐Hornung, Gordon Wetzstein
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 1 December 2024
Subject terms: Computer graphics; Computer vision; Datasets; Image acquisition; Motion capture; Motion graphics; Motion perception; Synthesis
ISSN: 0167-7055, 1467-8659
Online access: Full text
Abstract
Creating plausible motions for a diverse range of characters is a long‐standing goal in computer graphics. Current learning‐based motion synthesis methods rely on large‐scale motion datasets, which are often difficult, if not impossible, to acquire. Pose data, on the other hand, is far more accessible, since static posed characters are easier to create and can even be extracted from images using recent advances in computer vision. In this paper, we tap into this alternative data source and introduce a neural motion synthesis approach based on retargeting: it generates plausible motion for characters that have only pose data by transferring motion from a single existing motion capture dataset of another, drastically different character. Our experiments show that our method effectively combines the motion features of the source character with the pose features of the target character, and that it performs robustly with small or noisy pose datasets, ranging from a few artist‐created poses to noisy poses estimated directly from images. Additionally, a user study indicated that a majority of participants found our retargeted motion more enjoyable to watch, more lifelike in appearance, and less prone to artifacts. Our code and dataset can be accessed here.
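The abstract states the core idea at a high level: motion timing comes from one character's mocap data, while plausibility for the target character is enforced by a prior learned from its static poses. One natural realization of such a pose prior is adversarial: a per-frame discriminator trained solely on the target's static pose set, combined with a temporal smoothness term on the generated clip. The PyTorch sketch below illustrates that idea only; all module names, tensor shapes, and loss weights are assumptions made for the sketch, not the authors' implementation.

```python
# Minimal, hypothetical sketch (NOT the paper's architecture): adversarial
# motion retargeting where a critic trained on the target character's
# *static* poses acts as a per-frame pose prior. Shapes and weights are
# illustrative assumptions.
import torch
import torch.nn as nn

SRC_DOF, TGT_DOF, T = 66, 78, 32   # assumed joint-angle dims and clip length

class Retargeter(nn.Module):
    """Maps a window of source-character motion to target-character poses."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(SRC_DOF, 256, 3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 256, 3, padding=1), nn.ReLU(),
            nn.Conv1d(256, TGT_DOF, 3, padding=1),
        )
    def forward(self, x):            # x: (B, SRC_DOF, T) -> (B, TGT_DOF, T)
        return self.net(x)

class PoseCritic(nn.Module):
    """Scores single frames; trained only on the target's static pose set."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TGT_DOF, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, pose):         # pose: (N, TGT_DOF) -> (N, 1)
        return self.net(pose)

G, D = Retargeter(), PoseCritic()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(src_motion, tgt_poses):
    # src_motion: (B, SRC_DOF, T) mocap clips of the source character.
    # tgt_poses:  (N, TGT_DOF) static poses of the target character.
    fake = G(src_motion)                                  # (B, TGT_DOF, T)
    fake_frames = fake.permute(0, 2, 1).reshape(-1, TGT_DOF)

    # Critic: real static poses vs. individual retargeted frames.
    d_loss = bce(D(tgt_poses), torch.ones(len(tgt_poses), 1)) + \
             bce(D(fake_frames.detach()), torch.zeros(fake_frames.shape[0], 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: every frame must satisfy the pose prior, and the clip
    # must stay temporally smooth so it still reads as coherent motion.
    adv = bce(D(fake_frames), torch.ones(fake_frames.shape[0], 1))
    smooth = (fake[:, :, 1:] - fake[:, :, :-1]).pow(2).mean()
    g_loss = adv + 10.0 * smooth
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Shape-level smoke test with random stand-ins for real data.
d, g = train_step(torch.randn(4, SRC_DOF, T), torch.randn(64, TGT_DOF))
```

The last line only exercises the tensor shapes; in practice `src_motion` would come from the single source mocap dataset and `tgt_poses` from the target's artist-created or image-estimated pose set described in the abstract.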
Authors and affiliations:
1. Qingqing Zhao (Stanford University, USA)
2. Peizhuo Li (ETH Zurich, Switzerland)
3. Wang Yifan (Stanford University, USA)
4. Olga Sorkine‐Hornung (ETH Zurich, Switzerland)
5. Gordon Wetzstein (Stanford University, USA)
Copyright: 2024. This article is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI: 10.1111/cgf.15170
Peer reviewed: Yes
Open access: Yes
Open access link: https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cgf.15170