SketchMetaFace: A Learning-Based Sketching Interface for High-Fidelity 3D Character Face Modeling

Detailed bibliography
Published in: IEEE Transactions on Visualization and Computer Graphics, Volume 30, Issue 8, pp. 5260-5275
Main authors: Luo, Zhongjin; Du, Dong; Zhu, Heming; Yu, Yizhou; Fu, Hongbo; Han, Xiaoguang
Format: Journal Article
Language: English
Publication details: United States, IEEE, 01.08.2024
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 1077-2626, 1941-0506
Online access: Get full text
Abstract Modeling 3D avatars benefits various application scenarios such as AR/VR, gaming, and filming. Character faces contribute significant diversity and vividness as a vital component of avatars. However, building 3D character face models usually requires a heavy workload with commercial tools, even for experienced artists. Various existing sketch-based tools fail to support amateurs in modeling diverse facial shapes and rich geometric details. In this article, we present SketchMetaFace, a sketching system targeting amateur users to model high-fidelity 3D faces in minutes. We carefully design both the user interface and the underlying algorithm. First, curvature-aware strokes are adopted to better support the controllability of carving facial details. Second, considering the key problem of mapping a 2D sketch map to a 3D model, we develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM). It fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency. In addition, to further support usability, we present a coarse-to-fine 2D sketching interface design and a data-driven stroke suggestion tool. User studies demonstrate the superiority of our system over existing modeling tools in terms of ease of use and visual quality of results. Experimental analyses also show that IDGMM reaches a better trade-off between accuracy and efficiency.
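The abstract's central technical idea, that an implicit (signed-distance) field and a depth cue can jointly guide mesh refinement, can be illustrated with a toy numeric sketch. This is entirely hypothetical illustration code, not the paper's IDGMM implementation: `refine_vertices`, the simple averaging fusion, and the unit-sphere SDF are assumptions made only for demonstration.

```python
import numpy as np

def refine_vertices(vertices, normals, sdf, depth_correction, step=1.0):
    """Toy refinement: move each vertex along its outward normal by a blend of
    (a) the signed distance the implicit field reports at the vertex, and
    (b) a per-vertex depth residual (standing in for a predicted depth map).
    Illustrates only the *idea* of fusing implicit and depth guidance."""
    sdf_vals = np.array([sdf(v) for v in vertices])      # implicit guidance
    offset = step * (sdf_vals + depth_correction) / 2.0  # naive fusion: average the two cues
    return vertices - offset[:, None] * normals          # pull vertices toward the surface

# Example: two vertices just outside the unit sphere, refined onto it.
sphere_sdf = lambda p: np.linalg.norm(p) - 1.0           # SDF of the unit sphere
verts = np.array([[1.2, 0.0, 0.0], [0.0, 1.1, 0.0]])
norms = verts / np.linalg.norm(verts, axis=1, keepdims=True)
depth_corr = np.array([0.2, 0.1])                        # pretend per-vertex depth residuals
refined = refine_vertices(verts, norms, sphere_sdf, depth_corr)
# Both refined vertices now lie exactly on the unit sphere.
```

In the real system the implicit field and depth map come from learned networks and the fusion itself is learned; the averaging above is only the simplest possible stand-in.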
Author Zhu, Heming
Luo, Zhongjin
Du, Dong
Fu, Hongbo
Han, Xiaoguang
Yu, Yizhou
Author details (sequence, ORCID, email, affiliation):
1. Zhongjin Luo (ORCID 0000-0002-3483-4236), 220019015@link.cuhk.edu.cn, School of Science and Engineering, Chinese University of Hong Kong, Shenzhen, China
2. Dong Du (ORCID 0000-0001-5481-389X), dongdu@mail.ustc.edu.cn, School of Science and Engineering, Chinese University of Hong Kong, Shenzhen, China
3. Heming Zhu (ORCID 0000-0003-3525-9349), hezhu@mpi-inf.mpg.de, School of Science and Engineering, Chinese University of Hong Kong, Shenzhen, China
4. Yizhou Yu (ORCID 0000-0002-0470-5548), yizhouy@acm.org, Department of Computer Science, University of Hong Kong, Hong Kong
5. Hongbo Fu (ORCID 0000-0002-0284-726X), fuplus@gmail.com, School of Creative Media, City University of Hong Kong, Hong Kong
6. Xiaoguang Han (ORCID 0000-0003-0162-3296), hanxiaoguang@cuhk.edu.cn, School of Science and Engineering, Chinese University of Hong Kong, Shenzhen, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/37467083 (view this record in MEDLINE/PubMed)
CODEN ITVGEA
CitedBy_id 10.1016/j.media.2025.103653
10.3390/electronics13132445
10.1109/TVCG.2024.3521333
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
DOI 10.1109/TVCG.2023.3291703
Discipline Engineering
EISSN 1941-0506
EndPage 5275
ExternalDocumentID 37467083
10_1109_TVCG_2023_3291703
10188511
Genre orig-research
Journal Article
GrantInformation Shenzhen General Project (JCYJ20220530143604010); National Natural Science Foundation of China, NSFC (62172348; funder ID 10.13039/501100001809); National Key R&D Program of China (2018YFB1800800); Basic Research Project (HZQB-KCZYZ-2021067); Shenzhen Outstanding Talents Training Fund (202002); Guangdong Research Projects (2017ZT07X152, 2019CX01X104); Research Grants Council of the Hong Kong Special Administrative Region, China (CityU 11212119); Guangdong Provincial Key Laboratory of Future Networks of Intelligence (2022B1212010001); Shenzhen Key Laboratory of Big Data and Artificial Intelligence (ZDSYS201707251409055); Hong Kong Research Grants Council under General Research Funds (HKU17206218)
ISICitedReferencesCount 3
ISSN 1077-2626
1941-0506
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 8
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0002-0284-726X
0000-0003-3525-9349
0000-0002-0470-5548
0000-0002-3483-4236
0000-0001-5481-389X
0000-0003-0162-3296
OpenAccessLink https://scholars.cityu.edu.hk/en/publications/isketchmetafacei-a-learning-based-sketching-interface-for-high-fi
PMID 37467083
PQID 3075425679
PQPubID 75741
PageCount 16
PublicationCentury 2000
PublicationDate 2024-08-01
PublicationDateYYYYMMDD 2024-08-01
PublicationDecade 2020
PublicationPlace United States
PublicationTitle IEEE transactions on visualization and computer graphics
PublicationTitleAbbrev TVCG
PublicationTitleAlternate IEEE Trans Vis Comput Graph
PublicationYear 2024
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 5260
SubjectTerms Accuracy
Algorithms
Artists
Avatars
Computational modeling
Face modeling
Faces
Image reconstruction
Learning
Load modeling
Modelling
neural network
Shape
sketch-based 3D modeling
Solid modeling
Three dimensional models
Three-dimensional displays
Title SketchMetaFace: A Learning-Based Sketching Interface for High-Fidelity 3D Character Face Modeling
URI https://ieeexplore.ieee.org/document/10188511
https://www.ncbi.nlm.nih.gov/pubmed/37467083
https://www.proquest.com/docview/3075425679
https://www.proquest.com/docview/2840246249
Volume 30
WOSCitedRecordID WOS:001262914400075