High-Fidelity Monocular Face Reconstruction Based on an Unsupervised Model-Based Face Autoencoder

Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 42, Issue 2, pp. 357-370
Main authors: Tewari, Ayush; Zollhofer, Michael; Bernard, Florian; Garrido, Pablo; Kim, Hyeongwoo; Perez, Patrick; Theobalt, Christian
Medium: Journal Article
Language: English
Publication details: United States: IEEE, 01.02.2020
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects:
ISSN: 0162-8828, 1939-3539, 2160-9292
Abstract In this work, we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3D human face from a single in-the-wild color image. To this end, we combine a convolutional encoder network with an expert-designed generative model that serves as decoder. The core innovation is the differentiable parametric decoder that encapsulates image formation analytically based on a generative model. Our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose, shape, expression, skin reflectance, and scene illumination. Due to this new way of combining CNN-based with model-based face reconstruction, the CNN-based encoder learns to extract semantically meaningful parameters from a single monocular input image. For the first time, a CNN encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner, which renders training on very large (unlabeled) real-world datasets feasible. The obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation. This work is an extended version of [1], where we additionally present a stochastic vertex sampling technique for faster training of our networks, and moreover, we propose and evaluate analysis-by-synthesis and shape-from-shading refinement approaches to achieve a high-fidelity reconstruction.
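The abstract above describes the core mechanism: a CNN encoder regresses a code vector with fixed semantic slots (pose, shape, expression, reflectance, illumination), and a differentiable, expert-designed decoder synthesizes an image from that code so that a purely photometric loss can be trained end-to-end without labels. The following is a minimal sketch of that idea, not the authors' implementation: the layer sizes, code dimensions, and in particular the toy linear decoder are illustrative placeholders, whereas the paper uses a parametric 3D face model with an analytic image-formation (rendering) layer.

```python
# Hedged sketch of a model-based autoencoder (illustrative, not the paper's code).
import torch
import torch.nn as nn

# Illustrative code-vector layout; the slot sizes are placeholders.
CODE_DIMS = {"pose": 6, "shape": 80, "expression": 64,
             "reflectance": 80, "illumination": 27}


class Encoder(nn.Module):
    """CNN that maps a color image to the semantically structured code vector."""

    def __init__(self, code_dims=CODE_DIMS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One linear head per semantic slot keeps the code interpretable.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(128, dim) for name, dim in code_dims.items()})

    def forward(self, image):
        features = self.backbone(image)
        return {name: head(features) for name, head in self.heads.items()}


class ToyModelBasedDecoder(nn.Module):
    """Differentiable stand-in for the expert-designed image-formation model.

    A fixed linear map plays the role of the generative face model here; it is
    NOT the paper's analytic renderer, only a placeholder that keeps the
    example runnable and end-to-end differentiable.
    """

    def __init__(self, code_dims=CODE_DIMS, image_size=64):
        super().__init__()
        total = sum(code_dims.values())
        # Frozen "model" parameters: registered as a buffer, never trained.
        self.register_buffer(
            "basis", torch.randn(total, 3 * image_size * image_size) * 0.01)
        self.image_size = image_size

    def forward(self, code):
        flat = torch.cat([code[name] for name in sorted(code)], dim=1)
        image = torch.sigmoid(flat @ self.basis)
        return image.view(-1, 3, self.image_size, self.image_size)


def photometric_loss(rendered, target):
    """Unsupervised loss: compare the synthesized image against the input pixels."""
    return ((rendered - target) ** 2).mean()


if __name__ == "__main__":
    encoder, decoder = Encoder(), ToyModelBasedDecoder()
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

    images = torch.rand(4, 3, 64, 64)               # stand-in for real face crops
    code = encoder(images)                          # semantic parameters
    loss = photometric_loss(decoder(code), images)  # analysis-by-synthesis objective
    loss.backward()                                 # gradients reach the encoder only
    optimizer.step()
    print(f"photometric loss: {loss.item():.4f}")
```

Because the decoder is a fixed, differentiable function rather than a learned network, the image-space loss back-propagates into the encoder, which is what forces the encoder to output parameters with the decoder's predefined semantics.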
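The abstract also mentions a stochastic vertex sampling technique for faster training. A hedged sketch of that idea, assuming the photometric term is evaluated per model vertex: draw a random subset of vertices at each iteration and evaluate the loss only there, which reduces the per-step cost while keeping the objective unbiased in expectation. The helper name, tensor shapes, and sample count below are illustrative and not taken from the paper.

```python
# Hedged sketch of stochastic vertex sampling for a per-vertex photometric loss.
import torch


def sampled_photometric_loss(model_colors, image_colors, num_samples=1024,
                             generator=None):
    """Evaluate the color loss on a random subset of vertices.

    model_colors: (B, N, 3) colors predicted by the face model at each vertex
    image_colors: (B, N, 3) input-image colors sampled at the projected vertices
    """
    _, num_vertices, _ = model_colors.shape
    idx = torch.randint(0, num_vertices, (num_samples,), generator=generator)
    diff = model_colors[:, idx] - image_colors[:, idx]
    # Per-vertex L2 color distance (small eps keeps the gradient finite at zero).
    return (diff.pow(2).sum(dim=-1) + 1e-8).sqrt().mean()


if __name__ == "__main__":
    # Illustrative shapes: 2 images, ~30k vertices, RGB colors in [0, 1].
    model_colors = torch.rand(2, 30000, 3, requires_grad=True)
    image_colors = torch.rand(2, 30000, 3)
    loss = sampled_photometric_loss(model_colors, image_colors)
    loss.backward()   # only the sampled vertices contribute gradients this step
    print(f"sampled loss: {loss.item():.4f}")
```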
Author Kim, Hyeongwoo
Perez, Patrick
Zollhofer, Michael
Tewari, Ayush
Bernard, Florian
Theobalt, Christian
Garrido, Pablo
Author_xml – sequence: 1
  givenname: Ayush
  orcidid: 0000-0002-3805-4421
  surname: Tewari
  fullname: Tewari, Ayush
  email: atewari@mpi-inf.mpg.de
  organization: Max-Planck-Institute for Informatics, Saarbrücken, Germany
– sequence: 2
  givenname: Michael
  surname: Zollhofer
  fullname: Zollhofer, Michael
  email: zollhoefer@cs.stanford.edu
  organization: Stanford University, Stanford, CA, USA
– sequence: 3
  givenname: Florian
  surname: Bernard
  fullname: Bernard, Florian
  email: f.bernardpi@gmail.com
  organization: Max-Planck-Institute for Informatics, Saarbrücken, Germany
– sequence: 4
  givenname: Pablo
  surname: Garrido
  fullname: Garrido, Pablo
  email: pablo.garrido.adrian@gmail.com
  organization: Technicolor, Issy-les-Moulineaux, France
– sequence: 5
  givenname: Hyeongwoo
  orcidid: 0000-0003-0858-0882
  surname: Kim
  fullname: Kim, Hyeongwoo
  email: hyeongwoo.kim@mpi-inf.mpg.de
  organization: Max-Planck-Institute for Informatics, Saarbrücken, Germany
– sequence: 6
  givenname: Patrick
  surname: Perez
  fullname: Perez, Patrick
  email: Patrick.Perez@technicolor.com
  organization: Technicolor, Issy-les-Moulineaux, France
– sequence: 7
  givenname: Christian
  surname: Theobalt
  fullname: Theobalt, Christian
  email: theobalt@mpi-inf.mpg.de
  organization: Max-Planck-Institute for Informatics, Saarbrücken, Germany
BackLink https://www.ncbi.nlm.nih.gov/pubmed/30334783 (View this record in MEDLINE/PubMed)
CODEN ITPIDJ
CitedBy_id crossref_primary_10_1109_TVCG_2022_3166666
crossref_primary_10_1016_j_cose_2024_104217
crossref_primary_10_1016_j_cosrev_2021_100400
crossref_primary_10_1109_TVCG_2020_3033838
crossref_primary_10_1007_s11554_023_01257_z
crossref_primary_10_3390_rs15225404
crossref_primary_10_1109_ACCESS_2025_3551397
crossref_primary_10_1109_TIM_2022_3152243
crossref_primary_10_1016_j_media_2022_102730
crossref_primary_10_1007_s41095_021_0238_4
crossref_primary_10_1016_j_cviu_2022_103525
crossref_primary_10_1109_TPAMI_2019_2920821
crossref_primary_10_1109_TPAMI_2021_3084524
crossref_primary_10_1186_s12903_023_03142_4
crossref_primary_10_1109_TIP_2021_3065798
crossref_primary_10_1109_TPAMI_2025_3562651
crossref_primary_10_3389_fams_2022_869830
crossref_primary_10_1007_s10462_021_10039_7
crossref_primary_10_1111_cgf_14400
crossref_primary_10_1109_JIOT_2021_3114373
crossref_primary_10_1007_s00371_023_02946_3
crossref_primary_10_1016_j_patrec_2021_11_022
crossref_primary_10_1109_TVCG_2020_3023573
crossref_primary_10_1051_itmconf_20224403024
crossref_primary_10_1109_ACCESS_2023_3324403
crossref_primary_10_1016_j_imavis_2021_104311
crossref_primary_10_3390_app13116407
crossref_primary_10_1109_TMC_2023_3262233
crossref_primary_10_1109_TCYB_2023_3242368
Cites_doi 10.1145/2647868.2654889
10.1109/ICCVW.2015.126
10.1007/978-3-642-15549-9_25
10.1631/FITEE.1700253
10.1109/TPAMI.2011.172
10.20870/IJVR.2010.9.1.2761
10.1016/j.cviu.2015.01.008
10.1109/CVPR.2017.250
10.1007/978-3-319-46448-0_3
10.1109/CVPR.2011.5995388
10.1109/TIFS.2015.2446438
10.1109/CVPR.2017.44
10.1109/TMM.2015.2477042
10.1109/CVPR.2005.145
10.1109/ICCVW.2017.110
10.1145/2661229.2661290
10.1109/ICCV.2017.175
10.1145/1778765.1778777
10.1109/34.927467
10.1109/ICCV.2017.117
10.1007/978-3-642-21735-7_7
10.1145/3099564.3099581
10.1109/ICCV.2013.21
10.1109/ICCVW.2013.58
10.1007/s11263-010-0408-9
10.1109/CVPR.2013.446
10.1007/978-3-319-46454-1_37
10.1109/CVPR.2014.243
10.1109/CVPR.2017.589
10.1109/CVPR.2017.164
10.1007/s11263-010-0380-4
10.1109/CVPR.2017.580
10.1109/3DV.2016.56
10.1145/2929464.2929475
10.1007/978-3-540-87536-9_99
10.1109/ICCV.2015.450
10.1007/978-3-319-46448-0_21
10.1145/311535.311556
10.1109/TVCG.2013.249
10.1109/CVPR.2018.00877
10.1109/CVPR.2015.7298989
10.1109/ICPR.2016.7899665
10.1111/1467-8659.t01-1-00712
10.1007/BFb0094775
10.1109/CVPR.2016.455
10.1145/2766943
10.1145/2508363.2508380
10.1145/1667239.1667251
10.1109/CVPR.2017.163
10.1109/ICCV.2015.425
10.1126/science.1127647
10.1145/1964921.1964970
10.1109/CVPR.2018.00486
10.1109/ICCV.2011.6126439
10.1145/1015706.1015736
10.1007/978-3-319-10605-2_1
10.1109/ICCVW.2015.132
10.1145/2897824.2925933
10.1109/CVPR.2018.00270
10.1016/j.cviu.2017.08.008
10.1007/978-3-319-49409-8_9
10.1109/ICCV.2017.429
10.1007/s11263-014-0775-8
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
DOI 10.1109/TPAMI.2018.2876842
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE/IET Electronic Library (IEL) (UW System Shared)
CrossRef
Medline
MEDLINE
MEDLINE (Ovid)
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitle CrossRef
MEDLINE
Medline Complete
MEDLINE with Full Text
PubMed
MEDLINE (Ovid)
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
DatabaseTitleList MEDLINE
Technology Research Database
MEDLINE - Academic

Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE/IET Electronic Library (IEL) (UW System Shared)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
– sequence: 3
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 370
ExternalDocumentID 30334783
10_1109_TPAMI_2018_2876842
8496850
Genre orig-research
Research Support, Non-U.S. Gov't
Journal Article
GrantInformation_xml – fundername: Max Planck Center for Visual Computing and Communications
– fundername: ERC Starting Grant
  grantid: CapReal 335545
ISICitedReferencesCount 41
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=000508386100009&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 0162-8828
1939-3539
IngestDate Sun Nov 09 10:32:04 EST 2025
Sun Nov 30 04:32:43 EST 2025
Wed Feb 19 02:29:27 EST 2025
Sat Nov 29 05:15:58 EST 2025
Tue Nov 18 21:49:20 EST 2025
Wed Aug 27 02:40:53 EDT 2025
IsPeerReviewed true
IsScholarly true
Issue 2
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
ORCID 0000-0002-3805-4421
0000-0003-0858-0882
PMID 30334783
PQID 2339335010
PQPubID 85458
PageCount 14
ParticipantIDs crossref_primary_10_1109_TPAMI_2018_2876842
ieee_primary_8496850
proquest_journals_2339335010
crossref_citationtrail_10_1109_TPAMI_2018_2876842
proquest_miscellaneous_2122577825
pubmed_primary_30334783
PublicationCentury 2000
PublicationDate 2020-02-01
PublicationDateYYYYMMDD 2020-02-01
PublicationDate_xml – month: 02
  year: 2020
  text: 2020-02-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
– name: New York
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2020
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref57
ref13
kulkarni (ref19) 2015
ref12
ref59
ref15
ref58
ref14
ref53
ref52
zhu (ref36) 2014
ref55
ref11
ref10
huang (ref68) 2007
ref17
parkhi (ref70) 2015
zhao (ref18) 2016
tang (ref34) 2012
ref51
ref50
duong (ref39) 2016
ref46
ref89
ref45
ref48
tewari (ref1) 2017
ref47
ref86
ref42
ref85
ref41
huber (ref25) 2016
ref88
ref44
ref43
(ref74) 2008
güler (ref38) 2016
ref7
ref4
ref3
grant (ref20) 2016
hinton (ref16) 2006; 313
ref6
ref5
ref82
garrido (ref87) 2013; 32
ref81
ref40
cao (ref71) 2014; 20
bulat (ref32) 2016
ref80
ref79
ref35
ref75
ref31
ref30
ref77
ref76
ref2
suwajanakorn (ref9) 2014
garrido (ref8) 2016; 35
tylecek (ref83) 2010; 9
krizhevsky (ref69) 2012
yan (ref56) 2016
ref73
jaderberg (ref54) 2015
zhmoginov (ref49) 2016
ref24
ref67
ref23
ref64
ref63
ref66
ref22
(ref78) 2012
ref65
ref21
li (ref37) 2016
ref28
ref27
ref29
ranjan (ref33) 2016
cao (ref60) 2014; 20
cao (ref84) 2015; 34
wang (ref26) 2014
bradski (ref72) 2000; 25
ref62
ref61
References_xml – ident: ref73
  doi: 10.1145/2647868.2654889
– ident: ref65
  doi: 10.1109/ICCVW.2015.126
– ident: ref11
  doi: 10.1007/978-3-642-15549-9_25
– ident: ref21
  doi: 10.1631/FITEE.1700253
– ident: ref82
  doi: 10.1109/TPAMI.2011.172
– volume: 9
  start-page: 45
  year: 2010
  ident: ref83
  article-title: Refinement of surface mesh for accurate multiview reconstruction
  publication-title: Int J Virtual Reality
  doi: 10.20870/IJVR.2010.9.1.2761
– ident: ref89
  doi: 10.1016/j.cviu.2015.01.008
– ident: ref43
  doi: 10.1109/CVPR.2017.250
– ident: ref31
  doi: 10.1007/978-3-319-46448-0_3
– ident: ref81
  doi: 10.1109/CVPR.2011.5995388
– start-page: 2017
  year: 2015
  ident: ref54
  article-title: Spatial transformer networks
  publication-title: Proc Int Conf Neural Inf Process
– ident: ref52
  doi: 10.1109/TIFS.2015.2446438
– year: 2015
  ident: ref70
  article-title: Deep face recognition
  publication-title: Proc Brit Mach Vis Conf
– year: 2014
  ident: ref26
  article-title: Facial feature point detection: A comprehensive survey
  publication-title: CoRR
– year: 2016
  ident: ref33
  article-title: An all-in-one convolutional neural network for face analysis
– ident: ref46
  doi: 10.1109/CVPR.2017.44
– start-page: 1419
  year: 2012
  ident: ref34
  article-title: Deep lambertian networks
  publication-title: Proc Int Conf Mach Learn
– ident: ref53
  doi: 10.1109/TMM.2015.2477042
– start-page: 2539
  year: 2015
  ident: ref19
  article-title: Deep convolutional inverse graphics network
  publication-title: Proc Int Conf Neural Inf Process
– ident: ref24
  doi: 10.1109/CVPR.2005.145
– ident: ref75
  doi: 10.1109/CVPR.2005.145
– ident: ref57
  doi: 10.1109/ICCVW.2017.110
– ident: ref88
  doi: 10.1145/2661229.2661290
– ident: ref15
  doi: 10.1109/ICCV.2017.175
– ident: ref85
  doi: 10.1145/1778765.1778777
– volume: 35
  year: 2016
  ident: ref8
  article-title: Reconstruction of personalized 3D face rigs from monocular video
  publication-title: ACM Trans Graph
– ident: ref22
  doi: 10.1109/34.927467
– ident: ref45
  doi: 10.1109/ICCV.2017.117
– ident: ref17
  doi: 10.1007/978-3-642-21735-7_7
– ident: ref41
  doi: 10.1145/3099564.3099581
– year: 2016
  ident: ref38
  article-title: DenseReg: Fully convolutional dense shape regression in-the-wild
  publication-title: CoRR
– ident: ref35
  doi: 10.1109/ICCV.2013.21
– ident: ref29
  doi: 10.1109/ICCVW.2013.58
– year: 2007
  ident: ref68
  article-title: Labeled faces in the wild: A database for studying face recognition in unconstrained environments
– ident: ref80
  doi: 10.1007/s11263-010-0408-9
– ident: ref28
  doi: 10.1109/CVPR.2013.446
– year: 2008
  ident: ref74
  article-title: NVIDIA CUDA Programming Guide 2.0
– ident: ref42
  doi: 10.1007/978-3-319-46454-1_37
– year: 2012
  ident: ref78
  publication-title: CUBLAS Library User Guide, v5.0 ed., NVIDIA
– ident: ref50
  doi: 10.1109/CVPR.2014.243
– ident: ref13
  doi: 10.1109/CVPR.2017.589
– ident: ref47
  doi: 10.1109/CVPR.2017.164
– ident: ref63
  doi: 10.1007/s11263-010-0380-4
– ident: ref23
  doi: 10.1109/CVPR.2017.580
– ident: ref12
  doi: 10.1109/3DV.2016.56
– ident: ref4
  doi: 10.1145/2929464.2929475
– ident: ref48
  doi: 10.1007/978-3-540-87536-9_99
– year: 2016
  ident: ref25
  article-title: 3D face tracking and texture fusion in the wild
  publication-title: CoRR
– ident: ref10
  doi: 10.1109/ICCV.2015.450
– ident: ref79
  doi: 10.1007/978-3-319-46448-0_21
– year: 2016
  ident: ref37
  article-title: Convolutional network for attribute-driven and identity-preserving human face generation
  publication-title: CoRR
– start-page: 1097
  year: 2012
  ident: ref69
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Proc Int Conf Neural Inf Process
– ident: ref5
  doi: 10.1145/311535.311556
– start-page: 616
  year: 2016
  ident: ref32
  article-title: Two-stage convolutional part heatmap regression for the 1st 3D face alignment in the wild (3DFAW) challenge
  publication-title: Proc Eur Conf Comput Vis Workshops
– volume: 20
  start-page: 413
  year: 2014
  ident: ref60
  article-title: FaceWarehouse: A 3D facial expression database for visual computing
  publication-title: IEEE Trans Vis Comput Graph
  doi: 10.1109/TVCG.2013.249
– ident: ref44
  doi: 10.1109/CVPR.2018.00877
– ident: ref67
  doi: 10.1109/CVPR.2015.7298989
– ident: ref40
  doi: 10.1109/ICPR.2016.7899665
– ident: ref6
  doi: 10.1111/1467-8659.t01-1-00712
– ident: ref62
  doi: 10.1007/BFb0094775
– ident: ref3
  doi: 10.1109/CVPR.2016.455
– year: 2016
  ident: ref49
  article-title: Inverting face embeddings with convolutional neural networks
  publication-title: CoRR
– year: 2016
  ident: ref18
  article-title: Robust LSTM-autoencoders for face de-occlusion in the wild
  publication-title: CoRR
– volume: 34
  start-page: 46:1
  year: 2015
  ident: ref84
  article-title: Real-time high-fidelity facial performance capture
  publication-title: ACM Trans Graph
  doi: 10.1145/2766943
– volume: 32
  start-page: 158:1
  year: 2013
  ident: ref87
  article-title: Reconstructing detailed dynamic face geometry from monocular video
  publication-title: ACM Trans Graph
  doi: 10.1145/2508363.2508380
– ident: ref59
  doi: 10.1145/1667239.1667251
– ident: ref14
  doi: 10.1109/CVPR.2017.163
– year: 2016
  ident: ref39
  article-title: Deep appearance models: A deep Boltzmann machine approach for face modeling
  publication-title: CoRR
– start-page: 3735
  year: 2017
  ident: ref1
  article-title: MoFA: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction
  publication-title: Proc Int Conf Comput Vis
– ident: ref64
  doi: 10.1109/ICCV.2015.425
– volume: 313
  start-page: 504
  year: 2006
  ident: ref16
  article-title: Reducing the dimensionality of data with neural networks
  publication-title: Science
  doi: 10.1126/science.1127647
– ident: ref86
  doi: 10.1145/1964921.1964970
– ident: ref77
  doi: 10.1109/CVPR.2018.00486
– ident: ref2
  doi: 10.1109/ICCV.2011.6126439
– start-page: 796
  year: 2014
  ident: ref9
  article-title: Total moving face reconstruction
  publication-title: Proc Eur Conf Comput Vis
– volume: 20
  start-page: 413
  year: 2014
  ident: ref71
  article-title: FaceWarehouse: A 3D facial expression database for visual computing
  publication-title: IEEE Trans Vis Comput Graph
  doi: 10.1109/TVCG.2013.249
– ident: ref61
  doi: 10.1145/1015706.1015736
– start-page: 266
  year: 2016
  ident: ref20
  article-title: Deep disentangled representations for volumetric reconstruction
  publication-title: Proc Eur Conf Comput Vis Workshops
– ident: ref51
  doi: 10.1007/978-3-319-10605-2_1
– ident: ref66
  doi: 10.1109/ICCVW.2015.132
– ident: ref7
  doi: 10.1145/2897824.2925933
– ident: ref76
  doi: 10.1109/CVPR.2018.00270
– start-page: 1696
  year: 2016
  ident: ref56
  article-title: Perspective transformer nets: Learning single-view 3D object reconstruction without 3D supervision
  publication-title: Adv Neural Inf Process Syst
– start-page: 217
  year: 2014
  ident: ref36
  article-title: Multi-view perceptron: A deep model for learning face identity and view representations
  publication-title: Proc 27th Int Conf Neural Inf Process Syst
– ident: ref27
  doi: 10.1016/j.cviu.2017.08.008
– ident: ref55
  doi: 10.1007/978-3-319-49409-8_9
– ident: ref58
  doi: 10.1109/ICCV.2017.429
– ident: ref30
  doi: 10.1007/s11263-014-0775-8
– volume: 25
  start-page: 120
  year: 2000
  ident: ref72
  article-title: The OpenCV library
  publication-title: Dr Dobb's J Softw Tools
SSID ssj0014503
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 357
SubjectTerms Accuracy
Coders
Color imagery
Decoding
Deep Learning
Face
Face - anatomy & histology
Face - diagnostic imaging
Female
Humans
Image reconstruction
Imaging, Three-Dimensional - methods
Lighting
Male
Neural Networks, Computer
Shading
Shape
Three-dimensional displays
Training
Unsupervised Machine Learning
Title High-Fidelity Monocular Face Reconstruction Based on an Unsupervised Model-Based Face Autoencoder
URI https://ieeexplore.ieee.org/document/8496850
https://www.ncbi.nlm.nih.gov/pubmed/30334783
https://www.proquest.com/docview/2339335010
https://www.proquest.com/docview/2122577825
Volume 42
WOSCitedRecordID wos000508386100009&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE/IET Electronic Library (IEL) (UW System Shared)
  customDbUrl:
  eissn: 2160-9292
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014503
  issn: 0162-8828
  databaseCode: RIE
  dateStart: 19790101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=High-Fidelity+Monocular+Face+Reconstruction+Based+on+an+Unsupervised+Model-Based+Face+Autoencoder&rft.jtitle=IEEE+transactions+on+pattern+analysis+and+machine+intelligence&rft.au=Tewari%2C+Ayush&rft.au=Zollhofer%2C+Michael&rft.au=Bernard%2C+Florian&rft.au=Garrido%2C+Pablo&rft.date=2020-02-01&rft.issn=0162-8828&rft.eissn=2160-9292&rft.volume=42&rft.issue=2&rft.spage=357&rft.epage=370&rft_id=info:doi/10.1109%2FTPAMI.2018.2876842&rft.externalDBID=n%2Fa&rft.externalDocID=10_1109_TPAMI_2018_2876842