State of the Art on Neural Rendering

Published in: Computer Graphics Forum, Volume 39, Issue 2, pp. 701-727
Main authors: Tewari, A., Fried, O., Thies, J., Sitzmann, V., Lombardi, S., Sunkavalli, K., Martin‐Brualla, R., Simon, T., Saragih, J., Nießner, M., Pandey, R., Fanello, S., Wetzstein, G., Zhu, J.‐Y., Theobalt, C., Agrawala, M., Shechtman, E., Goldman, D. B., Zollhöfer, M.
Medium: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.05.2020
ISSN:0167-7055, 1467-8659
Abstract Efficient rendering of photo‐realistic virtual worlds is a long‐standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo‐realistic images from hand‐crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo‐realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state‐of‐the‐art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photorealistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. Specifically, our emphasis is on the type of control, i.e., how the control is provided, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. The second half of this state‐of‐the‐art report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free‐viewpoint video, and the creation of photo‐realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.
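The abstract's central technical idea, integrating differentiable rendering into network training, can be made concrete with a minimal sketch. The toy PyTorch example below is an illustrative assumption, not anything defined by the survey: it builds a "renderer" (a single soft 2D disk splat) entirely from differentiable operations, so an image-space loss can drive gradient descent on the scene parameters; the same gradients could just as well supervise a network that predicts those parameters.

```python
# Minimal sketch: a differentiable toy renderer inside an optimization loop.
# All names and parameter values are illustrative, not the survey's method.
import torch

H = W = 64
ys, xs = torch.meshgrid(
    torch.linspace(0.0, 1.0, H),
    torch.linspace(0.0, 1.0, W),
    indexing="ij",
)

def render(center, radius, color, sharpness=25.0):
    """Soft-disk rasterizer: smooth coverage keeps gradients non-zero."""
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    alpha = torch.sigmoid(sharpness * (radius ** 2 - d2))  # soft in/out test
    return alpha.unsqueeze(-1) * color  # (H, W, 3) RGB image

# Target image rendered from "hidden" parameters we pretend not to know.
with torch.no_grad():
    target = render(torch.tensor([0.7, 0.3]), torch.tensor(0.2),
                    torch.tensor([0.9, 0.2, 0.1]))

# Learnable scene parameters, fitted purely from an image-space loss.
center = torch.nn.Parameter(torch.tensor([0.45, 0.55]))
radius = torch.nn.Parameter(torch.tensor(0.1))
color = torch.nn.Parameter(torch.tensor([0.3, 0.3, 0.3]))
opt = torch.optim.Adam([center, radius, color], lr=2e-2)

for step in range(400):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(center, radius, color), target)
    loss.backward()  # gradients flow back through the renderer itself
    opt.step()

print(f"loss {loss.item():.6f}, recovered center {center.detach().tolist()}")
```

Replacing a hard inside/outside coverage test with a smooth sigmoid falloff is what keeps the gradients with respect to position and radius non-zero away from the disk boundary; softened coverage of this kind is a common device in differentiable rasterization.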
Authors (in order, with email and affiliation):
1. Tewari, A. (atewari@mpi-inf.mpg.de), MPI Informatics
2. Fried, O. (ohad@stanford.edu), Stanford University
3. Thies, J. (justus.thies@tum.de), Technical University of Munich
4. Sitzmann, V. (sitzmann@cs.stanford.edu), Stanford University
5. Lombardi, S. (stephen.a.lombardi@gmail.com), Facebook Reality Labs
6. Sunkavalli, K. (sunkaval@adobe.com), Adobe Research
7. Martin‐Brualla, R. (rmbrualla@google.com), Google Inc
8. Simon, T. (tomas.simon@oculus.com), Facebook Reality Labs
9. Saragih, J. (jason.saragih@oculus.com), Facebook Reality Labs
10. Nießner, M. (niessner@tum.de), Technical University of Munich
11. Pandey, R. (rohitpandey@google.com), Google Inc
12. Fanello, S. (seanfa@google.com), Google Inc
13. Wetzstein, G. (gordon.wetzstein@stanford.edu), Stanford University
14. Zhu, J.‐Y. (junyanz@cs.cmu.edu), Adobe Research
15. Theobalt, C. (theobalt@mpi-inf.mpg.de), MPI Informatics
16. Agrawala, M. (maneesh@cs.stanford.edu), Stanford University
17. Shechtman, E. (elishe@adobe.com), Adobe Research
18. Goldman, D. B. (danbgoldman@gmail.com), Google Inc
19. Zollhöfer, M. (zollhoefer@cs.stanford.edu), Facebook Reality Labs
ContentType Journal Article
Copyright 2020 The Author(s) Computer Graphics Forum © 2020 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
2020 The Eurographics Association and John Wiley & Sons Ltd.
DOI 10.1111/cgf.14022
Discipline Engineering
EISSN 1467-8659
EndPage 727
Genre article
GrantInformation:
– ERC Starting Grant Scan2CAD (funder ID 804724)
– Okawa Research Grant
– Google, Sony, a TUM‐IAS Rudolf Mößbauer Fellowship
– Brown Institute for Media Innovation
– Sloan Fellowship
– ERC Consolidator Grant 4DRepLy (funder ID 770784)
– NSF (IIS 1553333; CMMI 1839974)
ISICitedReferencesCount 336
ISSN 0167-7055
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 2
Language English
Notes Equal contribution.
OpenAccessLink http://hdl.handle.net/21.11116/0000-0007-E114-4
PageCount 27
PublicationDate May 2020
PublicationPlace Oxford
PublicationTitle Computer graphics forum
PublicationYear 2020
Publisher Blackwell Publishing Ltd
StartPage 701
SubjectTerms Algorithms
Augmented reality
Avatars
Computer graphics
Computer vision
Machine learning
Rendering
Stability
Synthesis
Virtual reality
Title State of the Art on Neural Rendering
URI https://onlinelibrary.wiley.com/doi/abs/10.1111%2Fcgf.14022
https://www.proquest.com/docview/2422960650
Volume 39