HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs

Bibliographic Details
Published in: Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online), pp. 7733 - 7743
Main Authors: Zhao, Fuqiang, Yang, Wei, Zhang, Jiakai, Lin, Pei, Zhang, Yingliang, Yu, Jingyi, Xu, Lan
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2022
Subjects:
ISSN: 1063-6919
Abstract Recent neural human representations can produce high-quality multi-view rendering but require dense multi-view inputs and costly training. They are hence largely limited to static models, as training each frame is infeasible. We present HumanNeRF - a neural representation with efficient generalization ability - for high-fidelity free-view synthesis of dynamic humans. Analogous to how IBRNet assists NeRF by avoiding per-scene training, HumanNeRF employs an aggregated pixel-alignment feature across multi-view inputs along with a pose-embedded non-rigid deformation field for tackling dynamic motions. The raw HumanNeRF can already produce reasonable rendering on sparse video inputs of unseen subjects and camera settings. To further improve the rendering quality, we augment our solution with in-hour scene-specific fine-tuning and an appearance blending module for combining the benefits of both neural volumetric rendering and neural texture blending. Extensive experiments on various multi-view dynamic human datasets demonstrate the effectiveness of our approach in synthesizing photo-realistic free-view humans under challenging motions and with very sparse camera view inputs.
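To make the pipeline described in the abstract concrete, the minimal sketch below (Python/NumPy, written for this record and not taken from the paper) walks through the steps it names: sample points along a camera ray, warp them with a pose-conditioned non-rigid deformation field, aggregate stand-in pixel-aligned features from the source views, predict density and color, alpha-composite, and finally blend the volumetric color with a texture-blended color. Every function, shape, and constant here is an illustrative placeholder, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def deformation_field(points, pose_code):
    # Hypothetical non-rigid warp conditioned on a pose embedding:
    # maps observation-space points toward a canonical space.
    return points + 0.01 * np.tanh(points + pose_code.mean())

def pixel_aligned_features(points, num_views=3, feat_dim=8):
    # Hypothetical aggregation of per-view image features sampled at each
    # point; here random features are averaged across the source views.
    per_view = rng.normal(size=(num_views, len(points), feat_dim))
    return per_view.mean(axis=0)

def radiance_mlp(features):
    # Hypothetical MLP head mapping aggregated features to (density, RGB).
    sigma = np.abs(features[:, 0])              # non-negative density
    rgb = 1.0 / (1.0 + np.exp(-features[:, 1:4]))  # colors in [0, 1]
    return sigma, rgb

def render_ray(origin, direction, pose_code, near=0.5, far=2.0, n=64):
    t = np.linspace(near, far, n)
    pts = origin + t[:, None] * direction          # samples along the ray
    canonical = deformation_field(pts, pose_code)  # warp to canonical space
    sigma, rgb = radiance_mlp(pixel_aligned_features(canonical))
    delta = np.diff(t, append=far)                 # inter-sample distances
    alpha = 1.0 - np.exp(-sigma * delta)           # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)    # volumetric color

# Appearance blending: combine the volumetric color with a color obtained by
# blending warped source-view textures (a random stand-in here); the blend
# weight would be predicted per pixel in a learned module.
volumetric_rgb = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                            pose_code=rng.normal(size=16))
texture_rgb = rng.uniform(size=3)
blend_weight = 0.5
final_rgb = blend_weight * volumetric_rgb + (1.0 - blend_weight) * texture_rgb
print(final_rgb)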
Author Zhang, Jiakai
Yu, Jingyi
Zhang, Yingliang
Lin, Pei
Zhao, Fuqiang
Yang, Wei
Xu, Lan
Author_xml – sequence: 1
  givenname: Fuqiang
  surname: Zhao
  fullname: Zhao, Fuqiang
  organization: ShanghaiTech University
– sequence: 2
  givenname: Wei
  surname: Yang
  fullname: Yang, Wei
  organization: Huazhong University of Science and Technology
– sequence: 3
  givenname: Jiakai
  surname: Zhang
  fullname: Zhang, Jiakai
  organization: ShanghaiTech University
– sequence: 4
  givenname: Pei
  surname: Lin
  fullname: Lin, Pei
  organization: ShanghaiTech University
– sequence: 5
  givenname: Yingliang
  surname: Zhang
  fullname: Zhang, Yingliang
  organization: DGene
– sequence: 6
  givenname: Jingyi
  surname: Yu
  fullname: Yu, Jingyi
  organization: ShanghaiTech University
– sequence: 7
  givenname: Lan
  surname: Xu
  fullname: Xu, Lan
  organization: ShanghaiTech University
CODEN IEEPAD
ContentType Conference Proceeding
DOI 10.1109/CVPR52688.2022.00759
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP) 1998-present
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Applied Sciences
EISBN 1665469463
9781665469463
EISSN 1063-6919
EndPage 7743
ExternalDocumentID 9880104
Genre orig-research
GrantInformation_xml – fundername: National Key Research and Development Program
  grantid: 2018YFB2100500
  funderid: 10.13039/501100012166
– fundername: STCSM
  grantid: 2015F0203-000-06
  funderid: 10.13039/501100003399
ISICitedReferencesCount 78
IngestDate Wed Aug 27 02:15:09 EDT 2025
IsPeerReviewed false
IsScholarly true
Language English
LinkModel DirectLink
PageCount 11
ParticipantIDs ieee_primary_9880104
PublicationCentury 2000
PublicationDate 2022-June
PublicationDateYYYYMMDD 2022-06-01
PublicationDate_xml – month: 06
  year: 2022
  text: 2022-June
PublicationDecade 2020
PublicationTitle Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online)
PublicationTitleAbbrev CVPR
PublicationYear 2022
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 7733
SubjectTerms Cameras
Computer vision
Dynamics
Entertainment industry
Image and video synthesis and generation; 3D from multi-view and sensors; Face and gestures; Motion and tracking; Pose estimation and tracking
Rendering (computer graphics)
Telepresence
Training
Title HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs
URI https://ieeexplore.ieee.org/document/9880104