HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs

Bibliographic Details
Published in: Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online), pp. 7733 - 7743
Main Authors: Zhao, Fuqiang; Yang, Wei; Zhang, Jiakai; Lin, Pei; Zhang, Yingliang; Yu, Jingyi; Xu, Lan
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2022
Subjects:
ISSN: 1063-6919
Online Access: Get full text
Abstract Recent neural human representations can produce high-quality multi-view rendering but require dense multi-view inputs and costly training. They are hence largely limited to static models, as training each frame is infeasible. We present HumanNeRF - a neural representation with efficient generalization ability - for high-fidelity free-view synthesis of dynamic humans. Analogous to how IBRNet assists NeRF by avoiding per-scene training, HumanNeRF employs an aggregated pixel-alignment feature across multi-view inputs along with a pose-embedded non-rigid deformation field for tackling dynamic motions. The raw HumanNeRF can already produce reasonable rendering on sparse video inputs of unseen subjects and camera settings. To further improve the rendering quality, we augment our solution with in-hour scene-specific fine-tuning and an appearance blending module that combines the benefits of both neural volumetric rendering and neural texture blending. Extensive experiments on various multi-view dynamic human datasets demonstrate the effectiveness of our approach in synthesizing photo-realistic free-view humans under challenging motions and with very sparse camera view inputs.
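For readers skimming the abstract, the core generalization idea (gathering pixel-aligned image features from the sparse input views for each 3D query point and pooling them into a descriptor that does not depend on the number of views, as in IBRNet-style methods) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation; all function names, shapes, and camera conventions below are hypothetical.

# Illustrative sketch only (not the authors' code): for one 3D query point,
# gather a pixel-aligned feature from each input view and pool across views.
import numpy as np

def project(point_xyz, K, w2c):
    # World-space point -> pixel coordinates under a pinhole camera model.
    p_cam = w2c[:3, :3] @ point_xyz + w2c[:3, 3]
    uv = K @ p_cam
    return uv[:2] / uv[2]

def pixel_aligned_feature(feat_map, uv):
    # Nearest-neighbour lookup into an (H, W, C) feature map (bilinear in practice).
    h, w, _ = feat_map.shape
    u = int(np.clip(round(uv[0]), 0, w - 1))
    v = int(np.clip(round(uv[1]), 0, h - 1))
    return feat_map[v, u]

def aggregate_views(point_xyz, feat_maps, Ks, w2cs):
    # Stack per-view features, then mean/variance pooling so the result is
    # invariant to how many (sparse) views are available.
    feats = np.stack([
        pixel_aligned_feature(f, project(point_xyz, K, w2c))
        for f, K, w2c in zip(feat_maps, Ks, w2cs)
    ])
    return np.concatenate([feats.mean(axis=0), feats.var(axis=0)])

# Toy usage: 3 views, 8-channel 64x64 feature maps, identity world-to-camera poses.
rng = np.random.default_rng(0)
feat_maps = rng.standard_normal((3, 64, 64, 8))
Ks = [np.array([[60.0, 0.0, 32.0], [0.0, 60.0, 32.0], [0.0, 0.0, 1.0]])] * 3
w2cs = [np.eye(4)] * 3
print(aggregate_views(np.array([0.1, -0.2, 2.0]), feat_maps, Ks, w2cs).shape)  # (16,)

In the paper's full pipeline this cross-view aggregation is further combined with a pose-embedded non-rigid deformation field and an appearance blending module; the sketch covers only the aggregation step described in the abstract.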
Author Zhang, Jiakai
Yu, Jingyi
Zhang, Yingliang
Lin, Pei
Zhao, Fuqiang
Yang, Wei
Xu, Lan
Author_xml – sequence: 1
  givenname: Fuqiang
  surname: Zhao
  fullname: Zhao, Fuqiang
  organization: ShanghaiTech University
– sequence: 2
  givenname: Wei
  surname: Yang
  fullname: Yang, Wei
  organization: Huazhong University of Science and Technology
– sequence: 3
  givenname: Jiakai
  surname: Zhang
  fullname: Zhang, Jiakai
  organization: ShanghaiTech University
– sequence: 4
  givenname: Pei
  surname: Lin
  fullname: Lin, Pei
  organization: ShanghaiTech University
– sequence: 5
  givenname: Yingliang
  surname: Zhang
  fullname: Zhang, Yingliang
  organization: DGene
– sequence: 6
  givenname: Jingyi
  surname: Yu
  fullname: Yu, Jingyi
  organization: ShanghaiTech University
– sequence: 7
  givenname: Lan
  surname: Xu
  fullname: Xu, Lan
  organization: ShanghaiTech University
CODEN IEEPAD
ContentType Conference Proceeding
DOI 10.1109/CVPR52688.2022.00759
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP) 1998-present
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Applied Sciences
EISBN 1665469463
9781665469463
EISSN 1063-6919
EndPage 7743
ExternalDocumentID 9880104
Genre orig-research
GrantInformation_xml – fundername: National Key Research and Development Program
  grantid: 2018YFB2100500
  funderid: 10.13039/501100012166
– fundername: STCSM
  grantid: 2015F0203-000-06
  funderid: 10.13039/501100003399
ISICitedReferencesCount 78
IsPeerReviewed false
IsScholarly true
Language English
PageCount 11
ParticipantIDs ieee_primary_9880104
PublicationCentury 2000
PublicationDate 2022-June
PublicationDateYYYYMMDD 2022-06-01
PublicationDate_xml – month: 06
  year: 2022
  text: 2022-June
PublicationDecade 2020
PublicationTitle Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Online)
PublicationTitleAbbrev CVPR
PublicationYear 2022
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 7733
SubjectTerms Cameras
Computer vision
Dynamics
Entertainment industry
Image and video synthesis and generation; 3D from multi-view and sensors; Face and gestures; Motion and tracking; Pose estimation and tracking
Rendering (computer graphics)
Telepresence
Training
Title HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs
URI https://ieeexplore.ieee.org/document/9880104