Part-Aware Shape Generation With Latent 3D Diffusion of Neural Voxel Fields

Bibliographic Details
Published in: IEEE Transactions on Visualization and Computer Graphics, Vol. 31, No. 10, pp. 8057-8069
Main Authors: Huang, Yuhang; Zou, Shilong; Liu, Xinwang; Xu, Kai
Format: Journal Article
Language: English
Published: United States: IEEE, 01.10.2025
Subjects:
ISSN: 1077-2626 (print), 1941-0506 (electronic)
Abstract This article introduces a novel latent 3D diffusion model for generating neural voxel fields with precise part-aware structures and high-quality textures. In comparison to existing methods, this approach incorporates two key designs to guarantee high-quality and accurate part-aware generation. On one hand, we introduce a latent 3D diffusion process for neural voxel fields, incorporating part-aware information into the diffusion process and allowing generation at significantly higher resolutions to capture rich textural and geometric details accurately. On the other hand, a part-aware shape decoder is introduced to integrate the part codes into the neural voxel fields, guiding accurate part decomposition and producing high-quality rendering results. Importantly, part-aware learning establishes structural relationships to generate texture information for similar regions, thereby facilitating high-quality rendering results. We evaluate our approach across eight different data classes through extensive experimentation and comparisons with state-of-the-art methods. The results demonstrate that our proposed method has superior generative capabilities in part-aware shape generation, outperforming existing state-of-the-art methods. Moreover, we have conducted image- and text-guided shape generation via the conditioned diffusion process, showcasing the advanced potential in multi-modal guided shape generation.
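Note: As a rough illustration of the latent diffusion process described in the abstract, the following minimal Python/PyTorch sketch shows how a latent voxel grid might be sampled from noise by a part-code-conditioned denoiser and then mapped to a voxel field by a part-aware decoder. Every name, tensor shape, and noise schedule below is an assumption made for illustration only; none of it is taken from the paper's implementation.

import torch

# Assumed linear DDPM-style noise schedule (not from the paper).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def p_sample(denoiser, z_t, t, part_codes):
    # One reverse (denoising) step for a latent voxel grid z_t of shape (B, C, D, H, W).
    # `denoiser` is a hypothetical noise-prediction network conditioned on part codes.
    eps_hat = denoiser(z_t, t, part_codes)
    beta_t, alpha_t, abar_t = betas[t], alphas[t], alpha_bars[t]
    mean = (z_t - beta_t / torch.sqrt(1.0 - abar_t) * eps_hat) / torch.sqrt(alpha_t)
    if t == 0:
        return mean
    return mean + torch.sqrt(beta_t) * torch.randn_like(z_t)

@torch.no_grad()
def sample(denoiser, decoder, part_codes, shape=(1, 8, 32, 32, 32)):
    # Draw a latent voxel field from Gaussian noise, then decode it with a
    # hypothetical part-aware decoder that consumes the latent plus part codes.
    z = torch.randn(shape)
    for t in reversed(range(T)):
        z = p_sample(denoiser, z, t, part_codes)
    return decoder(z, part_codes)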
Author Zou, Shilong
Huang, Yuhang
Xu, Kai
Liu, Xinwang
Author_xml – sequence: 1
  givenname: Yuhang
  orcidid: 0000-0001-9350-7713
  surname: Huang
  fullname: Huang, Yuhang
  organization: School of Computer, National University of Defense Technology, Changsha, China
– sequence: 2
  givenname: Shilong
  orcidid: 0009-0004-1124-3830
  surname: Zou
  fullname: Zou, Shilong
  organization: School of Computer, National University of Defense Technology, Changsha, China
– sequence: 3
  givenname: Xinwang
  orcidid: 0000-0001-9066-1475
  surname: Liu
  fullname: Liu, Xinwang
  organization: School of Computer, National University of Defense Technology, Changsha, China
– sequence: 4
  givenname: Kai
  orcidid: 0000-0002-9054-0216
  surname: Xu
  fullname: Xu, Kai
  email: kevin.kai.xu@gmail.com
  organization: School of Computer, National University of Defense Technology, Changsha, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/40261778 (View this record in MEDLINE/PubMed)
CODEN ITVGEA
ContentType Journal Article
DBID 97E
RIA
RIE
AAYXX
CITATION
NPM
7X8
DOI 10.1109/TVCG.2025.3562871
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
MEDLINE - Academic
DatabaseTitleList PubMed, MEDLINE - Academic
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
– sequence: 3
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1941-0506
EndPage 8069
ExternalDocumentID 40261778
10_1109_TVCG_2025_3562871
10973321
Genre orig-research
Journal Article
GrantInformation_xml – fundername: Major Program of Xiangjiang Laboratory
  grantid: 23XJ01009
– fundername: Young Elite Scientists Sponsorship Program by CAST
  grantid: 2023QNRC001
– fundername: NUDT Research
  grantid: ZK22-52
– fundername: Natural Science Foundation of Hainan Province; Natural Science Foundation of Hunan Province of China
  grantid: 2022RC1104
  funderid: 10.13039/501100004761
– fundername: National Natural Science Foundation of China; NSFC
  grantid: 62325211; 62132021; 62372457
  funderid: 10.13039/501100001809
GroupedDBID ---
-~X
.DC
0R~
29I
4.4
53G
5GY
5VS
6IK
97E
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABQJQ
ABVLG
ACGFO
ACIWK
AENEX
AETIX
AGQYO
AGSQL
AHBIQ
AI.
AIBXA
AKJIK
AKQYR
ALLEH
ALMA_UNASSIGNED_HOLDINGS
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
CS3
DU5
EBS
EJD
F5P
HZ~
H~9
IEDLZ
IFIPE
IFJZH
IPLJI
JAVBF
LAI
M43
O9-
OCL
P2P
PQQKQ
RIA
RIE
RNI
RNS
RZB
TN5
VH1
AAYXX
CITATION
NPM
7X8
IEDL.DBID RIE
ISSN 1077-2626
1941-0506
IsPeerReviewed true
IsScholarly true
Issue 10
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ORCID 0000-0002-9054-0216
0000-0001-9350-7713
0000-0001-9066-1475
0009-0004-1124-3830
PMID 40261778
PQID 3193711052
PQPubID 23479
PageCount 13
ParticipantIDs ieee_primary_10973321
crossref_primary_10_1109_TVCG_2025_3562871
proquest_miscellaneous_3193711052
pubmed_primary_40261778
PublicationCentury 2000
PublicationDate 2025-10-01
PublicationDateYYYYMMDD 2025-10-01
PublicationDate_xml – month: 10
  year: 2025
  text: 2025-10-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
PublicationTitle IEEE transactions on visualization and computer graphics
PublicationTitleAbbrev TVCG
PublicationTitleAlternate IEEE Trans Vis Comput Graph
PublicationYear 2025
Publisher IEEE
Publisher_xml – name: IEEE
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Publisher
StartPage 8057
SubjectTerms 3D diffusion models
Codes
Decoding
Diffusion models
Diffusion processes
Image color analysis
Noise reduction
part-aware generation
Rendering (computer graphics)
Shape
shape generation
Three-dimensional displays
Training
Title Part-Aware Shape Generation With Latent 3D Diffusion of Neural Voxel Fields
URI https://ieeexplore.ieee.org/document/10973321
https://www.ncbi.nlm.nih.gov/pubmed/40261778
https://www.proquest.com/docview/3193711052
Volume 31
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 1941-0506
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014489
  issn: 1077-2626
  databaseCode: RIE
  dateStart: 19950101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE