Laser: Efficient Language-Guided Segmentation in Neural Radiance Fields


Detailed bibliography
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 47, Issue 5, pp. 3922-3934
Main authors: Miao, Xingyu; Duan, Haoran; Bai, Yang; Shah, Tejal; Song, Jun; Long, Yang; Ranjan, Rajiv; Shao, Ling
Format: Journal Article
Language: English
Publication details: United States: IEEE, 1 May 2025
Subject:
ISSN: 0162-8828, 1939-3539, 2160-9292
Online access: Get full text
Abstract In this work, we propose a method that leverages CLIP feature distillation, achieving efficient 3D segmentation through language guidance. Unlike previous methods that rely on multi-scale CLIP features and are limited by processing speed and storage requirements, our approach aims to streamline the workflow by directly and effectively distilling dense CLIP features, thereby achieving precise segmentation of 3D scenes using text. To achieve this, we introduce an adapter module and mitigate the noise issue in the dense CLIP feature distillation process through a self-cross-training strategy. Moreover, to enhance the accuracy of segmentation edges, this work presents a low-rank transient query attention mechanism. To ensure the consistency of segmentation for similar colors under different viewpoints, we convert the segmentation task into a classification task through label volume, which significantly improves the consistency of segmentation in color-similar areas. We also propose a simplified text augmentation strategy to alleviate the issue of ambiguity in the correspondence between CLIP features and text. Extensive experimental results show that our method surpasses current state-of-the-art technologies in both training speed and performance. Our code is available at: https://github.com/xingy038/Laser.git.
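The abstract above describes distilling dense CLIP features directly into a feature field that is volume-rendered alongside the radiance field. As a minimal sketch only, and not the authors' implementation, the PyTorch snippet below distills volume-rendered per-ray features toward dense CLIP targets with a cosine-similarity objective; FeatureField, the layer sizes, and the random stand-in tensors are all hypothetical placeholders.

import torch
import torch.nn.functional as F

class FeatureField(torch.nn.Module):
    # Toy MLP mapping 3D sample positions to CLIP-dimensional features.
    def __init__(self, clip_dim: int = 512, hidden: int = 128):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, clip_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.mlp(xyz)

def render_features(field, xyz, weights):
    # xyz: (R, S, 3) samples along R rays; weights: (R, S) compositing weights.
    feats = field(xyz)                                  # (R, S, D)
    return (weights.unsqueeze(-1) * feats).sum(dim=1)   # (R, D)

def distillation_loss(rendered, target_clip):
    # Cosine-similarity distillation toward dense CLIP features of the same pixels.
    return (1.0 - F.cosine_similarity(rendered, target_clip, dim=-1)).mean()

# Random stand-ins for ray samples, compositing weights, and dense CLIP targets.
field = FeatureField()
xyz = torch.randn(1024, 32, 3)
weights = torch.softmax(torch.randn(1024, 32), dim=-1)
target = F.normalize(torch.randn(1024, 512), dim=-1)
distillation_loss(render_features(field, xyz, weights), target).backward()

In the paper's setting the CLIP targets would come from a dense CLIP feature map passed through the proposed adapter module and cleaned by the self-cross-training strategy; here they are random placeholders.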
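For the low-rank transient query attention mechanism that the abstract credits with sharper segmentation edges, the block below is only a rough guess at the general shape of such a module: a small set of query tokens is factored through a rank-r bottleneck before attending over per-pixel features. All names and dimensions are illustrative assumptions, not the paper's definition.

import math
import torch
import torch.nn.functional as F

class LowRankQueryAttention(torch.nn.Module):
    # Illustrative block: queries pass through a low-rank bottleneck, then
    # standard multi-head attention is computed over the feature tokens.
    def __init__(self, dim: int = 512, rank: int = 16, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.q_down = torch.nn.Linear(dim, rank, bias=False)  # rank-r factor
        self.q_up = torch.nn.Linear(rank, dim, bias=False)
        self.kv = torch.nn.Linear(dim, 2 * dim)
        self.out = torch.nn.Linear(dim, dim)

    def forward(self, queries, feats):
        # queries: (B, Q, D) transient query tokens; feats: (B, N, D) features.
        B, Q, D = queries.shape
        h, d = self.heads, D // self.heads
        q = self.q_up(self.q_down(queries)).view(B, Q, h, d).transpose(1, 2)
        k, v = self.kv(feats).chunk(2, dim=-1)
        k = k.view(B, -1, h, d).transpose(1, 2)
        v = v.view(B, -1, h, d).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(d), dim=-1)
        return self.out((attn @ v).transpose(1, 2).reshape(B, Q, D))

# Example: 8 query tokens attending over 4096 pixel features per image.
block = LowRankQueryAttention()
out = block(torch.randn(2, 8, 512), torch.randn(2, 4096, 512))  # (2, 8, 512)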
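The abstract also converts segmentation into a classification task via a label volume to keep color-similar regions consistent across viewpoints. A hedged sketch of that idea follows, assuming the per-ray pseudo-labels are looked up from such a volume and that classes are defined by CLIP text embeddings; every tensor below is a random placeholder.

import torch
import torch.nn.functional as F

def text_logits(rendered_feats, text_embeds, temperature: float = 0.07):
    # Cosine-similarity logits between per-ray features (R, D) and
    # per-class text embeddings (C, D), scaled by a temperature.
    f = F.normalize(rendered_feats, dim=-1)
    t = F.normalize(text_embeds, dim=-1)
    return (f @ t.T) / temperature  # (R, C)

# Hypothetical setup: 1024 rays, 512-D CLIP space, 6 text prompts/classes.
rendered = torch.randn(1024, 512, requires_grad=True)
texts = torch.randn(6, 512)                   # would come from a CLIP text encoder
pseudo_labels = torch.randint(0, 6, (1024,))  # looked up from the label volume
loss = F.cross_entropy(text_logits(rendered, texts), pseudo_labels)
loss.backward()

Supervising with cross-entropy over discrete labels, rather than regressing continuous features per ray, is what the abstract credits for the improved multi-view consistency in color-similar areas.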
Author Shao, Ling
Shah, Tejal
Song, Jun
Miao, Xingyu
Bai, Yang
Duan, Haoran
Long, Yang
Ranjan, Rajiv
Author_xml – sequence: 1
  givenname: Xingyu
  orcidid: 0000-0003-1203-8279
  surname: Miao
  fullname: Miao, Xingyu
  email: xingyu.miao@durham.ac.uk
  organization: Department of Computer Science, Durham University, Durham, U.K
– sequence: 2
  givenname: Haoran
  orcidid: 0000-0001-9956-7020
  surname: Duan
  fullname: Duan, Haoran
  email: haoran.duan@ieee.org
  organization: School of Computing, Newcastle University, Newcastle upon Tyne, U.K
– sequence: 3
  givenname: Yang
  surname: Bai
  fullname: Bai, Yang
  email: bai_yang@ihpc.a-star.edu.sg
  organization: Institute of High Performance Computing (IHPC), A*STAR, Singapore
– sequence: 4
  givenname: Tejal
  surname: Shah
  fullname: Shah, Tejal
  email: tejal.shah@newcastle.ac.uk
  organization: School of Computing, Newcastle University, Newcastle upon Tyne, U.K
– sequence: 5
  givenname: Jun
  orcidid: 0000-0003-3820-7632
  surname: Song
  fullname: Song, Jun
  email: songjun@cug.edu.cn
  organization: School of Computer Science, China University of Geosciences, Wuhan, China
– sequence: 6
  givenname: Yang
  orcidid: 0000-0002-2445-6112
  surname: Long
  fullname: Long, Yang
  email: yang.long@ieee.org
  organization: Department of Computer Science, Durham University, Durham, U.K
– sequence: 7
  givenname: Rajiv
  orcidid: 0000-0002-6610-1328
  surname: Ranjan
  fullname: Ranjan, Rajiv
  email: rranjans@gmail.com
  organization: School of Computing, Newcastle University, Newcastle upon Tyne, U.K
– sequence: 8
  givenname: Ling
  orcidid: 0000-0002-8264-6117
  surname: Shao
  fullname: Shao, Ling
  email: ling.shao@ieee.org
  organization: UCAS-Terminus AI Lab, University of Chinese Academy of Sciences, Beijing, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/40031329 (View this record in MEDLINE/PubMed)
CODEN ITPIDJ
Cites_doi 10.1109/CVPR52733.2024.01895
10.1109/ICCV48922.2021.00570
10.1109/CVPR52729.2023.00682
10.1109/3DV57658.2022.00056
10.1007/978-3-031-19824-3_20
10.1109/CVPR.2019.00845
10.1145/237170.237199
10.1109/CVPR46437.2021.01204
10.1109/ICCV51070.2023.01807
10.1007/978-3-031-19824-3_7
10.1109/CVPR52688.2022.00538
10.1109/CVPR52688.2022.00542
10.1145/237170.237200
10.1109/3DV57658.2022.00042
10.1109/CVPR52733.2024.02048
10.1109/CVPR46437.2021.00455
10.1145/237170.237191
10.1109/CVPR46437.2021.00930
10.1109/ICCVW60793.2023.00105
10.1145/3528223.3530127
10.1007/978-3-031-72664-4_18
10.1109/CVPR52729.2023.00873
10.1109/CVPR52733.2024.00510
10.1145/3272127.3275084
10.1109/CVPR52688.2022.00536
10.1109/CVPR52729.2023.00289
10.1145/2674559
10.1109/CVPR52688.2022.01571
10.1145/3130800.3130855
10.1145/344779.344932
10.1145/3503250
10.1109/ICCV48922.2021.00951
10.1109/JSTSP.2017.2747126
10.1109/CVPR52688.2022.01760
10.1145/2601097.2601195
10.1109/ICCV51070.2023.00371
10.1109/CVPR52733.2024.00394
10.1007/978-3-030-58529-7_37
10.1109/ICCV48922.2021.01554
10.1145/383259.383309
10.1145/2980179.2982420
10.1007/978-3-031-20059-5_31
10.1145/2980179.2980251
ContentType Journal Article
DBID 97E
RIA
RIE
AAYXX
CITATION
NPM
7X8
DOI 10.1109/TPAMI.2025.3535916
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
MEDLINE - Academic
DatabaseTitleList MEDLINE - Academic
PubMed

Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
– sequence: 3
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 3934
ExternalDocumentID 40031329
10_1109_TPAMI_2025_3535916
10857592
Genre orig-research
Journal Article
GrantInformation_xml – fundername: International Exchanges 2022
  grantid: IEC\NSFC\223523
– fundername: U.K. Medical Research Council
  grantid: MR/S003916/2
GroupedDBID ---
-DZ
-~X
.DC
0R~
29I
4.4
53G
5GY
5VS
6IK
97E
9M8
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABFSI
ABQJQ
ABVLG
ACGFO
ACGFS
ACIWK
ACNCT
ADRHT
AENEX
AETEA
AETIX
AGQYO
AGSQL
AHBIQ
AI.
AIBXA
AKJIK
AKQYR
ALLEH
ALMA_UNASSIGNED_HOLDINGS
ASUFR
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
CS3
DU5
E.L
EBS
EJD
F5P
FA8
HZ~
H~9
IBMZZ
ICLAB
IEDLZ
IFIPE
IFJZH
IPLJI
JAVBF
LAI
M43
MS~
O9-
OCL
P2P
PQQKQ
RIA
RIE
RNI
RNS
RXW
RZB
TAE
TN5
UHB
VH1
XJT
~02
AAYXX
CITATION
NPM
RIG
7X8
ID FETCH-LOGICAL-c391t-f1e00dd0d12d4042cef0085b55ed0d4f803f31bbae1a0c64f75e00e79abd22163
IEDL.DBID RIE
ISICitedReferencesCount 1
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=001465416300041&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 0162-8828
1939-3539
IngestDate Thu Oct 02 05:54:08 EDT 2025
Mon Jul 21 05:20:23 EDT 2025
Sat Nov 29 08:01:37 EST 2025
Wed Aug 27 02:04:40 EDT 2025
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 5
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c391t-f1e00dd0d12d4042cef0085b55ed0d4f803f31bbae1a0c64f75e00e79abd22163
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 23
ORCID 0000-0002-6610-1328
0000-0003-3820-7632
0000-0002-2445-6112
0000-0002-8264-6117
0000-0001-9956-7020
0000-0003-1203-8279
OpenAccessLink https://doi.org/10.1109/TPAMI.2025.3535916
PMID 40031329
PQID 3173404062
PQPubID 23479
PageCount 13
ParticipantIDs proquest_miscellaneous_3173404062
pubmed_primary_40031329
crossref_primary_10_1109_TPAMI_2025_3535916
ieee_primary_10857592
PublicationCentury 2000
PublicationDate 2025-05-01
PublicationDateYYYYMMDD 2025-05-01
PublicationDate_xml – month: 05
  year: 2025
  text: 2025-05-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2025
Publisher IEEE
Publisher_xml – name: IEEE
References Ding (ref13)
ref57
ref12
ref56
ref15
ref59
ref14
Kobayashi (ref7)
ref53
ref52
ref55
ref54
ref17
ref16
Devlin (ref11) 2018
ref19
ref18
ref51
ref50
ref46
ref47
ref42
ref44
ref49
ref3
Bucher (ref41); 32
ref5
ref40
ref35
Liu (ref6)
ref34
Bhalgat (ref4)
ref37
ref36
ref31
ref33
Li (ref9) 2023
ref32
Kerbl (ref2) 2023; 42
Liu (ref30)
Flynn (ref23)
ref1
ref39
ref38
Mikolov (ref43) 2013
Shen (ref48) 2023
Li (ref10)
ref24
ref26
ref25
ref20
ref22
ref21
ref28
ref27
ref29
Li (ref45) 2022
Radford (ref8)
Straub (ref58) 2019
References_xml – ident: ref53
  doi: 10.1109/CVPR52733.2024.01895
– ident: ref39
  doi: 10.1109/ICCV48922.2021.00570
– ident: ref57
  doi: 10.1109/CVPR52729.2023.00682
– ident: ref51
  doi: 10.1109/3DV57658.2022.00056
– start-page: 15651
  volume-title: Proc. Int. Conf. Neural Inf. Process. Syst.
  ident: ref30
  article-title: Neural sparse voxel fields
– ident: ref36
  doi: 10.1007/978-3-031-19824-3_20
– year: 2019
  ident: ref58
  article-title: The replica dataset: A digital replica of indoor spaces
– volume: 42
  issue: 4
  volume-title: ACM Trans. Graph.
  year: 2023
  ident: ref2
  article-title: 3D Gaussian splatting for real-time radiance field rendering
– ident: ref42
  doi: 10.1109/CVPR.2019.00845
– ident: ref19
  doi: 10.1145/237170.237199
– ident: ref29
  doi: 10.1109/CVPR46437.2021.01204
– year: 2023
  ident: ref9
  article-title: BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models
– ident: ref5
  doi: 10.1109/ICCV51070.2023.01807
– ident: ref33
  doi: 10.1007/978-3-031-19824-3_7
– volume-title: Proc. 37th Conf. Neural Inf. Process. Syst.
  ident: ref6
  article-title: Weakly supervised 3D open-vocabulary segmentation
– ident: ref38
  doi: 10.1109/CVPR52688.2022.00538
– ident: ref40
  doi: 10.1109/CVPR52688.2022.00542
– start-page: 8748
  volume-title: Proc. Int. Conf. Mach. Learn.
  ident: ref8
  article-title: Learning transferable visual models from natural language supervision
– ident: ref18
  doi: 10.1145/237170.237200
– ident: ref47
  doi: 10.1109/3DV57658.2022.00042
– ident: ref55
  doi: 10.1109/CVPR52733.2024.02048
– ident: ref35
  doi: 10.1109/CVPR46437.2021.00455
– ident: ref22
  doi: 10.1145/237170.237191
– ident: ref32
  doi: 10.1109/CVPR46437.2021.00930
– start-page: 8090
  volume-title: Proc. Int. Conf. Mach. Learn.
  ident: ref13
  article-title: Open-vocabulary universal image segmentation with MaskCLIP
– ident: ref59
  doi: 10.1109/ICCVW60793.2023.00105
– ident: ref37
  doi: 10.1145/3528223.3530127
– ident: ref15
  doi: 10.1007/978-3-031-72664-4_18
– start-page: 12888
  volume-title: Proc. Int. Conf. Mach. Learn.
  ident: ref10
  article-title: BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation
– start-page: 5515
  volume-title: Proc. IEEE Conf. Comput. Vis. Pattern Recognit.
  ident: ref23
  article-title: DeepStereo: Learning to predict new views from the world’s imagery
– ident: ref3
  doi: 10.1109/CVPR52729.2023.00873
– ident: ref54
  doi: 10.1109/CVPR52733.2024.00510
– ident: ref24
  doi: 10.1145/3272127.3275084
– ident: ref34
  doi: 10.1109/CVPR52688.2022.00536
– ident: ref49
  doi: 10.1109/CVPR52729.2023.00289
– ident: ref56
  doi: 10.1145/2674559
– ident: ref31
  doi: 10.1109/CVPR52688.2022.01571
– ident: ref27
  doi: 10.1145/3130800.3130855
– start-page: 23311
  volume-title: Proc. Int. Conf. Neural Inf. Process. Syst.
  ident: ref7
  article-title: Decomposing NERF for editing via feature field distillation
– ident: ref16
  doi: 10.1145/344779.344932
– ident: ref1
  doi: 10.1145/3503250
– ident: ref12
  doi: 10.1109/ICCV48922.2021.00951
– ident: ref20
  doi: 10.1109/JSTSP.2017.2747126
– year: 2023
  ident: ref48
  article-title: Anything-3D: Towards single-view anything reconstruction in the wild
– volume: 32
  volume-title: Proc. Adv. Neural Inf. Process. Syst.
  ident: ref41
  article-title: Zero-shot semantic segmentation
– year: 2022
  ident: ref45
  article-title: Language-driven semantic segmentation
– year: 2018
  ident: ref11
  article-title: BERT: Pre-training of deep bidirectional transformers for language understanding
– ident: ref44
  doi: 10.1109/CVPR52688.2022.01760
– ident: ref26
  doi: 10.1145/2601097.2601195
– ident: ref52
  doi: 10.1109/ICCV51070.2023.00371
– ident: ref14
  doi: 10.1109/CVPR52733.2024.00394
– ident: ref28
  doi: 10.1007/978-3-030-58529-7_37
– ident: ref50
  doi: 10.1109/ICCV48922.2021.01554
– year: 2013
  ident: ref43
  article-title: Efficient estimation of word representations in vector space
– ident: ref21
  doi: 10.1145/383259.383309
– ident: ref25
  doi: 10.1145/2980179.2982420
– ident: ref46
  doi: 10.1007/978-3-031-20059-5_31
– ident: ref17
  doi: 10.1145/2980179.2980251
– volume-title: Proc. 37th Conf. Neural Inf. Process. Syst.
  ident: ref4
  article-title: Contrastive lift: 3D object instance segmentation by slow-fast contrastive fusion
SSID ssj0014503
Score 2.5100396
Snippet In this work, we propose a method that leverages CLIP feature distillation, achieving efficient 3D segmentation through language guidance. Unlike previous...
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Publisher
StartPage 3922
SubjectTerms 3D segmentation
Accuracy
CLIP
Feature extraction
Image segmentation
NeRF
Neural radiance field
Rendering (computer graphics)
Semantics
Solid modeling
Three-dimensional displays
Training
Visualization
Title Laser: Efficient Language-Guided Segmentation in Neural Radiance Fields
URI https://ieeexplore.ieee.org/document/10857592
https://www.ncbi.nlm.nih.gov/pubmed/40031329
https://www.proquest.com/docview/3173404062
Volume 47
WOSCitedRecordID wos001465416300041&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 2160-9292
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014503
  issn: 0162-8828
  databaseCode: RIE
  dateStart: 19790101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE