Pixel-Perfect Structure-From-Motion With Featuremetric Refinement

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 47, no. 5, pp. 3298-3309
Main Authors: Sarlin, Paul-Edouard, Lindenberger, Philipp, Larsson, Viktor, Pollefeys, Marc
Format: Journal Article
Language: English
Published: United States IEEE 01.05.2025
Subjects:
ISSN: 0162-8828, 1939-3539, 2160-9292
Abstract Finding local features that are repeatable across multiple views is a cornerstone of sparse 3D reconstruction. The classical image matching paradigm detects keypoints per-image once and for all, which can yield poorly-localized features and propagate large errors to the final geometry. In this article, we refine two key steps of structure-from-motion by a direct alignment of low-level image information from multiple views: we first adjust the initial keypoint locations prior to any geometric estimation, and subsequently refine points and camera poses as a post-processing. This refinement is robust to large detection noise and appearance changes, as it optimizes a featuremetric error based on dense features predicted by a neural network. This significantly improves the accuracy of camera poses and scene geometry for a wide range of keypoint detectors, challenging viewing conditions, and off-the-shelf deep features. Our system easily scales to large image collections, enabling pixel-perfect crowd-sourced localization at scale.
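The following is a minimal illustrative sketch, not the authors' released code, of the featuremetric error described in the abstract: a dense feature map predicted by a neural network is sampled at a point's reprojection and compared against a reference feature for its track, with a robust cost down-weighting outlying observations. The helper names (bilinear_sample, project, featuremetric_residual, cauchy_weight) and the choice of a Cauchy loss are assumptions made for this example.

# Illustrative sketch (assumed names; not the paper's implementation) of a
# featuremetric residual: the dense feature sampled at a point's reprojection
# should match a reference feature for its track.
import numpy as np

def bilinear_sample(fmap: np.ndarray, xy: np.ndarray) -> np.ndarray:
    """Bilinearly interpolate a C x H x W feature map at pixel location (x, y)."""
    c, h, w = fmap.shape
    x, y = xy
    x0 = int(np.clip(np.floor(x), 0, w - 2))
    y0 = int(np.clip(np.floor(y), 0, h - 2))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * fmap[:, y0, x0]
            + dx * (1 - dy) * fmap[:, y0, x0 + 1]
            + (1 - dx) * dy * fmap[:, y0 + 1, x0]
            + dx * dy * fmap[:, y0 + 1, x0 + 1])

def project(X: np.ndarray, R: np.ndarray, t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Pinhole projection of a 3D point X into pixel coordinates for pose (R, t) and intrinsics K."""
    Xc = R @ X + t
    uvw = K @ Xc
    return uvw[:2] / uvw[2]

def featuremetric_residual(X, R, t, K, fmap, f_ref):
    """Difference between the feature sampled at the reprojection and the track's reference feature."""
    return bilinear_sample(fmap, project(X, R, t, K)) - f_ref

def cauchy_weight(r: np.ndarray, scale: float = 0.25) -> float:
    """IRLS weight of a Cauchy robust loss, down-weighting outlier observations."""
    return 1.0 / (1.0 + float(np.dot(r, r)) / scale**2)

Summing such residuals over all observations of every track and jointly optimizing the 3D points and camera poses (e.g., with a Levenberg-Marquardt solver) corresponds to the featuremetric bundle adjustment step; running the same dense-feature alignment on the 2D keypoints alone, before any geometric estimation, corresponds to the keypoint adjustment step described in the abstract.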
Author_xml – sequence: 1
  givenname: Paul-Edouard
  orcidid: 0000-0001-6230-266X
  surname: Sarlin
  fullname: Sarlin, Paul-Edouard
  email: psarlin@inf.ethz.ch
  organization: Department of Computer Science, ETH Zurich, Zürich, Switzerland
– sequence: 2
  givenname: Philipp
  orcidid: 0000-0003-4112-6542
  surname: Lindenberger
  fullname: Lindenberger, Philipp
  email: plindenbe@ethz.ch
  organization: Department of Mathematics, ETH Zurich, Zürich, Switzerland
– sequence: 3
  givenname: Viktor
  orcidid: 0000-0001-8427-7215
  surname: Larsson
  fullname: Larsson, Viktor
  email: viktor.larsson@math.lth.se
  organization: Lund University, Lund, Sweden
– sequence: 4
  givenname: Marc
  surname: Pollefeys
  fullname: Pollefeys, Marc
  email: marc.pollefeys@inf.ethz.ch
  organization: Department of Computer Science, ETH Zurich, Zürich, Switzerland
BackLink https://www.ncbi.nlm.nih.gov/pubmed/37021895 (View this record in MEDLINE/PubMed)
CODEN ITPIDJ
CitedBy_id crossref_primary_10_3390_s24206595
ContentType Journal Article
CorporateAuthor Computer Vision and Machine Learning
Lunds universitets profilområden
Naturvetenskapliga fakulteten
Faculty of Engineering, LTH
Lunds Tekniska Högskola
LU Profile Area: Natural and Artificial Cognition
LTH Profile areas
ELLIIT: the Linköping-Lund initiative on IT and mobile communication
Strategiska forskningsområden (SFO)
Matematikcentrum
LTH profilområde: AI och digitalisering
Matematik LTH
Research groups at the Centre for Mathematical Sciences
Lunds universitet
LTH profilområden
Profile areas and other strong research environments
LU profilområde: Naturlig och artificiell kognition
Faculty of Science
Lund University
Lund University Profile areas
Datorseende och maskininlärning
LTH Profile Area: AI and Digitalization
Centre for Mathematical Sciences
Forskargrupper vid Matematikcentrum
Mathematical Imaging Group
Strategic research areas (SRA)
Mathematics (Faculty of Engineering)
Profilområden och andra starka forskningsmiljöer
DOI 10.1109/TPAMI.2023.3237269
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
MEDLINE - Academic
SwePub
SWEPUB Lunds universitet full text
SwePub Articles
SWEPUB Freely available online
SWEPUB Lunds universitet
SwePub Articles full text
DatabaseTitle CrossRef
PubMed
MEDLINE - Academic
DatabaseTitleList PubMed, MEDLINE - Academic
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
– sequence: 3
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 3309
ExternalDocumentID oai_portal_research_lu_se_publications_3336ceae_3da4_4656_a4cc_221aa5f62130
37021895
10_1109_TPAMI_2023_3237269
10018409
Genre orig-research
Journal Article
GrantInformation_xml – fundername: ETH Zurich Postdoctoral Fellowship
– fundername: Huawei
ISICitedReferencesCount 6
ISSN 0162-8828
1939-3539
IngestDate Sun Nov 23 03:11:00 EST 2025
Sat Sep 27 22:52:52 EDT 2025
Mon Jul 21 05:19:24 EDT 2025
Tue Nov 18 22:17:20 EST 2025
Sat Nov 29 08:02:39 EST 2025
Wed Aug 27 02:04:40 EDT 2025
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 5
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0001-8427-7215
0000-0001-6230-266X
0000-0003-4112-6542
PMID 37021895
PQID 2797145078
PQPubID 23479
PageCount 12
ParticipantIDs proquest_miscellaneous_2797145078
pubmed_primary_37021895
crossref_citationtrail_10_1109_TPAMI_2023_3237269
crossref_primary_10_1109_TPAMI_2023_3237269
swepub_primary_oai_portal_research_lu_se_publications_3336ceae_3da4_4656_a4cc_221aa5f62130
ieee_primary_10018409
PublicationCentury 2000
PublicationDate 2025-05-01
PublicationDateYYYYMMDD 2025-05-01
PublicationDate_xml – month: 05
  year: 2025
  text: 2025-05-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2025
Publisher IEEE
Publisher_xml – name: IEEE
SSID ssj0014503
Score 2.5274084
SourceID swepub
proquest
pubmed
crossref
ieee
SourceType Open Access Repository
Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 3298
SubjectTerms Bundle adjustment
Computer and Information Sciences
Computer graphics and computer vision
Costs
Computer and Information Sciences (Computer Engineering)
Estimation
Feature extraction
feature matching
featuremetric optimization
Geometry
Image reconstruction
Location awareness
Natural Sciences
Optimization
structure-from-motion
visual localization
Title Pixel-Perfect Structure-From-Motion With Featuremetric Refinement
URI https://ieeexplore.ieee.org/document/10018409
https://www.ncbi.nlm.nih.gov/pubmed/37021895
https://www.proquest.com/docview/2797145078
Volume 47
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 2160-9292
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014503
  issn: 0162-8828
  databaseCode: RIE
  dateStart: 19790101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE