Coarse-to-fine vision-based localization by indexing scale-invariant features

Published in: IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Vol. 36, No. 2, pp. 413-422
Main authors: Junqiu Wang, Hongbin Zha, R. Cipolla
Format: Journal Article
Language: English
Publication details: IEEE, United States, 1 April 2006
ISSN: 1083-4419
Description
Summary: This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by scale-invariant transformation feature descriptors are used as natural landmarks. They are indexed into two databases: a location vector space model (LVSM) and a location database. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the LVSM is fast, but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow, but more accurate. The integration of coarse and fine stages makes fast and reliable localization possible. If necessary, the localization result can be verified by epipolar geometry between the representative view in the database and the view to be localized. In addition, the localization system recovers the position of the camera by essential matrix decomposition. The localization system has been tested in indoor and outdoor environments. The results show that the approach is efficient and reliable.
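The coarse stage treats each location as a "document" of quantized feature descriptors and ranks candidate locations with text-retrieval machinery. The record does not include code; the sketch below is only a minimal illustration of that idea, assuming tf-idf weighted visual-word histograms and cosine similarity. The histogram layout and the coarse_localize helper are hypothetical and not the authors' implementation.

    import numpy as np

    def coarse_localize(query_counts, location_counts, top_k=5):
        """Rank database locations by tf-idf weighted cosine similarity.

        location_counts: (n_locations, vocab_size) visual-word histograms,
                         one row per location in the database.
        query_counts:    (vocab_size,) histogram of the view to be localized.
        Returns the indices of the top_k most similar locations and their scores.
        """
        # Inverse document frequency estimated from the location database only.
        df = np.count_nonzero(location_counts, axis=0)
        idf = np.log(location_counts.shape[0] / np.maximum(df, 1))

        # Term-frequency normalization, then tf-idf weighting.
        db = location_counts / np.maximum(location_counts.sum(axis=1, keepdims=True), 1) * idf
        q = query_counts / max(query_counts.sum(), 1) * idf

        # Cosine similarity between the query vector and every location vector.
        sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
        order = np.argsort(sims)[::-1][:top_k]
        return order, sims[order]

In a coarse-to-fine pipeline of this kind, the top-ranked candidates would then be passed to the slower but more accurate voting stage against the location database.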
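For the verification step, the relative pose between the query view and the matched representative view is recovered from the essential matrix. The standard textbook decomposition yields four candidate (R, t) pairs; the sketch below is a generic numpy illustration of that decomposition, not code taken from the paper.

    import numpy as np

    def decompose_essential(E):
        """Return the four candidate (R, t) pairs encoded by an essential matrix.

        The correct pair is normally selected with a cheirality check
        (triangulated points must lie in front of both cameras); that
        disambiguation step is omitted here.
        """
        U, _, Vt = np.linalg.svd(E)
        # Force proper rotations (determinant +1).
        if np.linalg.det(U) < 0:
            U = -U
        if np.linalg.det(Vt) < 0:
            Vt = -Vt
        W = np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 1.0]])
        R1 = U @ W @ Vt
        R2 = U @ W.T @ Vt
        t = U[:, 2]          # translation is recovered only up to scale
        return [(R1, t), (R1, -t), (R2, t), (R2, -t)]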
DOI: 10.1109/TSMCB.2005.859085