Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model

Detailed Bibliography
Published in: IEEE Robotics and Automation Letters, Volume 3, Issue 3, pp. 2346-2353
Main Authors: Nguyen, Ty; Chen, Steven W.; Shivakumar, Shreyas S.; Taylor, Camillo Jose; Kumar, Vijay
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2018
ISSN: 2377-3766
Description
Summary: Homography estimation between multiple aerial images can provide relative pose estimation for collaborative autonomous exploration and monitoring. The usage on a robotic system requires a fast and robust homography estimation algorithm. In this letter, we propose an unsupervised learning algorithm that trains a deep convolutional neural network to estimate planar homographies. We compare the proposed algorithm to traditional feature-based and direct methods, as well as a corresponding supervised learning algorithm. Our empirical results demonstrate that compared to traditional approaches, the unsupervised algorithm achieves faster inference speed, while maintaining comparable or better accuracy and robustness to illumination variation. In addition, our unsupervised method has superior adaptability and performance compared to the corresponding supervised deep learning method. Our image dataset and a Tensorflow implementation of our work are available at https://github.com/tynguyen/unsupervisedDeepHomographyRAL2018.
DOI: 10.1109/LRA.2018.2809549
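
Note on the method (illustrative only, not part of the record): the summary does not spell out the training objective, and the authors' TensorFlow implementation is available at the repository linked above. The short NumPy/OpenCV sketch below only illustrates the general idea of an unsupervised homography objective, i.e., scoring a predicted 4-point homography by photometric error between the two images rather than against a ground-truth homography. The function name, the OpenCV calls, and the non-differentiable warp are illustrative assumptions; the paper's model requires a differentiable warp so that a loss of this kind can train the CNN end to end.

    # Illustrative sketch (assumption, not the authors' code): an unsupervised
    # homography objective needs no ground-truth homography; it scores how well
    # a candidate homography photometrically aligns the two images.
    import cv2
    import numpy as np

    def photometric_loss(img_a, img_b, corners_a, pred_offsets):
        """Mean absolute photometric error between img_a and img_b after
        warping img_b into img_a's frame with the homography implied by
        the predicted 4-point corner offsets (here a placeholder input)."""
        corners_b = (corners_a + pred_offsets).astype(np.float32)
        # 4-point parameterization -> 3x3 homography (DLT solve inside OpenCV).
        H = cv2.getPerspectiveTransform(corners_a.astype(np.float32), corners_b)
        # H maps img_a coordinates to img_b coordinates, so sample img_b at
        # H(x) for every pixel x of img_a (WARP_INVERSE_MAP applies H directly).
        h, w = img_a.shape[:2]
        warped_b = cv2.warpPerspective(
            img_b, H, (w, h), flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        # Pixels warped from outside img_b come back as zeros; a practical
        # implementation would mask them out of the average.
        return float(np.mean(np.abs(img_a.astype(np.float32) -
                                    warped_b.astype(np.float32))))

    # Hypothetical usage: the offsets would come from the CNN's prediction.
    # img_a = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)
    # img_b = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)
    # corners = np.array([[0, 0], [127, 0], [127, 127], [0, 127]], np.float32)
    # offsets = np.zeros((4, 2), np.float32)
    # print(photometric_loss(img_a, img_b, corners, offsets))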