Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model


Detailed Description

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, Vol. 3, No. 3, pp. 2346 - 2353
Main Authors: Nguyen, Ty; Chen, Steven W.; Shivakumar, Shreyas S.; Taylor, Camillo Jose; Kumar, Vijay
Format: Journal Article
Language: English
Published: Piscataway: IEEE, 01.07.2018
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 2377-3766
Online Access: Full text
Description
Summary: Homography estimation between multiple aerial images can provide relative pose estimation for collaborative autonomous exploration and monitoring. Use on a robotic system requires a fast and robust homography estimation algorithm. In this letter, we propose an unsupervised learning algorithm that trains a deep convolutional neural network to estimate planar homographies. We compare the proposed algorithm to traditional feature-based and direct methods, as well as a corresponding supervised learning algorithm. Our empirical results demonstrate that, compared to traditional approaches, the unsupervised algorithm achieves faster inference speed while maintaining comparable or better accuracy and robustness to illumination variation. In addition, our unsupervised method has superior adaptability and performance compared to the corresponding supervised deep learning method. Our image dataset and a TensorFlow implementation of our work are available at https://github.com/tynguyen/unsupervisedDeepHomographyRAL2018.
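
The unsupervised training described in the abstract is typically driven by a photometric error: one image is warped by the predicted homography and compared pixel-wise to the other. Below is a minimal OpenCV/NumPy sketch of that loss, not the authors' TensorFlow implementation (see the repository linked above); the file names are hypothetical, and a real training loop would need a differentiable warp such as a spatial transformer layer.

# Minimal sketch of a photometric loss for unsupervised homography training.
# Not the authors' TensorFlow code; it only illustrates the idea of warping
# image A by a (network-predicted) homography and measuring the pixel error
# against image B.
import cv2
import numpy as np


def photometric_loss(img_a, img_b, H):
    """Mean absolute intensity difference between img_b and img_a warped by H."""
    h, w = img_b.shape[:2]
    warped_a = cv2.warpPerspective(img_a, H, (w, h))
    # Ignore pixels that receive no data from img_a after warping.
    valid = cv2.warpPerspective(np.ones(img_a.shape[:2], np.float32), H, (w, h)) > 0.5
    diff = np.abs(warped_a.astype(np.float32) - img_b.astype(np.float32))
    return float(diff[valid].mean())


if __name__ == "__main__":
    # Hypothetical file names; use any overlapping aerial image pair.
    img_a = cv2.imread("aerial_a.png", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("aerial_b.png", cv2.IMREAD_GRAYSCALE)
    H_pred = np.eye(3)  # stand-in for a homography predicted by the network
    print("photometric loss:", photometric_loss(img_a, img_b, H_pred))

For comparison, the traditional feature-based baselines mentioned in the abstract can be reproduced with standard OpenCV calls such as cv2.ORB_create(), a brute-force descriptor matcher, and cv2.findHomography(..., cv2.RANSAC).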
DOI: 10.1109/LRA.2018.2809549