DeepSLAM: A Robust Monocular SLAM System With Unsupervised Deep Learning


Detailed Bibliography
Published in: IEEE Transactions on Industrial Electronics (1982), Volume 68, Issue 4, pp. 3577-3587
Main Authors: Li, Ruihao; Wang, Sen; Gu, Dongbing
Format: Journal Article
Language: English
Publication Details: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2021
ISSN: 0278-0046, 1557-9948
Description
Summary: In this article, we propose DeepSLAM, a novel unsupervised deep-learning-based visual simultaneous localization and mapping (SLAM) system. The DeepSLAM training is fully unsupervised since it only requires stereo imagery instead of annotated ground-truth poses. Its testing takes a monocular image sequence as the input; therefore, it is a monocular SLAM paradigm. DeepSLAM consists of several essential components, including Mapping-Net, Tracking-Net, Loop-Net, and a graph optimization unit. Specifically, the Mapping-Net is an encoder-decoder architecture for describing the 3-D structure of the environment, whereas the Tracking-Net is a recurrent convolutional neural network architecture for capturing the camera motion. The Loop-Net is a pretrained binary classifier for detecting loop closures. DeepSLAM can simultaneously generate the pose estimate, depth map, and outlier rejection mask. In this article, we evaluate its performance on various datasets and find that DeepSLAM achieves good pose estimation accuracy and is robust in some challenging scenes.
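
As a rough illustration of how the components named in the abstract could fit together, the sketch below (plain PyTorch, not the authors' released implementation) wires up a MappingNet-style encoder-decoder, a TrackingNet-style recurrent CNN, and a LoopNet-style binary classifier. All layer sizes, the input resolution, and the 6-DoF pose parameterization are assumptions made for illustration; the graph optimization unit and the unsupervised training losses are omitted.

# Hypothetical sketch of a DeepSLAM-style arrangement (illustrative only).
import torch
import torch.nn as nn

class MappingNet(nn.Module):
    """Encoder-decoder mapping a single image to a dense depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, image):                        # (B, 3, H, W)
        return self.decoder(self.encoder(image))     # (B, 1, H, W) depth

class TrackingNet(nn.Module):
    """Recurrent convolutional network regressing relative camera motion
    (assumed 6-DoF: translation + rotation) from consecutive frame pairs."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.LSTM(64, 128, batch_first=True)
        self.pose_head = nn.Linear(128, 6)

    def forward(self, pairs):                        # (B, T, 6, H, W)
        b, t = pairs.shape[:2]
        feats = self.cnn(pairs.flatten(0, 1)).view(b, t, -1)
        hidden, _ = self.rnn(feats)
        return self.pose_head(hidden)                # (B, T, 6) relative poses

class LoopNet(nn.Module):
    """Binary classifier scoring whether two frames close a loop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1))

    def forward(self, frame_pair):                   # (B, 6, H, W)
        return torch.sigmoid(self.net(frame_pair))   # loop-closure probability

# Toy forward pass on random monocular frames. Inference takes a monocular
# sequence; stereo imagery would only be needed to form the unsupervised
# training losses, which are not shown here.
frames = torch.rand(1, 5, 3, 128, 416)               # (B, T, 3, H, W)
depth = MappingNet()(frames[:, 0])
pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)
poses = TrackingNet()(pairs)
loop_score = LoopNet()(torch.cat([frames[:, 0], frames[:, -1]], dim=1))
print(depth.shape, poses.shape, loop_score.shape)

A detected loop closure would then add a constraint between the two matched frames, which the graph optimization unit uses to refine the accumulated pose estimates.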
DOI: 10.1109/TIE.2020.2982096