Multi-Modal Loop Closing in Unstructured Planetary Environments with Visually Enriched Submaps

Detailed Bibliography
Title: Multi-Modal Loop Closing in Unstructured Planetary Environments with Visually Enriched Submaps
Authors: Giubilato, Riccardo; Vayugundla, Mallikarjuna; Stürzl, Wolfgang; Schuster, Martin; Wedler, Armin; Triebel, Rudolph
Source: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8758-8765
Publication Status: Preprint
Publisher: IEEE, 2021.
Publication Year: 2021
Subjects: FOS: Computer and information sciences; Computer Science - Robotics (cs.RO); Localization; Multi-modal Perception; Space Robotics and Automation
Description: Future planetary missions will rely on rovers that can autonomously explore and navigate in unstructured environments. An essential element is the ability to recognize places that were already visited or mapped. In this work, we leverage the ability of stereo cameras to provide both visual and depth information, guiding the search and validation of loop closures from a multi-modal perspective. We propose to augment submaps, created by aggregating stereo point clouds, with visual keyframes. Point cloud matches are found by comparing CSHOT descriptors and validated by clustering, while visual matches are established by comparing keyframes using Bag-of-Words (BoW) and ORB descriptors. The relative transformations resulting from both keyframe and point cloud matches are then fused to provide pose constraints between submaps in our graph-based SLAM framework. Using the LRU rover, we performed several tests in an indoor laboratory environment as well as in a challenging planetary analog environment on Mount Etna, Italy. These environments contain areas where either keyframes or point clouds alone failed to provide adequate matches, demonstrating the benefit of the proposed multi-modal approach.
Accepted at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)
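The abstract describes fusing relative transformations from visual and point cloud matches into pose constraints between submaps. A minimal sketch of that fusion idea, under a Gaussian assumption, is an information-weighted average of the two relative-pose estimates; the planar (x, y, yaw) parameterization, the `fuse_pose_constraints` helper, and all numeric values below are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def fuse_pose_constraints(poses, infos):
    """Fuse relative pose estimates (x, y, yaw) under a Gaussian assumption
    via an information-weighted average. Note: this simplification averages
    yaw directly and ignores angle wrap-around."""
    poses = [np.asarray(p, dtype=float) for p in poses]
    infos = [np.asarray(I, dtype=float) for I in infos]
    I_fused = sum(infos)                                  # combined information
    weighted = sum(I @ p for I, p in zip(infos, poses))   # information-weighted sum
    return np.linalg.solve(I_fused, weighted), I_fused

# Hypothetical relative poses between two submaps: one from a visual
# keyframe match (less certain), one from a point cloud match (more certain).
p_visual = [1.02, 0.10, 0.05]
p_cloud = [0.98, 0.05, 0.02]
I_visual = np.diag([10.0, 10.0, 5.0])
I_cloud = np.diag([40.0, 40.0, 20.0])

fused, I_fused = fuse_pose_constraints([p_visual, p_cloud],
                                       [I_visual, I_cloud])
print(fused)  # fused estimate lies between the two, nearer the point cloud one
```

The fused constraint (and its combined information matrix) would then serve as a single edge between submap nodes in a pose graph.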
Document Type: Article; Conference object
DOI: 10.1109/iros51168.2021.9635915
DOI: 10.48550/arxiv.2105.02020
Access URL: http://arxiv.org/pdf/2105.02020
http://arxiv.org/abs/2105.02020
https://elib.dlr.de/143410/
https://doi.org/10.1109/iros51168.2021.9635915
Rights: IEEE Copyright; CC BY-NC-SA
Accession Number: edsair.doi.dedup.....d3ba54c27c461182a61d06913d3f805c
Database: OpenAIRE