Visual SLAM based on mask-fusion and motion consistency verification

Bibliographic Details
Published in: Journal of physics. Conference series, Vol. 3062, Issue 1, pp. 12004-12011
Authors: Li, Yanfan; Song, Meng
Format: Journal Article
Language: English
Published: Bristol: IOP Publishing, 01.07.2025
ISSN: 1742-6588, 1742-6596
Online access: Full text
Description
Abstract: Simultaneous Localization and Mapping (SLAM) is the process by which an agent equipped with a specific sensor moves through an unknown environment, acquires information through that sensor, and applies relevant mathematical methods to build a model of the environment while estimating its own motion. Many current visual SLAM algorithms perform well in static environments, producing accurate pose estimates. However, moving objects in dynamic environments can significantly degrade the pose estimation of these algorithms, reducing their accuracy. To address this issue, we propose MFMCV-SLAM, a framework based on mask fusion and motion consistency verification for dynamic scenes. We use multiple methods to generate semantic masks for dynamic objects, fuse these masks, and finally apply motion consistency verification to select, from the masked regions, feature points that are consistent with the camera motion for subsequent pose estimation. In our experiments, we validate the proposed algorithm on the TUM dataset and compare it with classic visual SLAM algorithms. The results show that our method achieves good accuracy.
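
The abstract outlines the pipeline but gives no implementation details. As a rough illustration only, a common way to realize motion consistency verification is an epipolar-constraint check: matched feature points whose distance to the epipolar line induced by a RANSAC-estimated fundamental matrix exceeds a threshold are treated as inconsistent with the camera motion and discarded. The Python/OpenCV sketch below pairs such a check with a simple pixel-wise OR as the mask-fusion step; the function names, the OR fusion rule, and the 1-pixel threshold are illustrative assumptions, not the authors' actual method.

    import numpy as np
    import cv2

    def fuse_masks(masks):
        # Assumed fusion rule: pixel-wise OR over binary dynamic-object masks
        # (e.g. one mask per segmentation/detection method).
        fused = masks[0].copy()
        for m in masks[1:]:
            fused = cv2.bitwise_or(fused, m)
        return fused

    def motion_consistent(pts_prev, pts_curr, thresh_px=1.0):
        # pts_prev, pts_curr: (N, 2) float32 arrays of matched keypoints
        # from two consecutive frames. Returns a boolean keep-mask.
        F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.99)
        if F is None or F.shape != (3, 3):
            return np.zeros(len(pts_prev), dtype=bool)
        # Epipolar lines in the current frame induced by previous-frame points.
        lines = cv2.computeCorrespondEpilines(pts_prev.reshape(-1, 1, 2), 1, F)
        a, b, c = lines.reshape(-1, 3).T
        # Point-to-line distance |ax + by + c| / sqrt(a^2 + b^2).
        d = np.abs(a * pts_curr[:, 0] + b * pts_curr[:, 1] + c) / np.hypot(a, b)
        return d < thresh_px

Under this sketch, feature points inside the fused mask that fail the epipolar check would be excluded from pose estimation, while masked points that move consistently with the camera could still be kept, which matches the selection step the abstract describes.
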
DOI: 10.1088/1742-6596/3062/1/012004