DeepLabV3+-Based Semantic Annotation Refinement for SLAM in Indoor Environments
| Published in: | Sensors (Basel, Switzerland), Volume 25, Issue 11, p. 3344 |
|---|---|
| Main Authors: | , , , , , , , , |
| Format: | Journal Article |
| Language: | English |
| Publication Details: | Switzerland: MDPI AG, 26.05.2025 |
| Subject: | |
| ISSN: | 1424-8220 |
| Summary: | Visual SLAM systems frequently encounter challenges in accurately reconstructing three-dimensional scenes from monocular imagery in semantically deficient environments, which significantly compromises robotic operational efficiency. While conventional manual annotation approaches can provide supplemental semantic information, they are inherently inefficient, procedurally complex, and labor-intensive. This paper presents an optimized DeepLabV3+-based framework for visual SLAM that integrates image semantic segmentation with automated point cloud semantic annotation. The proposed method uses MobileNetV3 as the backbone network for DeepLabV3+ to maintain segmentation accuracy while reducing computational demands. It further introduces a parameter-adaptive Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm that incorporates K-nearest neighbors and is accelerated by KD-tree structures, effectively addressing the limitations of manual parameter tuning and erroneous annotations in conventional methods. Furthermore, a novel point cloud processing strategy featuring dynamic radius thresholding is developed to enhance annotation completeness and boundary precision. Experimental results demonstrate that the approach achieves significant improvements in annotation efficiency while preserving high accuracy, thereby providing reliable technical support for enhanced environmental understanding and navigation capabilities in indoor robotic applications. (Hedged code sketches of the segmentation backbone and the adaptive clustering step follow this record.) |
| DOI: | 10.3390/s25113344 |
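
The abstract pairs DeepLabV3+ with a MobileNetV3 backbone to keep segmentation accuracy while cutting computational cost. The paper's own network definition is not reproduced in this record; as a rough stand-in, torchvision ships a DeepLabV3 head (not the "+" decoder) over a MobileNetV3-Large encoder, which is the closest off-the-shelf approximation. The sketch below only shows how such a lightweight segmentation model could be instantiated and run on a single RGB keyframe; `NUM_CLASSES`, the preprocessing, and the function name are assumptions, not values from the paper.

```python
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

NUM_CLASSES = 21  # placeholder; the paper's indoor class count is not given in this record

# DeepLabV3 head over a MobileNetV3-Large backbone (closest torchvision
# equivalent to the DeepLabV3+/MobileNetV3 combination named in the abstract).
model = deeplabv3_mobilenet_v3_large(weights=None, num_classes=NUM_CLASSES)
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment_frame(pil_image):
    """Return a per-pixel class-label map for one RGB keyframe."""
    x = preprocess(pil_image).unsqueeze(0)   # (1, 3, H, W)
    with torch.no_grad():
        logits = model(x)["out"]             # (1, NUM_CLASSES, H, W)
    return logits.argmax(dim=1).squeeze(0)   # (H, W) label map
```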
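The abstract also describes a parameter-adaptive DBSCAN that derives its neighborhood settings from K-nearest-neighbor statistics and is accelerated by a KD-tree. The exact adaptation rule is not stated in the record, so the sketch below uses a common heuristic, taking a robust statistic of the k-th neighbor distances as `eps`, purely to illustrate the idea; the function name, the choice of the median, and `k=10` are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def adaptive_dbscan(points, k=10):
    """Cluster an (N, 3) point cloud with DBSCAN whose eps is estimated
    from the k-th nearest-neighbor distance distribution (KD-tree backed)."""
    # KD-tree accelerated k-NN search over the point cloud.
    nn = NearestNeighbors(n_neighbors=k, algorithm="kd_tree").fit(points)
    distances, _ = nn.kneighbors(points)          # (N, k), sorted ascending
    # Heuristic: a robust statistic of the k-th neighbor distances serves as
    # the density radius, removing the need for manual eps tuning.
    eps = float(np.median(distances[:, -1]))
    labels = DBSCAN(eps=eps, min_samples=k,
                    algorithm="kd_tree").fit_predict(points)
    return labels, eps

# Example: cluster a synthetic two-blob cloud without hand-tuning eps.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = np.vstack([rng.normal(c, 0.05, size=(200, 3)) for c in (0.0, 1.0)])
    labels, eps = adaptive_dbscan(cloud)
    print(f"estimated eps={eps:.3f}, clusters={labels.max() + 1}")
```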