Vision‐audio fusion SLAM in dynamic environments

Detailed bibliography
Published in: CAAI Transactions on Intelligence Technology, Vol. 8, Issue 4, pp. 1364-1373
Main authors: Zhang, Tianwei; Zhang, Huayan; Li, Xiaofei
Medium: Journal Article
Language: English
Published: Beijing: John Wiley & Sons, Inc., 01.12.2023
ISSN: 2468-2322, 2468-6557
Description
Summary: Moving humans, agents, and subjects pose many challenges to robot self‐localisation and environment perception. To adapt to dynamic environments, SLAM researchers typically apply deep learning image segmentation models to eliminate these moving obstacles. However, such moving‐obstacle segmentation methods consume too many computational resources for the onboard processors of mobile robots. In current industrial mobile‐robot collaboration scenarios, the noise emitted by mobile robots can easily be detected by on‐board audio‐sensing processors, and the direction of a sound source can be acquired effectively by sound source estimation algorithms, but estimating the distance of a sound source is difficult. Conversely, in visual perception, the 3D structure of the scene is relatively easy to obtain, but recognising and segmenting moving objects is more difficult. To address these problems, a novel vision‐audio fusion method is proposed that combines sound source localisation with a visual SLAM scheme, thereby eliminating the effect of dynamic obstacles on multi‐agent systems. Experiments with several heterogeneous robots in different dynamic scenes indicate that our method achieves very stable self‐localisation and environment reconstruction.
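The complementarity the abstract describes, audio giving a bearing to a noisy robot but no range, while a depth camera gives range along any pixel ray, can be illustrated with a minimal sketch. The Python below is not the paper's implementation: the pinhole camera model, the column-wise feature masking, and all names (sound_source_to_3d, dynamic_feature_mask, fx, cx) are illustrative assumptions.

```python
import numpy as np

def sound_source_to_3d(azimuth_rad, depth_m, fx, cx):
    """Hypothetical fusion step: project a horizontal direction-of-arrival
    (DOA) angle onto the image plane and read the depth along that column
    to recover a 3D point in the camera frame (x right, z forward)."""
    height, width = depth_m.shape
    # Pixel column hit by the bearing ray under a pinhole model.
    u = int(round(cx + fx * np.tan(azimuth_rad)))
    if u < 0 or u >= width:
        return None, None        # source outside the camera's field of view
    column = depth_m[:, u]
    valid = column[np.isfinite(column) & (column > 0)]
    if valid.size == 0:
        return None, u           # no usable depth along this ray
    z = float(np.median(valid))  # robust range estimate along the bearing
    x = (u - cx) * z / fx        # back-project to camera coordinates
    return np.array([x, 0.0, z]), u  # elevation left at 0: a planar DOA gives none

def dynamic_feature_mask(shape, u, radius_px=40):
    """Mask a band of columns around the localised source so that visual
    features on the moving robot are excluded from SLAM tracking."""
    mask = np.ones(shape, dtype=bool)
    lo, hi = max(0, u - radius_px), min(shape[1], u + radius_px)
    mask[:, lo:hi] = False
    return mask

# Toy usage: a flat wall 3 m away and a source 10 degrees to the right.
depth = np.full((480, 640), 3.0)
point, u = sound_source_to_3d(np.deg2rad(10.0), depth, fx=525.0, cx=319.5)
if point is not None:
    mask = dynamic_feature_mask(depth.shape, u)  # drop features where mask is False
```

In this reading, audio resolves which image region belongs to a moving robot and depth resolves how far away it is, so neither a segmentation network nor audio ranging is needed; how the paper actually couples the two modalities is detailed in the full text.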
DOI: 10.1049/cit2.12206