The Foreseeable Future: Self-Supervised Learning to Predict Dynamic Scenes for Indoor Navigation

Bibliographic Details
Published in: IEEE Transactions on Robotics, Vol. 39, No. 6, pp. 4581-4599
Authors: Thomas, Hugues; Zhang, Jian; Barfoot, Timothy D.
Format: Journal Article
Language: English
Published: New York: IEEE, 01.12.2023
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 1552-3098, 1941-0468
Online access: Full text
Description
Summary: We present a method for generating, predicting, and using spatiotemporal occupancy grid maps (SOGMs), which embed future semantic information of real dynamic scenes. We also present an autolabeling process that creates SOGMs from noisy real navigation data. We use a 3-D-2-D feedforward architecture, trained to predict the future time steps of SOGMs given 3-D lidar frames as input. Our pipeline is entirely self-supervised, thus enabling lifelong learning for real robots. The network is composed of a 3-D back-end that extracts rich features and enables the semantic segmentation of the lidar frames, and a 2-D front-end that predicts the future information embedded in the SOGM representation, potentially capturing the complexities and uncertainties of real-world multiagent interactions. We also design a navigation system that uses these predicted SOGMs within planning, after they have been transformed into spatiotemporal risk maps. We verify our navigation system's abilities in simulation, validate it on a real robot, study SOGM predictions on real data in various circumstances, and provide a novel indoor 3-D lidar dataset, collected during our experiments, which includes our automated annotations.
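
The abstract does not specify how an SOGM is stored or how the predicted SOGMs are converted into spatiotemporal risk maps for planning, so the following Python sketch only illustrates the idea under assumptions: an SOGM is taken to be a tensor of shape (T, H, W, C) holding per-cell, per-class occupancy values over T future time steps, and a risk map is obtained by a simple weighted reduction over the class channel. The dimensions, class weights, and the sogm_to_risk_maps helper are hypothetical and not taken from the paper.

import numpy as np

# Assumed dimensions: T future time steps, an H x W grid, C semantic classes.
T, H, W, C = 10, 128, 128, 3  # illustrative values, not from the paper

# A predicted SOGM: per-cell, per-class occupancy probabilities for each future step.
sogm = np.random.rand(T, H, W, C).astype(np.float32)  # stand-in for a network output

# Illustrative per-class risk weights (e.g., people weighted higher than static obstacles).
class_weights = np.array([1.0, 0.5, 0.2], dtype=np.float32)  # assumed, not from the paper

def sogm_to_risk_maps(sogm: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Collapse the class channel into one scalar risk value per cell and time step."""
    # Weight each class channel, then keep the highest-risk class per cell.
    return (sogm * weights).max(axis=-1)  # shape (T, H, W)

risk_maps = sogm_to_risk_maps(sogm, class_weights)
print(risk_maps.shape)  # (10, 128, 128): one spatial risk map per future time step

Collapsing the class channel with a maximum keeps the most pessimistic estimate per cell, which is one plausible choice when a planner treats predicted occupancy as risk; the paper's actual transformation may differ.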
DOI: 10.1109/TRO.2023.3304239