Action-Inclusive Multi-Future Prediction Using a Generative Model in Human-Related Scenes for Mobile Robots

Bibliographic Details
Published in: IEEE Access, Vol. 13, pp. 167034-167044
Main authors: Xu, Chenfei; Ahmad, Huthaifa; Okadome, Yuya; Ishiguro, Hiroshi; Nakamura, Yutaka
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
ISSN: 2169-3536
Online access: Full text
Description
Abstract: Mobility in daily unstructured environments, particularly in human-centered scenarios, remains a fundamental challenge for mobile robots. While traditional prediction-based approaches primarily estimate partial features for robot decision making, such as position and velocity, recent world models enable direct prediction of future sensory data. However, their potential in human-inclusive environments remains underexplored. To assess the feasibility of world models for facilitating human-robot interaction, we propose a robot framework using a deep generative model that jointly predicts multiple future observations and actions. Our approach leverages first-person-view (FPV) raw sensor data, integrating both observations and actions to enhance predictive capability in dynamic human-populated settings. Experimental results demonstrate that our method can generate a range of candidate futures for a single condition and plan actions based on observation guidance. These findings highlight the potential of our approach for enabling autonomous robots to coexist with humans.
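The abstract describes the framework only at a high level: a deep generative model conditioned on first-person-view observations and past actions that samples several candidate futures containing both future observations and future actions. As a rough illustration of that idea only, the following is a minimal, hypothetical sketch of a conditional-VAE-style model in PyTorch; the class name, layer sizes, flat feature encoding, and sampling interface are all assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): a conditional VAE that encodes a
# window of past FPV observation features and actions, then samples several
# candidate futures consisting of both observations and actions.
import torch
import torch.nn as nn

class JointFutureVAE(nn.Module):
    def __init__(self, obs_dim=64, act_dim=2, hist_len=8, horizon=8, latent_dim=16):
        super().__init__()
        self.horizon, self.obs_dim, self.act_dim = horizon, obs_dim, act_dim
        self.latent_dim = latent_dim
        ctx_dim = hist_len * (obs_dim + act_dim)   # flattened past observations + actions
        fut_dim = horizon * (obs_dim + act_dim)    # flattened future observations + actions
        # Encoder (training only): past context + ground-truth future -> latent Gaussian
        self.encoder = nn.Sequential(
            nn.Linear(ctx_dim + fut_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),
        )
        # Decoder: past context + latent sample -> joint future observations and actions
        self.decoder = nn.Sequential(
            nn.Linear(ctx_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, fut_dim),
        )

    def forward(self, ctx, future):
        """Training pass: reconstruct the future and return KL statistics."""
        stats = self.encoder(torch.cat([ctx, future], dim=-1))
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = self.decoder(torch.cat([ctx, z], dim=-1))
        return recon, mu, logvar

    @torch.no_grad()
    def sample_futures(self, ctx, num_samples=5):
        """Draw several candidate futures for a single conditioning context."""
        ctx = ctx.expand(num_samples, -1)
        z = torch.randn(num_samples, self.latent_dim)
        out = self.decoder(torch.cat([ctx, z], dim=-1))
        out = out.view(num_samples, self.horizon, self.obs_dim + self.act_dim)
        return out[..., :self.obs_dim], out[..., self.obs_dim:]  # future obs, future actions

# Usage: flatten a history of FPV feature vectors and past actions into `ctx`,
# then sample a diverse set of observation/action futures.
model = JointFutureVAE()
ctx = torch.randn(1, 8 * (64 + 2))
future_obs, future_act = model.sample_futures(ctx, num_samples=5)
print(future_obs.shape, future_act.shape)  # torch.Size([5, 8, 64]) torch.Size([5, 8, 2])
```

Sampling several latent vectors for one context yields the "range of candidate futures" mentioned in the abstract; a planner could then score these candidates against a desired observation and execute the actions attached to the best-matching future, loosely corresponding to the abstract's "planning actions based on observation guidance."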
DOI: 10.1109/ACCESS.2025.3611812