Action-Inclusive Multi-Future Prediction Using a Generative Model in Human-Related Scenes for Mobile Robots


Bibliographic Details
Published in: IEEE Access, Vol. 13, pp. 167034-167044
Main Authors: Xu, Chenfei; Ahmad, Huthaifa; Okadome, Yuya; Ishiguro, Hiroshi; Nakamura, Yutaka
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
ISSN: 2169-3536
Description
Summary: Mobility in daily unstructured environments, particularly in human-centered scenarios, remains a fundamental challenge for mobile robots. While traditional prediction-based approaches primarily estimate partial features, such as position and velocity, for robot decision making, recent world models enable direct prediction of future sensory data. However, their potential in human-inclusive environments remains underexplored. To assess the feasibility of world models in facilitating human-robot interactions, we propose a robot framework using a deep generative model that jointly predicts multiple future observations and actions. Our approach leverages first-person-view (FPV) raw sensor data, integrating both observations and actions to enhance predictive capabilities in dynamic human-populated settings. Experimental results demonstrate that our method is capable of generating a range of candidate futures for a single condition and of planning actions based on observation guidance. These findings highlight the potential of our approach for facilitating autonomous robots' coexistence with humans.
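
The following is a minimal sketch (Python/PyTorch) of the general idea described in the abstract: a generative model that, conditioned on a history of FPV observations and actions, samples several latent variables and decodes joint future observation-action sequences, yielding multiple candidate futures for one condition. The CVAE-style design, layer sizes, and all names (MultiFutureModel, obs_dim, horizon, etc.) are illustrative assumptions, not the paper's actual architecture.

    # Illustrative sketch only; the architectural choices are assumptions, not the authors' model.
    import torch
    import torch.nn as nn

    class MultiFutureModel(nn.Module):
        def __init__(self, obs_dim=128, act_dim=2, latent_dim=16, hidden=256, horizon=8):
            super().__init__()
            self.horizon, self.obs_dim, self.act_dim = horizon, obs_dim, act_dim
            # Encode the observed history (concatenated obs+act features) into a context vector.
            self.context_enc = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
            # Map the context to a latent distribution over possible futures (mean, log-variance).
            self.prior = nn.Linear(hidden, 2 * latent_dim)
            # Decode context + latent sample into a joint future (observations and actions).
            self.decoder = nn.Sequential(
                nn.Linear(hidden + latent_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, horizon * (obs_dim + act_dim)),
            )

        def forward(self, past_obs, past_act, num_samples=5):
            """past_obs: (B, T, obs_dim), past_act: (B, T, act_dim).
            Returns future observations (B, K, H, obs_dim) and actions (B, K, H, act_dim)."""
            batch = past_obs.size(0)
            history = torch.cat([past_obs, past_act], dim=-1)
            _, h = self.context_enc(history)               # h: (1, B, hidden)
            context = h.squeeze(0)                         # (B, hidden)
            mean, log_var = self.prior(context).chunk(2, dim=-1)
            futures = []
            for _ in range(num_samples):
                # Reparameterized latent sample; each sample gives a different candidate future.
                z = mean + torch.randn_like(mean) * (0.5 * log_var).exp()
                out = self.decoder(torch.cat([context, z], dim=-1))
                futures.append(out.view(batch, self.horizon, self.obs_dim + self.act_dim))
            futures = torch.stack(futures, dim=1)          # (B, K, H, obs_dim + act_dim)
            return futures[..., : self.obs_dim], futures[..., self.obs_dim:]

    # Usage: sample 5 candidate joint futures from 10 steps of history.
    model = MultiFutureModel()
    obs, act = torch.randn(1, 10, 128), torch.randn(1, 10, 2)
    future_obs, future_act = model(obs, act, num_samples=5)
    print(future_obs.shape, future_act.shape)  # (1, 5, 8, 128) and (1, 5, 8, 2)

In such a setup, planning "based on observation guidance" could be read as scoring the sampled candidate futures against a desired observation and executing the actions of the best-scoring one; the abstract does not specify the mechanism, so this reading is an assumption.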
DOI: 10.1109/ACCESS.2025.3611812