DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles

Detailed Bibliography
Published in: IEEE Transactions on Robotics, Volume 39, Issue 4, pp. 2700–2719
Main Authors: Xie, Zhanteng; Dames, Philip
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 1 August 2023
ISSN: 1552-3098, 1941-0468
Description
Summary: This article proposes a novel learning-based control policy with strong generalizability to new environments that enables a mobile robot to navigate autonomously through spaces filled with both static obstacles and dense crowds of pedestrians. The policy uses a unique combination of input data to generate the desired steering angle and forward velocity: a short history of lidar data, kinematic data about nearby pedestrians, and a subgoal point. The policy is trained in a reinforcement learning setting using a reward function that contains a novel term based on velocity obstacles to guide the robot to actively avoid pedestrians and move toward the goal. Through a series of 3-D simulated experiments with up to 55 pedestrians, this control policy is able to achieve a better balance between collision avoidance and speed (i.e., higher success rate and faster average speed) than state-of-the-art model-based and learning-based policies, and it also generalizes better to different crowd sizes and unseen environments. An extensive series of hardware experiments demonstrates the ability of this policy to work directly in different real-world environments with different crowd sizes with zero retraining. Furthermore, a series of simulated and hardware experiments shows that the control policy also works in highly constrained static environments on a different robot platform without any additional training. Lastly, several important lessons that can be applied to other robot learning systems are summarized.
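
To make the velocity-obstacle idea in the summary concrete, the following is a minimal Python sketch of how a reward term could penalize robot velocities that lie on a collision course with a nearby pedestrian. This is not the authors' implementation: the function names, the disc-shaped agent model, and the penalty/bonus values are illustrative assumptions.

    import numpy as np

    def in_velocity_obstacle(p_rob, v_rob, r_rob, p_ped, v_ped, r_ped):
        # Vector from robot to pedestrian and the combined (inflated) radius.
        d = p_ped - p_rob
        dist = np.linalg.norm(d)
        r_sum = r_rob + r_ped
        if dist <= r_sum:          # discs already overlap: treat as colliding
            return True
        v_rel = v_rob - v_ped      # robot velocity relative to the pedestrian
        speed = np.linalg.norm(v_rel)
        if speed < 1e-9:           # no relative motion, so no collision course
            return False
        # Half-angle of the collision cone subtended by the inflated disc.
        half_angle = np.arcsin(r_sum / dist)
        # Angle between the relative velocity and the line of sight.
        cos_beta = np.dot(v_rel, d) / (speed * dist)
        beta = np.arccos(np.clip(cos_beta, -1.0, 1.0))
        return beta < half_angle

    def vo_reward_term(p_rob, v_rob, r_rob, pedestrians,
                       penalty=-2.0, bonus=0.5):
        # Penalize any velocity inside some pedestrian's velocity obstacle;
        # otherwise grant a small bonus for an actively safe heading.
        for p_ped, v_ped, r_ped in pedestrians:
            if in_velocity_obstacle(p_rob, v_rob, r_rob, p_ped, v_ped, r_ped):
                return penalty
        return bonus

    # Example: a pedestrian 3 m ahead walking toward the robot.
    p_rob, v_rob, r_rob = np.array([0.0, 0.0]), np.array([1.0, 0.0]), 0.3
    peds = [(np.array([3.0, 0.1]), np.array([-0.5, 0.0]), 0.3)]
    print(vo_reward_term(p_rob, v_rob, r_rob, peds))  # -2.0: collision course

The test treats each pedestrian as a disc inflated by the robot's radius; if the relative velocity points within the cone's half-angle of the line of sight, the two agents will eventually collide should both hold their current velocities, which is the condition a velocity-obstacle reward term steers the policy away from.
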
DOI: 10.1109/TRO.2023.3257549