Improved Multi-Agent Deep Deterministic Policy Gradient for Path Planning-Based Crowd Simulation


Bibliographic Details
Published in: IEEE Access, Volume 7, pp. 147755-147770
Main Authors: Zheng, Shangfei; Liu, Hong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2019
ISSN: 2169-3536
Description
Abstract: Deep reinforcement learning (DRL) has proved more suitable than conventional reinforcement learning for path planning in large-scale scenarios. To carry out DRL-based collaborative path planning for crowd evacuation more effectively, it is necessary to consider the space-expansion problem caused by the growing number of agents. In addition, crowd evacuation often involves complicated circumstances, such as exit selection and congestion. However, few existing works have addressed these two aspects jointly. To solve this problem, we propose a planning approach for crowd evacuation based on an improved DRL algorithm, which improves evacuation efficiency for large-scale crowd path planning. First, we propose a framework of congestion-detection-based multi-agent reinforcement learning: the framework divides the crowd into leaders and followers, simulates the leaders with a multi-agent system, and sets up a congestion detection area to evaluate the degree of congestion at each exit. Next, within this framework, we propose the Improved Multi-Agent Deep Deterministic Policy Gradient (IMADDPG) algorithm, which adds a mean field network to maximize the returns of the other agents, enabling all agents to maximize the performance of the collaborative planning task during training. Then, we implement a hierarchical path planning method, whose upper layer uses the IMADDPG algorithm to solve the global path and whose lower layer uses the reciprocal velocity obstacles (RVO) method to avoid collisions within the crowd. Finally, we evaluate the proposed method in a crowd simulation system. The experimental results show the effectiveness of our method.
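The mean-field idea mentioned in the abstract — letting each agent's critic see only the *average* action of the other agents instead of every individual action, so the input size stays fixed as the crowd grows — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, shapes, and the toy 4-agent example are assumptions for illustration only.

```python
import numpy as np

def mean_field_action(actions: np.ndarray, i: int) -> np.ndarray:
    """Average the actions of all agents except agent i (mean-field approximation)."""
    others = np.delete(actions, i, axis=0)  # drop agent i's own row
    return others.mean(axis=0)

def critic_input(state: np.ndarray, actions: np.ndarray, i: int) -> np.ndarray:
    """Build agent i's critic input: global state + own action + mean-field action.

    The input dimension is independent of the number of agents, which is the
    point of the mean-field network in a growing crowd.
    """
    return np.concatenate([state, actions[i], mean_field_action(actions, i)])

# Toy example: 4 leader agents with 2-D velocity actions and a 6-D global state.
state = np.zeros(6)
actions = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [-1.0, 0.0],
                    [0.0, -1.0]])
print(critic_input(state, actions, 0).shape)   # (10,) = 6 + 2 + 2, regardless of crowd size
print(mean_field_action(actions, 0))           # [-1/3, 0.]
```

With per-agent actions concatenated instead, the critic input would grow linearly with the number of agents; the mean-field input stays at a constant 10 dimensions here whether there are 4 leaders or 400.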
DOI: 10.1109/ACCESS.2019.2946659