Target-driven obstacle avoidance algorithm based on DDPG for connected autonomous vehicles

Bibliographic Details
Published in: EURASIP Journal on Advances in Signal Processing, Vol. 2022, No. 1, pp. 1-22
Authors: Chen, Yu; Han, Wei; Zhu, Qinghua; Liu, Yong; Zhao, Jingya
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing (SpringerOpen), 12 July 2022
ISSN: 1687-6172, 1687-6180
Online access: Full text
Description
Abstract: In the field of autonomous driving, obstacle avoidance is essential for safe operation. Besides traditional obstacle avoidance algorithms such as the vector field histogram (VFH) algorithm and the artificial potential field method, much current research focuses on algorithms based on vision and neural networks. This research has produced promising results, and some approaches have completed real-road tests. However, most of these algorithms consider only local environmental information, which can lead to local optima in complex driving environments. It is therefore necessary to incorporate environmental information beyond the perceptual range of the vehicle's own sensors. In a network-assisted automated driving system, connected vehicles obtain obstacle and road condition information through roadside sensors and the mobile network, gaining extra sensing ability; this makes network-assisted automated driving valuable for obstacle avoidance. Against this background, this paper presents an autonomous driving obstacle avoidance strategy that combines path planning with reinforcement learning. First, a globally optimal path is planned from global information, and the path is merged with the vehicle's state into a single vector. This vector is used as the input of a reinforcement learning neural network, which outputs vehicle control signals that follow the optimal path while avoiding obstacles.
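The abstract's closing sentences outline the control pipeline: merge the planned global path and the vehicle's state into one vector, then feed it to a reinforcement learning network (DDPG, per the title) that emits continuous control signals. The sketch below illustrates that state construction and a generic DDPG-style actor; the names, dimensions, and simple MLP architecture are assumptions for illustration, not the authors' implementation.

# Minimal sketch (not the paper's code) of the state construction described in
# the abstract: upcoming global-path waypoints and the vehicle state are merged
# into one vector and passed to a DDPG actor that outputs control signals.
import numpy as np
import torch
import torch.nn as nn

def build_state(path_waypoints, vehicle_state, n_waypoints=10):
    """Concatenate the next few global-path waypoints with the vehicle state.

    path_waypoints: (N, 2) array of upcoming (x, y) points on the planned path.
    vehicle_state:  1-D array, e.g. [x, y, heading, speed] (assumed layout).
    """
    wp = np.asarray(path_waypoints[:n_waypoints], dtype=np.float32).reshape(-1)
    # Pad with zeros if fewer than n_waypoints remain near the goal.
    wp = np.pad(wp, (0, 2 * n_waypoints - wp.size))
    return np.concatenate([wp, np.asarray(vehicle_state, dtype=np.float32)])

class Actor(nn.Module):
    """DDPG actor: state vector -> continuous control (steering, throttle)."""
    def __init__(self, state_dim, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

# Example: 10 (x, y) waypoints plus [x, y, heading, speed] -> 24-dim state.
state = build_state(np.zeros((10, 2)), [0.0, 0.0, 0.0, 5.0])
action = Actor(state_dim=state.size)(torch.from_numpy(state))

In a full DDPG setup this actor would be trained together with a critic network from rewards for path following and collision avoidance; the sketch only shows how the merged path/vehicle vector feeds the policy.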
DOI: 10.1186/s13634-022-00872-5