Improved deep reinforcement learning for car-following decision-making

Bibliographic Details
Published in: Physica A, Vol. 624, p. 128912
Main Authors: Yang, Xiaoxue, Zou, Yajie, Zhang, Hao, Qu, Xiaobo, Chen, Lei
Format: Journal Article
Language: English
Published: Elsevier B.V., 15.08.2023
ISSN: 0378-4371, 1873-2119
Description
Summary: Improving the accuracy of car-following (CF) models has attracted much attention in recent years. Although a few studies incorporate deep reinforcement learning (DRL) to describe CF behaviors, the proper design of the reward function remains an intractable problem. This study improves the deep deterministic policy gradient (DDPG) car-following model with stacked denoising autoencoders (SDAE) and proposes a data-driven reward representation function that quantifies the implicit interaction between the ego vehicle and the preceding vehicle during car following. The experimental results demonstrate that the DDPG-SDAE model has a superior ability to imitate driving behavior: (1) it validates the effectiveness of the reward representation method with low trajectory deviation; (2) it generalizes across two different trajectory datasets (HighD and SPMD); (3) it adapts to three traffic scenarios clustered by a k-medoids method based on dynamic time warping (DTW) distance. Compared with recurrent neural networks (RNN) and the intelligent driver model (IDM), the DDPG-SDAE model achieves lower deviation in speed and relative distance. This study demonstrates the superiority of a novel reward extraction method that fuses SDAE into the DDPG algorithm and provides inspiration for developing driving decision-making models.
DOI: 10.1016/j.physa.2023.128912
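
Illustrative sketch: the abstract describes a data-driven reward obtained by fusing an SDAE into the DDPG algorithm, but does not give the exact formulation. The minimal PyTorch sketch below shows one way such a reward could look: a denoising autoencoder is fitted to human car-following transitions, and the negative reconstruction error of an agent's (state, action) pair is used as the reward. The feature layout (gap, speed, relative speed, acceleration), network sizes, noise model, and the reward definition are all assumptions for illustration, not the paper's method.

# Hedged sketch, not the authors' implementation: SDAE-based reward signal
# that a DDPG agent could consume. All names and hyperparameters are assumed.
import torch
import torch.nn as nn

class SDAE(nn.Module):
    """Small denoising autoencoder over assumed features (gap, speed, relative speed, acceleration)."""
    def __init__(self, n_features=4, hidden=(16, 8), noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden[1], hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], n_features),
        )

    def forward(self, x):
        # Corrupt the input with Gaussian noise during training (denoising objective).
        noisy = x + self.noise_std * torch.randn_like(x) if self.training else x
        return self.decoder(self.encoder(noisy))

def fit_sdae(sdae, transitions, epochs=50, lr=1e-3):
    """Fit the SDAE on human-driven transitions so human-like behavior reconstructs well."""
    opt = torch.optim.Adam(sdae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    sdae.train()
    for _ in range(epochs):
        recon = sdae(transitions)
        loss = loss_fn(recon, transitions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sdae

def reward_from_sdae(sdae, state_action):
    """Assumed reward: the closer an agent's (state, action) is to human driving
    (low reconstruction error), the higher the reward."""
    sdae.eval()
    with torch.no_grad():
        recon = sdae(state_action)
        recon_err = ((recon - state_action) ** 2).mean(dim=-1)
    return -recon_err  # negative error as reward, one value per transition

if __name__ == "__main__":
    # Toy usage with random stand-in data; real inputs would come from HighD/SPMD trajectories.
    human_transitions = torch.randn(256, 4)
    sdae = fit_sdae(SDAE(), human_transitions)
    agent_batch = torch.randn(32, 4)  # (state, action) pairs proposed by a DDPG actor
    print(reward_from_sdae(sdae, agent_batch))

In such a setup, the reward network is trained offline from recorded trajectories and then queried inside the DDPG training loop in place of a hand-designed reward; whether the paper trains the SDAE jointly with the policy or beforehand is not stated in the abstract.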