Research on Deep Reinforcement Learning Control Algorithm for Active Suspension Considering Uncertain Time Delay

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 23, No. 18, p. 7827
Main Authors: Wang, Yang; Wang, Cheng; Zhao, Shijie; Guo, Konghui
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.09.2023
ISSN: 1424-8220
Description
Summary: The uncertain delay characteristic of actuators is a critical factor affecting the control effectiveness of an active suspension system, so a control algorithm that accounts for this uncertain delay is essential for stable control performance. This study presents a novel active suspension control algorithm based on deep reinforcement learning (DRL) that specifically addresses uncertain delay. A twin-delayed deep deterministic policy gradient (TD3) algorithm incorporating the system delay is employed to obtain the optimal control policy by iteratively solving the dynamic model of the active suspension system with the delay included. Three operating conditions were designed for simulation to evaluate the control performance: deterministic delay, semi-regular delay, and uncertain delay. The experimental results demonstrate that the proposed algorithm achieves excellent control performance under all of these conditions. Compared to the passive suspension, body vertical acceleration is improved by more than 30%, and the algorithm effectively mitigates body vibration in the low-frequency range. It maintains an improvement of more than 30% in ride comfort even under the most severe operating conditions and at different speeds, demonstrating its potential for practical application.
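The core difficulty described in the abstract is that the force commanded by the controller is applied only after a delay that varies from step to step. The sketch below illustrates one way such an uncertain actuator delay might be modeled inside a quarter-car simulation step; all masses, stiffnesses, the delay bound, and the sinusoidal road input are illustrative assumptions rather than values from the paper, and the `step` helper is hypothetical (a trained TD3 policy would supply `u_cmd`).

```python
import numpy as np
from collections import deque

# Illustrative quarter-car active suspension with an uncertain actuator delay.
# All parameters below are assumptions for demonstration, not values from the paper.
M_S, M_U = 300.0, 40.0        # sprung / unsprung mass [kg]
K_S, K_T = 15000.0, 150000.0  # suspension / tire stiffness [N/m]
C_S = 1200.0                  # suspension damping [N*s/m]
DT = 0.001                    # integration step [s]
MAX_DELAY_STEPS = 20          # assumed upper bound on actuator delay (20 ms here)

def step(state, u_cmd, z_road, delay_buf, rng):
    """Advance the quarter-car model one step, applying a randomly delayed actuator force."""
    zs, zs_dot, zu, zu_dot = state
    # Buffer the commanded force and apply one issued a random number of steps ago.
    delay_buf.append(u_cmd)
    delay = rng.integers(0, MAX_DELAY_STEPS + 1)      # uncertain delay in steps
    u_applied = delay_buf[max(len(delay_buf) - 1 - delay, 0)]
    # Suspension, tire, and actuator forces acting on the two masses.
    f_s = K_S * (zu - zs) + C_S * (zu_dot - zs_dot)   # passive suspension force
    f_t = K_T * (z_road - zu)                         # tire force from road input
    zs_ddot = (f_s + u_applied) / M_S                 # sprung-mass (body) acceleration
    zu_ddot = (f_t - f_s - u_applied) / M_U           # unsprung-mass acceleration
    # Explicit Euler integration of the four states.
    new_state = np.array([
        zs + DT * zs_dot,
        zs_dot + DT * zs_ddot,
        zu + DT * zu_dot,
        zu_dot + DT * zu_ddot,
    ])
    return new_state, zs_ddot  # body acceleration is the main ride-comfort signal

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state = np.zeros(4)
    delay_buf = deque([0.0], maxlen=MAX_DELAY_STEPS + 1)
    for k in range(1000):
        z_road = 0.01 * np.sin(2 * np.pi * 1.5 * k * DT)  # simple sinusoidal road profile
        u_cmd = 0.0                                       # placeholder; a TD3 policy would output this
        state, body_acc = step(state, u_cmd, z_road, delay_buf, rng)
```

In this sketch the delay buffer is what distinguishes the deterministic, semi-regular, and uncertain delay conditions mentioned in the abstract: fixing `delay` to a constant gives the deterministic case, while drawing it from different distributions yields the other two.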
DOI: 10.3390/s23187827