Distributed Deep Reinforcement Learning-Based Energy and Emission Management Strategy for Hybrid Electric Vehicles
| Published in: | IEEE Transactions on Vehicular Technology, Vol. 70, Issue 10, pp. 9922-9934 |
|---|---|
| Main Authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE, 01.10.2021 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects: | |
| ISSN: | 0018-9545, 1939-9359 |
| Online Access: | Full text |
| Abstract: | Advanced algorithms can promote the development of energy management strategies (EMSs), a key technology in hybrid electric vehicles (HEVs). Reinforcement learning (RL) with a distributed structure can significantly improve training efficiency in complex environments, and multi-threaded parallel computing provides a reliable algorithmic basis for improving adaptability. In pursuit of more efficient deep reinforcement learning (DRL) algorithms, this paper first proposes a deep Q-network (DQN)-based energy and emission management strategy (E&EMS). Two distributed DRL algorithms, asynchronous advantage actor-critic (A3C) and distributed proximal policy optimization (DPPO), are then adopted to formulate EMSs. Finally, emission optimization is incorporated and distributed DRL-based E&EMSs are proposed. Taking dynamic programming (DP) as the optimal benchmark, simulation results show that the three DRL-based control strategies achieve near-optimal fuel economy with outstanding computational efficiency, and that, compared with DQN, the two distributed DRL algorithms improve learning efficiency by a factor of four. |
|---|---|
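This record carries no implementation details, so the sketch below is only a rough illustration of the DQN-based E&EMS idea the abstract describes: a Q-network chooses a discretized engine power level, and the reward penalizes fuel use, emissions, and battery state-of-charge (SOC) deviation. The toy HEV plant, its dynamics coefficients, the reward weights, and all hyperparameters here are invented placeholders, not the paper's models.

```python
# Minimal DQN sketch for an HEV energy-and-emission management loop.
# NOTE: ToyHEVEnv and every constant in it are illustrative assumptions,
# not the vehicle model or reward used in the cited paper.
import random
from collections import deque

import torch
import torch.nn as nn

class ToyHEVEnv:
    """Toy HEV plant: state = (SOC, normalized power demand); action = engine power level."""
    def __init__(self):
        self.engine_levels = torch.linspace(0.0, 40.0, 5)  # kW, discretized actions
        self.reset()

    def reset(self):
        self.soc, self.demand, self.t = 0.6, 20.0, 0
        return self._obs()

    def _obs(self):
        return torch.tensor([self.soc, self.demand / 40.0])

    def step(self, action):
        engine_kw = float(self.engine_levels[action])
        battery_kw = self.demand - engine_kw           # battery covers the remainder
        self.soc -= 0.001 * battery_kw                 # crude SOC dynamics
        fuel = 0.08 * engine_kw                        # crude fuel-rate proxy
        emission = 0.02 * engine_kw ** 1.2             # crude emission proxy
        # E&EMS-style reward: penalize fuel, emissions, and SOC deviation.
        reward = -(fuel + 0.5 * emission + 10.0 * (self.soc - 0.6) ** 2)
        self.demand = min(max(self.demand + random.uniform(-5, 5), 0.0), 40.0)
        self.t += 1
        done = self.t >= 200 or not (0.2 < self.soc < 0.9)
        return self._obs(), reward, done

def train_dqn(episodes=50, gamma=0.99, eps=0.1, batch=64):
    env = ToyHEVEnv()
    n_actions = len(env.engine_levels)
    q = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, n_actions))
    target = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, n_actions))
    target.load_state_dict(q.state_dict())
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)
    buffer = deque(maxlen=10_000)                      # experience replay

    for ep in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection over discretized engine power.
            a = random.randrange(n_actions) if random.random() < eps else int(q(s).argmax())
            s2, r, done = env.step(a)
            buffer.append((s, a, r, s2, done))
            s = s2
            if len(buffer) >= batch:
                sb, ab, rb, s2b, db = zip(*random.sample(buffer, batch))
                sb, s2b = torch.stack(sb), torch.stack(s2b)
                ab, rb = torch.tensor(ab), torch.tensor(rb)
                db = torch.tensor(db, dtype=torch.float32)
                # Standard DQN target: r + gamma * max_a' Q_target(s', a').
                with torch.no_grad():
                    y = rb + gamma * (1 - db) * target(s2b).max(1).values
                loss = nn.functional.mse_loss(
                    q(sb).gather(1, ab.unsqueeze(1)).squeeze(1), y)
                opt.zero_grad(); loss.backward(); opt.step()
        target.load_state_dict(q.state_dict())         # periodic hard target update

if __name__ == "__main__":
    train_dqn()
```

The distributed variants the abstract names (A3C and DPPO) differ mainly in running several such interaction loops in parallel worker threads that update a shared global network, which is the mechanism the abstract credits with the roughly fourfold gain in learning efficiency over single-threaded DQN.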
| DOI: | 10.1109/TVT.2021.3107734 |
|---|---|