Energy Management in Electric Vehicles Using Improved Swarm Optimized Deep Reinforcement Learning Algorithm


Detailed Description

Bibliographic Details
Published in: Journal of Nano- and Electronic Physics Vol. 15; No. 3; pp. 03004-1 – 03004-6
Main Authors: Jawale, M. A., Pawar, A. B., Korde, Sachin K., Rakshe, Dhananjay S., William, P., Deshpande, Neeta
Format: Journal Article
Language: English
Published: Sumy, Ukraine: Sumy State University, 2023
ISSN: 2077-6772, 2306-4277
Online Access: Full text
Description
Abstract: Internal combustion engine-based transportation causes severe problems, including rising pollution levels, rising petroleum prices, and the depletion of natural resources. Splitting power effectively between the engine and the battery therefore requires a sophisticated energy management system, since an efficient power-split strategy can improve the fuel economy and performance of Electric Vehicles (EVs). This paper proposes a reinforcement learning method based on Deep Q-Learning (DQL), a novel Improved Swarm-optimized Deep Reinforcement Learning Algorithm (IS-DRLA), for energy management control. The method updates the weights of the neural network using a modified swarm optimization technique. IS-DRLA is then trained and verified on high-precision, realistic driving conditions and compared with the standard approach. Performance indices such as State of Charge (SOC), fuel consumption, and the loss function are analyzed to assess the efficiency of the proposed method. The results show that IS-DRLA trains faster and consumes less fuel overall than the conventional policy, and its fuel economy comes close to the global optimum. The adaptability of the proposed strategy is further demonstrated on a different driving schedule.
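
The abstract names the key mechanism (a Deep Q-network whose weights are updated with a modified swarm optimization technique rather than by gradient descent) but gives no implementation details. Below is a minimal, illustrative sketch of that idea: a standard particle-swarm optimizer minimizing the temporal-difference error of a tiny Q-network on a synthetic replay batch. The state layout, network sizes, reward signal, and PSO coefficients are assumptions for illustration, not the authors' code.

import numpy as np

rng = np.random.default_rng(0)

# Assumed problem sizes: state = [SOC, vehicle speed, power demand],
# actions = discretized engine/battery power-split ratios.
STATE_DIM, HIDDEN, N_ACTIONS = 3, 16, 5
DIM = STATE_DIM * HIDDEN + HIDDEN + HIDDEN * N_ACTIONS + N_ACTIONS

def q_values(theta, s):
    """Evaluate a one-hidden-layer Q-network stored as a flat weight vector."""
    i = STATE_DIM * HIDDEN
    W1 = theta[:i].reshape(STATE_DIM, HIDDEN)
    b1 = theta[i:i + HIDDEN]
    j = i + HIDDEN
    W2 = theta[j:j + HIDDEN * N_ACTIONS].reshape(HIDDEN, N_ACTIONS)
    b2 = theta[j + HIDDEN * N_ACTIONS:]
    return np.tanh(s @ W1 + b1) @ W2 + b2

def td_loss(theta, batch, gamma=0.99):
    """Mean squared temporal-difference error: the fitness the swarm minimizes.
    (A full DQL setup would also use a target network; omitted for brevity.)"""
    s, a, r, s2 = batch
    target = r + gamma * q_values(theta, s2).max(axis=1)
    pred = q_values(theta, s)[np.arange(len(a)), a]
    return float(np.mean((target - pred) ** 2))

# Synthetic replay batch standing in for transitions logged on a driving
# cycle; the reward is a placeholder for negative fuel consumption.
s = rng.random((64, STATE_DIM))
a = rng.integers(0, N_ACTIONS, 64)
r = -rng.random(64)
s2 = rng.random((64, STATE_DIM))
batch = (s, a, r, s2)

# Particle swarm over weight space: each particle is a candidate weight
# vector for the Q-network.
N_PART = 20
pos = rng.normal(0.0, 0.1, size=(N_PART, DIM))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.full(N_PART, np.inf)
gbest, gbest_f = pos[0].copy(), np.inf

for step in range(200):
    for k in range(N_PART):
        f = td_loss(pos[k], batch)
        if f < pbest_f[k]:
            pbest_f[k], pbest[k] = f, pos[k].copy()
        if f < gbest_f:
            gbest_f, gbest = f, pos[k].copy()
    r1, r2 = rng.random((N_PART, 1)), rng.random((N_PART, 1))
    # Standard inertia + cognitive + social velocity update; the paper's
    # "improved" swarm variant presumably modifies this step (the details
    # are not given in the abstract).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel

print(f"best TD loss after swarm training: {gbest_f:.4f}")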
DOI: 10.21272/jnep.15(3).03004