Energy Management in Electric Vehicles Using Improved Swarm Optimized Deep Reinforcement Learning Algorithm

Published in: Journal of Nano- and Electronic Physics, Vol. 15, No. 3, pp. 03004-1 to 03004-6
Main authors: Jawale, M. A.; Pawar, A. B.; Korde, Sachin K.; Rakshe, Dhananjay S.; William, P.; Deshpande, Neeta
Format: Journal Article
Language: English
Publication details: Sumy, Ukraine: Sumy State University, Journal of Nano- and Electronic Physics, 2023
ISSN: 2077-6772, 2306-4277
Description
Summary: Transportation based on the internal combustion engine causes severe problems such as rising pollution levels, rising petroleum prices, and the depletion of natural resources. Dividing power effectively between the engine and the battery requires a sophisticated energy management system, and an efficient power-split strategy can improve the fuel economy and performance of Electric Vehicles (EVs). This paper proposes a reinforcement learning method based on Deep Q-Learning (DQL): a novel Improved Swarm-optimized Deep Reinforcement Learning Algorithm (IS-DRLA) designed for energy management control. To update the weights of the neural network, the method uses a modified version of the swarm optimization technique. The proposed IS-DRLA is then trained and verified under high-precision, realistic driving conditions and compared with the standard approach. Performance indices such as State of Charge (SOC), fuel consumption, and the loss function are analyzed to evaluate the efficiency of the proposed method. According to the findings, IS-DRLA achieves a faster training speed and lower overall fuel consumption than the conventional policy, and its fuel economy comes very close to the global optimum. The adaptability of the proposed strategy is further demonstrated on a different driving schedule.
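
The abstract's key technical idea is a Deep Q-Learning controller whose network weights are updated by a modified swarm optimization technique rather than by gradient descent alone. The record does not give the paper's actual formulation, so the sketch below is a minimal illustration under assumed details: a toy state of (battery SOC, normalized power demand), a discrete set of engine/battery power-split ratios as actions, and a plain particle swarm optimization (PSO) step that minimizes the mean squared temporal-difference (TD) error of a small Q-network. Every name, dimension, and hyper-parameter here is illustrative, not the paper's IS-DRLA.

# Sketch: DQL weight update via particle swarm optimization (assumed details,
# not the paper's IS-DRLA). Requires only numpy.
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_in, n_hidden, n_actions):
    # Small 2-layer Q-network: state -> one Q-value per discrete power-split action.
    return [rng.normal(0.0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0.0, 0.1, (n_hidden, n_actions)), np.zeros(n_actions)]

def mlp_forward(weights, x):
    W1, b1, W2, b2 = weights
    return np.tanh(x @ W1 + b1) @ W2 + b2

def flatten(weights):
    return np.concatenate([w.ravel() for w in weights])

def unflatten(vec, template):
    out, i = [], 0
    for w in template:
        out.append(vec[i:i + w.size].reshape(w.shape))
        i += w.size
    return out

def td_loss(vec, template, batch, gamma=0.99):
    # Mean squared TD error of the Q-network encoded by the flat vector `vec`.
    s, a, r, s2 = batch
    w = unflatten(vec, template)
    q = mlp_forward(w, s)[np.arange(len(a)), a]
    target = r + gamma * mlp_forward(w, s2).max(axis=1)
    return float(np.mean((q - target) ** 2))

def pso_step(particles, velocities, pbest, pbest_loss, gbest, loss_fn,
             inertia=0.7, c1=1.5, c2=1.5):
    # One textbook PSO step over the flattened network weights.
    for i in range(len(particles)):
        r1, r2 = rng.random(particles[i].size), rng.random(particles[i].size)
        velocities[i] = (inertia * velocities[i]
                         + c1 * r1 * (pbest[i] - particles[i])
                         + c2 * r2 * (gbest - particles[i]))
        particles[i] = particles[i] + velocities[i]
        loss = loss_fn(particles[i])
        if loss < pbest_loss[i]:
            pbest[i], pbest_loss[i] = particles[i].copy(), loss
    return pbest[int(np.argmin(pbest_loss))].copy()

# Hypothetical replay batch: state = (SOC, normalized power demand),
# action = index into 5 discrete engine/battery split ratios,
# reward = penalty for mismatching the demanded split (purely synthetic data).
n_in, n_hidden, n_actions, n_particles = 2, 16, 5, 10
template = init_weights(n_in, n_hidden, n_actions)
s = rng.random((64, n_in))
s2 = np.clip(s + rng.normal(0.0, 0.05, s.shape), 0.0, 1.0)
a = rng.integers(0, n_actions, 64)
r = -np.abs(s[:, 1] - a / (n_actions - 1))
loss_fn = lambda v: td_loss(v, template, (s, a, r, s2))

particles = [flatten(init_weights(n_in, n_hidden, n_actions)) for _ in range(n_particles)]
velocities = [np.zeros_like(p) for p in particles]
pbest = [p.copy() for p in particles]
pbest_loss = [loss_fn(p) for p in particles]
gbest = pbest[int(np.argmin(pbest_loss))].copy()
for _ in range(20):
    gbest = pso_step(particles, velocities, pbest, pbest_loss, gbest, loss_fn)
print("TD loss of swarm-best weights:", loss_fn(gbest))

Because the swarm searches the flattened weight vector directly, no backpropagation is needed; the paper's "improved" swarm variant would presumably replace the textbook PSO velocity update used above.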
DOI: 10.21272/jnep.15(3).03004