Knowledge Assisted Deep Reinforcement Learning for Electric Vehicle Charging Control

Deep reinforcement learning (DRL) is a promising data-driven approach to solve the electric vehicle (EV) charging problem. However, DRL-based charging strategies may not always meet the requirements of users. In this paper, a knowledge-assisted algorithm combining twin delayed deep deterministic p...


Bibliographic Details
Published in: 2022 IEEE 6th Conference on Energy Internet and Energy System Integration (EI2), pp. 1882-1887
Main Authors: Zai, Rui, Guo, Ye, Liu, Qiong, Sun, Hongbin, Wu, Qiuwei, Xiao, Li
Format: Conference Proceeding
Language: English
Published: IEEE 11.11.2022
Abstract Deep reinforcement learning (DRL) is a promising data-driven approach to solve the electric vehicle (EV) charging problem. However, DRL-based charging strategies may not always meet the requirements of users. In this paper, a knowledge-assisted algorithm combining the twin delayed deep deterministic policy gradient (TD3) algorithm and imitation learning is proposed to control the charging process of EVs. The purpose is to minimize the cost while charging to the desired value. With the assistance of knowledge, out-of-limit actions are corrected, so only a reward function constraining the cost needs to be set, which accelerates the convergence of the algorithm. To give the actor network the capability of charging to the desired value, imitation learning is used to correct the actor network. The simulation results demonstrate the superiority of the knowledge-assisted TD3 algorithm with imitation learning.
AbstractList Deep reinforcement learning (DRL) is a promising data-driven approach to solve the electric vehicle (EV) charging problem. However, DRL-based charging strategies may not always meet the requirements of users. In this paper, a knowledge-assisted algorithm combining the twin delayed deep deterministic policy gradient (TD3) algorithm and imitation learning is proposed to control the charging process of EVs. The purpose is to minimize the cost while charging to the desired value. With the assistance of knowledge, out-of-limit actions are corrected, so only a reward function constraining the cost needs to be set, which accelerates the convergence of the algorithm. To give the actor network the capability of charging to the desired value, imitation learning is used to correct the actor network. The simulation results demonstrate the superiority of the knowledge-assisted TD3 algorithm with imitation learning.
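The abstract above describes two mechanisms: a knowledge layer that corrects out-of-limit charging actions before they are applied, and an imitation-learning term that pulls the actor network toward charging to the desired value. The Python sketch below illustrates only the first idea; the function name knowledge_correct, the charger limits, and the specific clipping rules are illustrative assumptions, since the paper's actual knowledge rules are not reproduced in this record.

```python
import numpy as np

# Hypothetical EV and charger parameters for illustration only; the paper's
# actual simulation settings are not given in this record.
P_MAX = 6.6        # maximum charging power (kW)
DT = 1.0           # length of one control interval (h)
CAPACITY = 40.0    # battery capacity (kWh)
ETA = 0.95         # charging efficiency

def knowledge_correct(action, soc, soc_target, steps_left):
    """Project a raw actor action onto a feasible charging power.

    Sketch of a knowledge-based correction: respect the charger limit,
    do not overshoot the target state of charge, and force charging when
    the remaining time is only just enough to reach the target.
    """
    # Map the raw actor output in [-1, 1] to a charging power in [0, P_MAX].
    power = np.clip((action + 1.0) / 2.0, 0.0, 1.0) * P_MAX

    # Energy still needed at the charger to reach the desired SOC (kWh).
    energy_needed = max(soc_target - soc, 0.0) * CAPACITY / ETA

    # Upper bound: do not charge past the desired value within this step.
    power = min(power, energy_needed / DT)

    # Lower bound: if the remaining intervals can only just deliver the
    # needed energy, charging must start now regardless of the actor output.
    min_power = max(energy_needed - (steps_left - 1) * P_MAX * DT, 0.0) / DT
    power = max(power, min(min_power, P_MAX))
    return power

# Example: 10 intervals before departure, SOC 30%, target SOC 90%.
print(knowledge_correct(action=-0.2, soc=0.3, soc_target=0.9, steps_left=10))
```

In a TD3 training loop, such a projection would sit between the actor output and the environment step, and the imitation-learning correction mentioned in the abstract would then add a supervised term that regresses the actor output toward the corrected, feasible actions. Both details here are sketches under the stated assumptions, not the authors' implementation.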
Author Liu, Qiong
Sun, Hongbin
Guo, Ye
Xiao, Li
Zai, Rui
Wu, Qiuwei
Author_xml – sequence: 1
  givenname: Rui
  surname: Zai
  fullname: Zai, Rui
  email: zair20@mails.tsinghua.edu.cn
  organization: Tsinghua University,Tsinghua-Berkeley Shenzhen Institute,Shenzhen,China
– sequence: 2
  givenname: Ye
  surname: Guo
  fullname: Guo, Ye
  email: guo-ye@sz.tsinghua.edu.cn
  organization: Tsinghua University,Tsinghua-Berkeley Shenzhen Institute,Shenzhen,China
– sequence: 3
  givenname: Qiong
  surname: Liu
  fullname: Liu, Qiong
  email: liuqiong_yl@outlook.com
  organization: Tsinghua University,Tsinghua-Berkeley Shenzhen Institute,Shenzhen,China
– sequence: 4
  givenname: Hongbin
  surname: Sun
  fullname: Sun, Hongbin
  email: shb@sz.tsinghua.edu.cn
  organization: Tsinghua University,Department of Electrical Engineering,Beijing,China
– sequence: 5
  givenname: Qiuwei
  surname: Wu
  fullname: Wu, Qiuwei
  email: qiuwu@sz.tsinghua.edu.cn
  organization: Tsinghua University,Tsinghua-Berkeley Shenzhen Institute,Shenzhen,China
– sequence: 6
  givenname: Li
  surname: Xiao
  fullname: Xiao, Li
  email: xiaoli@sz.tsinghua.edu.cn
  organization: Tsinghua University,Tsinghua-Berkeley Shenzhen Institute,Shenzhen,China
BookMark eNo1j9FKwzAUhiPohc69gUheoDMnp0may1GrGxYEmd6OJj3tAl060oL49k7Uqw_-D374bthlHCMxdg9iBSDsQ7WVSksNKymkXIEA0NLABVtaYwtUAnMDyl6z3UscPwdqe-LraQrTTC1_JDrxNwqxG5OnI8WZ19SkGGLPzxOvBvJzCp5_0CH4gXh5aFL_Y8sxzmkcbtlV1wwTLf-4YO9P1a7cZPXr87Zc11kAsHOGiMblmOeorPeFt0WuXCeUU_qcIMmgQydylECttrLTwhrnyRDothXU4YLd_f4GItqfUjg26Wv_H4vfnmJOPA
ContentType Conference Proceeding
DBID 6IE
6IL
CBEJK
RIE
RIL
DOI 10.1109/EI256261.2022.10116271
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Xplore POP ALL
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP All) 1998-Present
DatabaseTitleList
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
EISBN 9798350347159
EndPage 1887
ExternalDocumentID 10116271
Genre orig-research
GroupedDBID 6IE
6IL
CBEJK
RIE
RIL
ID FETCH-LOGICAL-i119t-3337b4344359cc8c9845bf05b561092e73b3b04321ed692f6097bce7e16dd0ef3
IEDL.DBID RIE
IngestDate Thu Jan 18 11:14:52 EST 2024
IsPeerReviewed false
IsScholarly false
Language English
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-i119t-3337b4344359cc8c9845bf05b561092e73b3b04321ed692f6097bce7e16dd0ef3
PageCount 6
ParticipantIDs ieee_primary_10116271
PublicationCentury 2000
PublicationDate 2022-Nov.-11
PublicationDateYYYYMMDD 2022-11-11
PublicationDate_xml – month: 11
  year: 2022
  text: 2022-Nov.-11
  day: 11
PublicationDecade 2020
PublicationTitle 2022 IEEE 6th Conference on Energy Internet and Energy System Integration (EI2)
PublicationTitleAbbrev EI2
PublicationYear 2022
Publisher IEEE
Publisher_xml – name: IEEE
Score 1.8128506
Snippet Deep reinforcement learning (DRL) is a promising data-driven approach to solve the electric vehicle (EV) charging problem. However, DRL-based charging...
SourceID ieee
SourceType Publisher
StartPage 1882
SubjectTerms Charging control
Costs
Deep learning
Electric vehicle charging
imitation learning
Process control
Reinforcement learning
Simulation
System integration
twin delayed deep deterministic policy gradient algorithm (TD3)
Title Knowledge Assisted Deep Reinforcement Learning for Electric Vehicle Charging Control
URI https://ieeexplore.ieee.org/document/10116271
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
link http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwlV1NSwMxEA1aPHhSseI3OXhN3Wx2k825tihCKVKlt7KZTLSXttTW399JulU8ePAWQiDwJmE-kjePsTsHKKX1hQBKkkVBVha1KbUIXgYgnxKUDklswgwG1Xhshw1ZPXFhEDF9PsNOHKa3fD-HdSyV0Q2XUueRMb5vjNmStRrWr8zsfe-J_DdlBJT15Xlnt_iXbEryGv2jf-53zNo__Ds-_PYsJ2wPZ6ds9Lwrf3HCNFrH8wfEBX_B1P0UUqGPNw1T3zlN8V4SuZkCf8OPeEB4fF2PskS8u_2i3mav_d6o-ygaTQQxJUxXQillXKEKinIsQAW2KkoXstLFOMjmaJRTLrbZk-i1zYPOrCF7GJTa-wyDOmOt2XyG54zX0gcnSyhQU9gW6joYXwFWKqstaOMuWDtCMlls215Mdmhc_jF_xQ4j8JGoJ-U1a62Wa7xhB_C1mn4ub5OxNk66l1E
linkProvider IEEE
linkToHtml http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwlV3LTgMhFCWmmuhKjTW-ZeGWOgwMDOvapk1r05jRdNcMcNFu2qYPv19gphoXLtwRQkJyD-Q-4NyD0IM2QKmynBifJBPuUSalzARxljrjfYpjwkWxCTka5ZOJGtdk9ciFAYD4-QxaYRjf8u3CbEOpzN9wSkUaGOP7GecprehaNe-XJuqx0_ce3OcEPu9L09Zu-S_hlOg3usf_3PEENX8YeHj87VtO0R7Mz1Ax2BXAsLdqwMfiJ4AlfoHY_9TEUh-uW6a-Yz-FO1HmZmbwG3yEI4LD-3oQJsLt6pN6E712O0W7R2pVBDLzVt0QxpjUnHEf5yhjcqNynmmXZDpEQioFyTTTodEeBStU6kSipEdEAhXWJuDYOWrMF3O4QLik1mmaGQ7CB26uLJ20uYGcJaUyQupL1AwmmS6rxhfTnTWu_pi_R4e94nk4HfZHg2t0FEAItD1Kb1Bjs9rCLTown5vZenUXgfsCXXOamA
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&rft.genre=proceeding&rft.title=2022+IEEE+6th+Conference+on+Energy+Internet+and+Energy+System+Integration+%28EI2%29&rft.atitle=Knowledge+Assisted+Deep+Reinforcement+Learning+for+Electric+Vehicle+Charging+Control&rft.au=Zai%2C+Rui&rft.au=Guo%2C+Ye&rft.au=Liu%2C+Qiong&rft.au=Sun%2C+Hongbin&rft.date=2022-11-11&rft.pub=IEEE&rft.spage=1882&rft.epage=1887&rft_id=info:doi/10.1109%2FEI256261.2022.10116271&rft.externalDocID=10116271