Virtual power plant containing electric vehicles scheduling strategies based on deep reinforcement learning
| Published in: | Electric power systems research, Vol. 205, Art. no. 107714 |
|---|---|
| Main Authors: | Wang, Jianing; Guo, Chunlin; Yu, Changshu; Liang, Yanchang |
| Format: | Journal Article |
| Language: | English |
| Published: | Amsterdam: Elsevier B.V / Elsevier Science Ltd, 01.04.2022 |
| Subjects: | Virtual power plant; Electric vehicle; Deep reinforcement learning; Stackelberg game; Stochastic strategy; Real-time dispatch |
| ISSN: | 0378-7796 (print); 1873-2046 (online) |
| Abstract | Highlights: • The VPP agent and the EV charging station agent play a game to obtain the electricity price. • The VPP tends to use a mixed strategy, while the EVs tend to use pure strategies. • A Stackelberg game is used to prevent the VPP from obtaining excess profit from its EV members.
Virtual power plants (VPPs), which aggregate customer-side flexibility resources, provide an effective way for customers to participate in the electricity market and offer a variety of flexible technologies and services to that market. In particular, VPPs can provide services to electric vehicle (EV) charging stations. In this paper, we construct a deep reinforcement learning (DRL) based Stackelberg game model for a VPP with EV charging stations. Considering the interests of both sides of the game, the soft actor-critic (SAC) algorithm is used for the VPP agent and the twin delayed deep deterministic policy gradient (TD3) algorithm is used for the EV charging station agent. By alternately training the network parameters of the two agents, the strategy and solution at the game equilibrium are calculated. Case study results demonstrate that the VPP agent can learn a strategy for selling electricity to EVs, optimize the scheduling of distributed energy resources (DERs), and form a bidding strategy for participating in the electricity market. Meanwhile, the EV aggregation agent can learn scheduling strategies for charging and discharging EVs. When the EV aggregator uses a deterministic strategy and the VPP uses a stochastic strategy, energy complementarity is achieved and the overall operating economy is improved. |
|---|---|
| ArticleNumber | 107714 |
| Author | Wang, Jianing; Yu, Changshu; Liang, Yanchang; Guo, Chunlin |
| Author details | 1. Wang, Jianing; 2. Guo, Chunlin (gcl@ncepu.edu.cn); 3. Yu, Changshu; 4. Liang, Yanchang |
| ContentType | Journal Article |
| Copyright | 2021; Copyright Elsevier Science Ltd., Apr 2022 |
| DOI | 10.1016/j.epsr.2021.107714 |
| Discipline | Engineering |
| EISSN | 1873-2046 |
| ISSN | 0378-7796 |
| Keywords | Real-time dispatch; Deep reinforcement learning; Electric vehicle; Stackelberg game; Stochastic strategy; Virtual power plant |
| Language | English |
| PublicationDate | April 2022 |
| PublicationPlace | Amsterdam |
| PublicationTitle | Electric power systems research |
| PublicationYear | 2022 |
| Publisher | Elsevier B.V; Elsevier Science Ltd |
| SubjectTerms | Algorithms; Customers; Deep learning; Deep reinforcement learning; Distributed generation; Electric vehicle; Electric vehicle charging; Electric vehicle charging stations; Electric vehicles; Electricity distribution; Energy sources; Game theory; Machine learning; Real-time dispatch; Resource scheduling; Scheduling; Stackelberg game; Stochastic strategy; Strategy; Virtual power plant; Virtual power plants |
| Title | Virtual power plant containing electric vehicles scheduling strategies based on deep reinforcement learning |
| URI | https://dx.doi.org/10.1016/j.epsr.2021.107714 https://www.proquest.com/docview/2673620496 |
| Volume | 205 |
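The abstract describes a leader–follower (Stackelberg) interaction in which the VPP sets a selling price and the EV charging station responds, solved in the paper by alternately training SAC and TD3 agents. As a minimal, hypothetical sketch of that Stackelberg structure — not the authors' DRL implementation — the following Python toy model alternates a VPP price update with an EV-station best response; the quadratic station utility, the cost, and all numeric parameters below are assumptions chosen only for illustration.

```python
# Toy Stackelberg pricing game: a "VPP" leader sets a retail price, an "EV station"
# follower chooses how much energy to buy. All functional forms and numbers are
# assumed placeholders; the paper instead learns both strategies with SAC/TD3.

def follower_best_response(price, a=1.0, b=0.02):
    """EV-station demand maximizing the assumed utility a*q - 0.5*b*q**2 - price*q."""
    return max(0.0, (a - price) / b)

def leader_profit(price, cost=0.3):
    """VPP profit (price - cost) * q, anticipating the follower's best response."""
    return (price - cost) * follower_best_response(price)

def solve_stackelberg(p_min=0.3, p_max=1.0, iters=60):
    """Alternate leader price updates and follower responses until they settle."""
    price = 0.5 * (p_min + p_max)            # initial leader strategy
    step = 0.05                              # leader's local search step
    for _ in range(iters):
        q = follower_best_response(price)    # follower update: reply to current price
        # Leader update: try small price moves and keep the most profitable feasible one.
        candidates = [max(p_min, min(p_max, price + d)) for d in (-step, 0.0, step)]
        price = max(candidates, key=leader_profit)
        step *= 0.95                         # shrink the step so the iteration converges
    return price, follower_best_response(price)

if __name__ == "__main__":
    p, q = solve_stackelberg()
    print(f"equilibrium price ~ {p:.3f}, EV demand ~ {q:.1f}, VPP profit ~ {leader_profit(p):.2f}")
```

In this simplified setting the follower's reply is computed in closed form, whereas in the paper both the price-setting and the charging/discharging responses are learned policies; the alternating loop above only mirrors the alternating-training idea at the level of a deterministic toy model.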