Reinforcement Learning for Dynamic Optimization of Eco-Driving in Smart Healthcare Transportation Networks

Detailed Bibliography
Published in: IEEE Transactions on Intelligent Transportation Systems, pp. 1-12
Main Authors: Cai, Wang; Anwlnkom, Tomley; Zhang, Lingling; Basheer, Shakila; Yang, Jing
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 1524-9050, 1558-0016
Online Access: Get full text
Abstract Smart transportation networks face increasing demands for efficiency and sustainability. This study presents a reinforcement learning approach that optimizes eco-driving strategies for connected and automated vehicles (CAVs) in urban environments, with a particular application to healthcare logistics. Specifically, we propose a twin delayed deep deterministic policy gradient (TD3) algorithm to dynamically optimize CAV trajectories at signalized intersections. The proposed healthcare eco-driving trajectory optimization (TD3-HETO) model incorporates real-time traffic conditions, signal timing information, and healthcare urgency levels to generate optimal acceleration profiles. The reward function is designed to balance energy efficiency, traffic flow, safety, comfort, and healthcare delivery timeliness. Additionally, the model introduces a dynamic exploration strategy that adapts to healthcare task urgency, enabling an efficient trade-off between energy consumption and delivery timeliness. Experimental results show that TD3-HETO reduces energy consumption by up to 28.7% compared with baseline methods while improving average speeds by 3.7% for urgent healthcare deliveries. The model achieves superior safety performance, with 98.7% of time steps showing zero conflicts, compared with 95.3% for the best baseline. TD3-HETO also demonstrates strong adaptability to varying traffic demands and signal timings, maintaining consistent performance even at high traffic volumes. This research contributes to the development of intelligent transportation systems that enhance environmental sustainability and healthcare accessibility in smart cities, potentially improving patient outcomes and operational efficiency in urban healthcare logistics.
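The reward design and urgency-adaptive exploration described in the abstract lend themselves to a simple illustration. The Python sketch below shows one way such a weighted multi-objective reward and an urgency-scaled TD3 exploration noise could look; the weights, term definitions, function names (eco_driving_reward, exploration_noise), and the linear urgency scaling are illustrative assumptions, not the published TD3-HETO formulation.

```python
import numpy as np

# Illustrative sketch only: a weighted multi-objective reward of the kind the
# abstract describes (energy, traffic flow, safety, comfort, delivery
# timeliness), plus urgency-adaptive Gaussian exploration noise for TD3.
# All weights and scalings below are assumptions, not the paper's values.

def eco_driving_reward(energy_kwh, speed_mps, target_speed_mps,
                       conflict, jerk_mps3, delay_s, urgency,
                       weights=(1.0, 0.5, 5.0, 0.2, 1.0)):
    """Combine the five objectives into one scalar reward.

    urgency in [0, 1] scales the timeliness penalty, so urgent healthcare
    deliveries weight delay more heavily relative to energy savings.
    """
    w_energy, w_flow, w_safety, w_comfort, w_time = weights
    r_energy = -w_energy * energy_kwh                      # penalize consumption
    r_flow = -w_flow * abs(speed_mps - target_speed_mps)   # track the desired speed
    r_safety = -w_safety * float(conflict)                 # heavy penalty on conflicts
    r_comfort = -w_comfort * abs(jerk_mps3)                # discourage harsh jerk
    r_time = -w_time * (1.0 + urgency) * delay_s           # urgency-weighted delay
    return r_energy + r_flow + r_safety + r_comfort + r_time


def exploration_noise(urgency, base_sigma=0.2, min_sigma=0.05):
    """Urgency-adaptive exploration: add less noise to the TD3 actor's
    deterministic action when the delivery is urgent, so the agent behaves
    more conservatively on time-critical trips."""
    sigma = base_sigma - (base_sigma - min_sigma) * urgency
    return float(np.random.normal(0.0, sigma))
```

In a TD3 loop, this noise would be added to the actor's deterministic acceleration output at each step before clipping to the vehicle's acceleration limits.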
Author Cai, Wang
Yang, Jing
Basheer, Shakila
Anwlnkom, Tomley
Zhang, Lingling
Author_xml – sequence: 1
  givenname: Wang
  surname: Cai
  fullname: Cai, Wang
  email: caiwang@jzmu.edu.cn
  organization: Department of Obstetrics and Gynecology, The First Affiliated Hospital of Jinzhou Medical University, Jinzhou, China
– sequence: 2
  givenname: Tomley
  surname: Anwlnkom
  fullname: Anwlnkom, Tomley
  email: smart_6565@protonmail.com
  organization: College of Computer Science, Wichita State University, Wichita, KS, USA
– sequence: 3
  givenname: Lingling
  orcidid: 0009-0008-4425-7613
  surname: Zhang
  fullname: Zhang, Lingling
  email: zhangling_beihua@163.com
  organization: College of Computer Science and Technology, Beihua University, Jilin, China
– sequence: 4
  givenname: Shakila
  orcidid: 0000-0001-9032-9560
  surname: Basheer
  fullname: Basheer, Shakila
  email: sbbasheer@pnu.edu.sa
  organization: College of Computer and Information Systems, Princess Nourah bint Abdulrahman University, P.O. Box 844428, Riyadh, Saudi Arabia
– sequence: 5
  givenname: Jing
  orcidid: 0000-0002-0438-6006
  surname: Yang
  fullname: Yang, Jing
  email: yangjing01@jzmu.edu.cn
  organization: Department of Pathology, The First Affiliated Hospital of Jinzhou Medical University, Jinzhou, China
CODEN ITISFG
ContentType Journal Article
DOI 10.1109/TITS.2025.3561034
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Xplore
CrossRef
DatabaseTitle CrossRef
DatabaseTitleList
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Xplore
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1558-0016
EndPage 12
ExternalDocumentID 10_1109_TITS_2025_3561034
10994247
Genre orig-research
GrantInformation_xml – fundername: Princess Nourah bint Abdulrahman University Researchers Supporting Project
  grantid: PNURSP2025R195
IEDL.DBID RIE
ISICitedReferencesCount 0
ISSN 1524-9050
IngestDate Sat Nov 29 07:55:49 EST 2025
Wed Aug 27 01:53:14 EDT 2025
IsPeerReviewed true
IsScholarly true
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0002-0438-6006
0000-0001-9032-9560
0009-0008-4425-7613
PageCount 12
ParticipantIDs ieee_primary_10994247
crossref_primary_10_1109_TITS_2025_3561034
PublicationCentury 2000
PublicationDate 2025-00-00
PublicationDateYYYYMMDD 2025-01-01
PublicationDate_xml – year: 2025
  text: 2025-00-00
PublicationDecade 2020
PublicationTitle IEEE transactions on intelligent transportation systems
PublicationTitleAbbrev TITS
PublicationYear 2025
Publisher IEEE
Publisher_xml – name: IEEE
SSID ssj0014511
SourceID crossref
ieee
SourceType Index Database
Publisher
StartPage 1
SubjectTerms Adaptation models
dynamic optimization
eco-driving
Energy efficiency
Heuristic algorithms
Logistics
Medical services
Optimization
Reinforcement learning
smart healthcare transportation networks
Transportation
twin delayed deep deterministic policy gradient algorithm
Urban areas
Vehicle dynamics
Title Reinforcement Learning for Dynamic Optimization of Eco-Driving in Smart Healthcare Transportation Networks
URI https://ieeexplore.ieee.org/document/10994247
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Xplore
  customDbUrl:
  eissn: 1558-0016
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014511
  issn: 1524-9050
  databaseCode: RIE
  dateStart: 20000101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE