Automated Speed and Lane Change Decision Making using Deep Reinforcement Learning

This paper introduces a method, based on deep reinforcement learning, for automatically generating a general purpose decision making function. A Deep Q-Network agent was trained in a simulated environment to handle speed and lane change decisions for a truck-trailer combination. In a highway driving case, it is shown that the method produced an agent that matched or surpassed the performance of a commonly used reference model. To demonstrate the generality of the method, the exact same algorithm was also tested by training it for an overtaking case on a road with oncoming traffic. Furthermore, a novel way of applying a convolutional neural network to high level input that represents interchangeable objects is also introduced.
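
As a concrete illustration of the approach summarized above, the sketch below shows a minimal Deep Q-Network over a discrete set of combined speed and lane-change actions with epsilon-greedy action selection. The state layout, action names, and layer sizes are illustrative assumptions for this sketch, not the configuration reported in the paper.

```python
# Minimal DQN sketch for combined speed / lane-change decisions.
# The state layout, action set, and network sizes are illustrative
# assumptions; they are not the exact configuration used in the paper.
import random
import torch
import torch.nn as nn

ACTIONS = [
    "keep_speed", "increase_speed", "decrease_speed",
    "change_lane_left", "change_lane_right",
]

class QNetwork(nn.Module):
    """Maps an observation of the ego truck-trailer and surrounding
    traffic to one Q-value per high-level action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy action selection over the discrete action set."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q_net(state).argmax().item())

# Example: pick an action for a dummy 20-dimensional observation.
q_net = QNetwork(state_dim=20, n_actions=len(ACTIONS))
action = select_action(q_net, torch.zeros(20), epsilon=0.1)
print(ACTIONS[action])
```

During training, such an agent would interact with the simulated traffic environment and update the Q-network from replayed transitions; the sketch only covers the network and the action-selection step.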

Detailed description

Bibliographic details
Published in: Proceedings (IEEE Conference on Intelligent Transportation Systems), vol. 2018-November, pp. 2148-2155
Main authors: Hoel, Carl-Johan; Wolff, Krister; Laine, Leo
Format: Conference paper
Language: English
Published: IEEE, 01.11.2018
ISBN: 9781728103211, 1728103215
ISSN: 2153-0009, 2153-0017
Online access: Full text
Abstract This paper introduces a method, based on deep reinforcement learning, for automatically generating a general purpose decision making function. A Deep Q-Network agent was trained in a simulated environment to handle speed and lane change decisions for a truck-trailer combination. In a highway driving case, it is shown that the method produced an agent that matched or surpassed the performance of a commonly used reference model. To demonstrate the generality of the method, the exact same algorithm was also tested by training it for an overtaking case on a road with oncoming traffic. Furthermore, a novel way of applying a convolutional neural network to high level input that represents interchangeable objects is also introduced. Preprint: https://arxiv.org/abs/1803.10056
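
The "novel way of applying a convolutional neural network to high level input that represents interchangeable objects" can be read as an order-insensitive encoding of the surrounding vehicles. The sketch below is an assumption-laden illustration of that idea, not the paper's architecture: the same 1x1 convolution is applied to every vehicle's feature vector and the result is max-pooled across vehicles, so shuffling the vehicle list leaves the encoding unchanged.

```python
# Sketch of a permutation-insensitive encoder for a set of surrounding
# vehicles. The feature sizes and layer widths are assumptions chosen
# for illustration; only the structure (shared conv + pooling over
# objects) reflects the idea described in the abstract.
import torch
import torch.nn as nn

class ObjectEncoder(nn.Module):
    def __init__(self, features_per_vehicle: int, hidden: int = 32):
        super().__init__()
        # Conv1d with kernel_size=1 applies the same weights to every
        # vehicle column, so identical vehicles are treated identically
        # regardless of their position in the input.
        self.shared = nn.Sequential(
            nn.Conv1d(features_per_vehicle, hidden, kernel_size=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=1), nn.ReLU(),
        )

    def forward(self, vehicles: torch.Tensor) -> torch.Tensor:
        # vehicles: (batch, features_per_vehicle, n_vehicles)
        encoded = self.shared(vehicles)
        # Max-pooling over the vehicle dimension removes any dependence
        # on the ordering of the surrounding vehicles.
        return encoded.max(dim=2).values

# Shuffling the vehicle order leaves the encoding unchanged.
enc = ObjectEncoder(features_per_vehicle=4)
x = torch.randn(1, 4, 6)                      # 6 surrounding vehicles
perm = torch.randperm(6)
assert torch.allclose(enc(x), enc(x[:, :, perm]))
print("order-independent encoding:", enc(x).shape)
```
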
Author Wolff, Krister
Laine, Leo
Hoel, Carl-Johan
Author_xml – sequence: 1
  givenname: Carl-Johan
  surname: Hoel
  fullname: Hoel, Carl-Johan
  email: carl-johan.hoel@chalmers.se
  organization: Chalmers University of Technology, Göteborg, 412 96, Sweden
– sequence: 2
  givenname: Krister
  surname: Wolff
  fullname: Wolff, Krister
  email: krister.wolff@chalmers.se
  organization: Chalmers University of Technology, Göteborg, 412 96, Sweden
– sequence: 3
  givenname: Leo
  surname: Laine
  fullname: Laine, Leo
  email: leo.laine@chalmers.se
  organization: Chalmers University of Technology, Göteborg, 412 96, Sweden
BackLink https://research.chalmers.se/publication/508724 (view record in the Swedish Publication Index, Chalmers tekniska högskola)
ContentType Conference Proceeding
DOI 10.1109/ITSC.2018.8569568
Discipline Engineering
EISBN 9781728103235
1728103231
EISSN 2153-0017
EndPage 2155
ExternalDocumentID oai_research_chalmers_se_8e5c2c5e_6907_4fdb_a367_292b43716f06
8569568
Genre orig-research
ISBN 9781728103211
1728103215
ISICitedReferencesCount 168
ISSN 2153-0009
2153-0017
IsPeerReviewed false
IsScholarly false
Language English
PageCount 8
PublicationCentury 2000
PublicationDate 2018-Nov.
2018
PublicationDateYYYYMMDD 2018-11-01
2018-01-01
PublicationDate_xml – month: 11
  year: 2018
  text: 2018-Nov.
PublicationDecade 2010
PublicationTitle Proceedings (IEEE Conference on Intelligent Transportation Systems)
PublicationTitleAbbrev ITSC
PublicationYear 2018
Publisher IEEE
Publisher_xml – name: IEEE
StartPage 2148
SubjectTerms Accelerated aging
Artificial Intelligence (cs.AI)
Decision making
Machine Learning (cs.LG)
Markov processes
Neurons
Roads
Robotics (cs.RO)
Title Automated Speed and Lane Change Decision Making using Deep Reinforcement Learning
URI https://ieeexplore.ieee.org/document/8569568
https://research.chalmers.se/publication/508724
Volume 2018-November
WOSCitedRecordID WOS:000457881302024