Model-free Deep Reinforcement Learning for Urban Autonomous Driving

Bibliographic Details
Published in: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 2765 - 2771
Main Authors: Chen, Jianyu; Yuan, Bodi; Tomizuka, Masayoshi (all: University of California, Department of Mechanical Engineering, Berkeley, CA 94720, USA)
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2019
DOI: 10.1109/ITSC.2019.8917306
EISBN: 9781538670248, 1538670240
Subjects: Autonomous vehicles; Decision making; Learning (artificial intelligence); Machine learning; Roads; Routing; Task analysis
Online Access: https://ieeexplore.ieee.org/document/8917306
Abstract: Urban autonomous driving decision making is challenging due to complex road geometry and multi-agent interactions. Current decision-making methods mostly rely on manually designed driving policies, which can result in suboptimal solutions and are expensive to develop, generalize, and maintain at scale. With reinforcement learning (RL), on the other hand, a policy can be learned and improved automatically without any manual design. However, current RL methods generally do not work well in complex urban scenarios. In this paper, we propose a framework that enables model-free deep reinforcement learning in challenging urban autonomous driving scenarios. We design a specific input representation and use visual encoding to capture the low-dimensional latent states. Several state-of-the-art model-free deep RL algorithms are implemented in our framework, together with several tricks to improve their performance. We evaluate our method on a challenging roundabout task with dense surrounding vehicles in a high-definition driving simulator. The results show that our method solves the task well and significantly outperforms the baseline.
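The abstract describes a visual encoder that compresses the driving observation into a low-dimensional latent state, which then feeds a model-free deep RL policy. The sketch below is an illustration only, not the authors' implementation: it shows one plausible way to wire such an encoder to a continuous-control policy head in PyTorch. The 64x64 bird's-eye-view input, the latent size, the layer widths, and the two-dimensional action (steering, throttle) are all assumptions introduced here for the example.

```python
# Illustrative sketch only (not the paper's code): a convolutional encoder that
# maps an image observation to a low-dimensional latent state, plus a policy
# head producing continuous controls, as one might train with a model-free
# deep RL algorithm. All sizes below are assumptions made for this example.
import torch
import torch.nn as nn


class VisualEncoder(nn.Module):
    """Maps an observation of shape (B, 3, 64, 64) to a latent vector (B, latent_dim)."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),    # 64 -> 31
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),   # 31 -> 14
            nn.Conv2d(64, 128, kernel_size=4, stride=2), nn.ReLU(),  # 14 -> 6
            nn.Flatten(),
        )
        self.fc = nn.Linear(128 * 6 * 6, latent_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(obs))


class Policy(nn.Module):
    """Policy head on top of the encoder, emitting actions in [-1, 1] (e.g. steer, throttle)."""

    def __init__(self, latent_dim: int = 64, action_dim: int = 2):
        super().__init__()
        self.encoder = VisualEncoder(latent_dim)
        self.head = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(obs))


if __name__ == "__main__":
    policy = Policy()
    dummy_obs = torch.zeros(1, 3, 64, 64)  # placeholder bird's-eye-view frame
    print(policy(dummy_obs).shape)  # torch.Size([1, 2])
```

The point of the bottleneck is that the RL algorithm (e.g. an actor-critic method) operates on the compact latent vector rather than raw pixels, which is the general idea the abstract attributes to the visual encoding step.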