A Steering Algorithm for Redirected Walking Using Reinforcement Learning

Published in: IEEE Transactions on Visualization and Computer Graphics, Volume 26, Issue 5, pp. 1955-1963
Main authors: Strauss, Ryan R.; Ramanujan, Raghuram; Becker, Andrew; Peck, Tabitha C.
Medium: Journal Article
Language: English
Published: United States, 01.05.2020, by IEEE (The Institute of Electrical and Electronics Engineers, Inc.)
ISSN: 1077-2626; EISSN: 1941-0506
Abstract: Redirected Walking (RDW) steering algorithms have traditionally relied on human-engineered logic. However, recent advances in reinforcement learning (RL) have produced systems that surpass human performance on a variety of control tasks. This paper investigates the potential of using RL to develop a novel reactive steering algorithm for RDW. Our approach uses RL to train a deep neural network that directly prescribes the rotation, translation, and curvature gains used to transform the virtual environment, given the user's position and orientation in the tracked space. We compare our learned algorithm to steer-to-center using simulated and real paths. We found that our algorithm outperforms steer-to-center on simulated paths, and found no significant difference in distance traveled on real paths. We demonstrate that when modeled as a continuous control problem, RDW is a suitable domain for RL, and moving forward, our general framework provides a promising path towards an optimal RDW steering algorithm.
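The abstract describes a reactive controller: a deep network trained with RL that maps the user's position and orientation in the tracked space directly to rotation, translation, and curvature gains. The paper's actual network architecture, observation encoding, and gain ranges are not reproduced in this record, so the Python sketch below is purely illustrative of that state-to-gains mapping; the pose features, layer sizes, and gain bounds are assumptions for this example, not the authors' values.

```python
# Illustrative sketch only (NOT the paper's implementation): a tiny policy that
# maps a user's pose in the tracked space to RDW gains. Layer sizes, the pose
# encoding, and the gain bounds below are assumptions made for this example.
import numpy as np

# Hypothetical output bounds: [translation gain, rotation gain, curvature (1/m)]
GAIN_LOW = np.array([0.85, 0.80, -0.045])
GAIN_HIGH = np.array([1.30, 1.30, 0.045])

def policy(obs, weights):
    """Tiny MLP: obs = [x, z, sin(theta), cos(theta)] -> bounded gain vector."""
    h = obs
    for W, b in weights[:-1]:
        h = np.tanh(W @ h + b)          # hidden layers
    W, b = weights[-1]
    raw = np.tanh(W @ h + b)            # squash final layer into (-1, 1)
    # Rescale from (-1, 1) to the allowed gain ranges.
    return GAIN_LOW + (raw + 1.0) / 2.0 * (GAIN_HIGH - GAIN_LOW)

# Random, untrained weights for a 4 -> 64 -> 64 -> 3 network; RL training
# (e.g., a policy-gradient method such as PPO) would be what sets these.
rng = np.random.default_rng(0)
sizes = [4, 64, 64, 3]
weights = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(m)) for n, m in zip(sizes, sizes[1:])]

# Example query: user at (x, z) = (1.5, -2.0) m, heading 0.3 rad, in the tracked space.
theta = 0.3
obs = np.array([1.5, -2.0, np.sin(theta), np.cos(theta)])
g_t, g_r, curvature = policy(obs, weights)
print(f"translation gain={g_t:.3f}  rotation gain={g_r:.3f}  curvature={curvature:.3f} 1/m")
```

Here the weights are random and only the interface is shown; in the RL setting the abstract describes, such a policy would be trained by rewarding simulated walkers for avoiding the boundary of the physical tracked space.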
Authors and affiliations:
– Strauss, Ryan R. (rystrauss@davidson.edu), Davidson College
– Ramanujan, Raghuram (raramanujan@davidson.edu), Davidson College
– Becker, Andrew, Davidson College
– Peck, Tabitha C. (tapeck@davidson.edu), Davidson College
PubMed record: https://www.ncbi.nlm.nih.gov/pubmed/32078549
CODEN ITVGEA
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
DOI 10.1109/TVCG.2020.2973060
Discipline Engineering
EISSN 1941-0506
EndPage 1963
ExternalDocumentID 32078549
10_1109_TVCG_2020_2973060
8998570
Genre orig-research
Research Support, Non-U.S. Gov't
Journal Article
GrantInformation_xml – fundername: Davidson Research Initiative
ISSN 1077-2626
1941-0506
IsPeerReviewed true
IsScholarly true
Issue 5
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PMID 32078549
PageCount 9
PublicationDate 2020-05-01
PublicationPlace United States
PublicationTitle IEEE transactions on visualization and computer graphics
PublicationTitleAbbrev TVCG
PublicationTitleAlternate IEEE Trans Vis Comput Graph
PublicationYear 2020
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 1955
SubjectTerms Algorithms
Artificial neural networks
Computer Graphics
Computer simulation
Control tasks
Deep Learning
Heuristic algorithms
Human performance
Humans
Learning (artificial intelligence)
Legged locomotion
Locomotion
Machine learning
Meters
Prediction algorithms
Redirected Walking
Reinforcement Learning
Space exploration
Steering
Steering Algorithms
Tracking
Video Games
Virtual environments
Virtual Reality
Walking
Walking - physiology
Title A Steering Algorithm for Redirected Walking Using Reinforcement Learning
URI https://ieeexplore.ieee.org/document/8998570
https://www.ncbi.nlm.nih.gov/pubmed/32078549
https://www.proquest.com/docview/2386052721
https://www.proquest.com/docview/2362099172
Volume 26