RLOC: Terrain-Aware Legged Locomotion Using Reinforcement Learning and Optimal Control

Detailed bibliography
Published in: IEEE Transactions on Robotics, Vol. 38, No. 5, pp. 2908-2927
Main authors: Gangapurwala, Siddhant; Geisert, Mathieu; Orsolino, Romeo; Fallon, Maurice; Havoutis, Ioannis
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 1 October 2022
ISSN: 1552-3098 (print); 1941-0468 (electronic)
Abstract We present a unified model-based and data-driven approach for quadrupedal planning and control to achieve dynamic locomotion over uneven terrain. We utilize on-board proprioceptive and exteroceptive feedback to map sensory information and desired base velocity commands into footstep plans using a reinforcement learning (RL) policy. This RL policy is trained in simulation over a wide range of procedurally generated terrains. When run online, the system tracks the generated footstep plans using a model-based motion controller. We evaluate the robustness of our method over a wide variety of complex terrains. It exhibits behaviors that prioritize stability over aggressive locomotion. Additionally, we introduce two ancillary RL policies for corrective whole-body motion tracking and recovery control. These policies account for changes in physical parameters and external perturbations. We train and evaluate our framework on a complex quadrupedal system, ANYmal version B, and demonstrate transferability to a larger and heavier robot, ANYmal C, without requiring retraining.
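The abstract describes a two-level architecture: a learned policy maps proprioceptive and exteroceptive observations together with a desired base velocity command to footstep plans, and a model-based motion controller tracks those plans online. The sketch below is only a minimal illustration of that interface, not the authors' implementation; the observation layout, network sizes, random weights, and the `track_footsteps` stub are all assumptions made for the example.

```python
import numpy as np


class FootstepPolicy:
    """Toy stand-in for the RL footstep planner described in the abstract.

    Maps a flattened observation (proprioception + local height map +
    desired base velocity) to per-leg footstep offsets. Weights are random
    here; in the paper they would come from RL training in simulation over
    procedurally generated terrains.
    """

    def __init__(self, obs_dim: int, num_legs: int = 4, hidden: int = 64, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, num_legs * 2))  # (dx, dy) per leg

    def plan(self, observation: np.ndarray) -> np.ndarray:
        h = np.tanh(observation @ self.w1)
        return (h @ self.w2).reshape(-1, 2)  # footstep offsets in the base frame


def track_footsteps(footstep_offsets: np.ndarray) -> None:
    """Placeholder for the model-based motion controller that would turn
    footstep targets into whole-body joint commands; here it only reports them."""
    for leg, (dx, dy) in enumerate(footstep_offsets):
        print(f"leg {leg}: step target offset ({dx:+.3f}, {dy:+.3f}) m")


if __name__ == "__main__":
    proprioception = np.zeros(36)                   # e.g., joint positions/velocities
    height_map = np.zeros(25)                       # e.g., 5x5 local terrain samples
    base_velocity_cmd = np.array([0.5, 0.0, 0.0])   # desired vx, vy, yaw rate

    obs = np.concatenate([proprioception, height_map, base_velocity_cmd])
    policy = FootstepPolicy(obs_dim=obs.size)
    track_footsteps(policy.plan(obs))
```

In the actual system, tracking is performed by a model-based whole-body controller rather than a print stub, and two ancillary RL policies handle corrective motion tracking and recovery.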
Author Gangapurwala, Siddhant
Fallon, Maurice
Geisert, Mathieu
Orsolino, Romeo
Havoutis, Ioannis
Author_xml – sequence: 1
  givenname: Siddhant
  orcidid: 0000-0002-1308-3744
  surname: Gangapurwala
  fullname: Gangapurwala, Siddhant
  email: siddhant@robots.ox.ac.uk
  organization: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, U.K.
– sequence: 2
  givenname: Mathieu
  orcidid: 0000-0002-5651-8736
  surname: Geisert
  fullname: Geisert, Mathieu
  email: geisert.mathieu@gmail.com
  organization: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, U.K.
– sequence: 3
  givenname: Romeo
  orcidid: 0000-0001-9847-2601
  surname: Orsolino
  fullname: Orsolino, Romeo
  email: orsolino@arrival.com
  organization: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, U.K.
– sequence: 4
  givenname: Maurice
  orcidid: 0000-0003-2940-0879
  surname: Fallon
  fullname: Fallon, Maurice
  email: mfallon@robots.ox.ac.uk
  organization: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, U.K.
– sequence: 5
  givenname: Ioannis
  orcidid: 0000-0002-4371-4623
  surname: Havoutis
  fullname: Havoutis, Ioannis
  email: ioannis@robots.ox.ac.uk
  organization: Dynamic Robot Systems Group, Oxford Robotics Institute, University of Oxford, Oxford, U.K.
CODEN ITREAE
CitedBy_id crossref_primary_10_3390_s23041873
crossref_primary_10_1017_S0263574724000626
crossref_primary_10_1126_scirobotics_adh5401
crossref_primary_10_1109_LRA_2023_3329766
crossref_primary_10_1109_TRO_2022_3186804
crossref_primary_10_1109_LRA_2021_3136645
crossref_primary_10_3390_s24123825
crossref_primary_10_1007_s11071_024_10510_4
crossref_primary_10_1109_LRA_2024_3375086
crossref_primary_10_1109_TRO_2023_3275384
crossref_primary_10_3390_machines12120902
crossref_primary_10_1126_scirobotics_adv3604
crossref_primary_10_3390_s23115194
crossref_primary_10_1109_LRA_2024_3524890
crossref_primary_10_1109_TASE_2024_3354830
crossref_primary_10_1177_02783649241312698
crossref_primary_10_1109_LRA_2025_3595037
crossref_primary_10_1016_j_eswa_2023_121798
crossref_primary_10_1108_RIA_09_2024_0207
crossref_primary_10_1109_LRA_2023_3304561
crossref_primary_10_3390_s24113675
crossref_primary_10_1126_scirobotics_adi9641
crossref_primary_10_1109_LRA_2022_3184779
crossref_primary_10_1109_TRO_2023_3324580
crossref_primary_10_1109_LRA_2025_3592067
crossref_primary_10_1109_TRO_2023_3302239
crossref_primary_10_1109_LRA_2023_3323893
crossref_primary_10_1109_TASE_2023_3345876
crossref_primary_10_1109_ACCESS_2023_3311141
crossref_primary_10_1109_TRO_2023_3297015
crossref_primary_10_1109_ACCESS_2025_3582523
crossref_primary_10_1109_TCST_2024_3377949
crossref_primary_10_1007_s10489_025_06584_1
crossref_primary_10_1109_TMECH_2024_3421251
crossref_primary_10_1146_annurev_control_030323_022510
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TRO.2022.3172469
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Mechanical & Transportation Engineering Abstracts
Technology Research Database
Engineering Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Mechanical & Transportation Engineering Abstracts
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Engineering Research Database
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
DatabaseTitleList
Technology Research Database
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1941-0468
EndPage 2927
ExternalDocumentID 10_1109_TRO_2022_3172469
9779429
Genre orig-research
GrantInformation_xml – fundername: Engineering and Physical Sciences Research Council; EPSRC
  grantid: EP/S002383/1
  funderid: 10.13039/501100000266
– fundername: EU H2020
– fundername: UKRI/EPSRC RAIN Hub
  grantid: EP/R026084/1
– fundername: Royal Society University Research Fellowship
ISICitedReferencesCount 66
ISSN 1552-3098
IsPeerReviewed true
IsScholarly true
Issue 5
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ORCID 0000-0003-2940-0879
0000-0002-1308-3744
0000-0002-4371-4623
0000-0002-5651-8736
0000-0001-9847-2601
PQID 2721433395
PQPubID 27625
PageCount 20
ParticipantIDs crossref_primary_10_1109_TRO_2022_3172469
crossref_citationtrail_10_1109_TRO_2022_3172469
proquest_journals_2721433395
ieee_primary_9779429
PublicationCentury 2000
PublicationDate 2022-Oct.
2022-10-00
20221001
PublicationDateYYYYMMDD 2022-10-01
PublicationDate_xml – month: 10
  year: 2022
  text: 2022-Oct.
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on robotics
PublicationTitleAbbrev TRO
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 2908
SubjectTerms AI-based methods
Computational modeling
deep learning in robotics and automation
Learning
Legged locomotion
legged robots
Locomotion
Optimal control
Perturbation
Physical properties
Planning
Policies
Quadrupedal robots
Robots
robust/adaptive control of robotic systems
Terrain
Tracking
Tracking control
Training
Title RLOC: Terrain-Aware Legged Locomotion Using Reinforcement Learning and Optimal Control
URI https://ieeexplore.ieee.org/document/9779429
https://www.proquest.com/docview/2721433395
Volume 38
WOSCitedRecordID wos000800818100001
hasFullText 1
inHoldings 1
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Electronic Library (IEL)
  customDbUrl:
  eissn: 1941-0468
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0024903
  issn: 1552-3098
  databaseCode: RIE
  dateStart: 20040101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE
linkProvider IEEE