Graph Soft Actor-Critic Reinforcement Learning for Large-Scale Distributed Multirobot Coordination
| Published in: | IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 1, pp. 665-676 |
|---|---|
| Main Authors: | Hu, Yifan; Fu, Junjie; Wen, Guanghui |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 01.01.2025 |
| Subjects: | Distributed coordination; graph neural network (GNN); multiagent reinforcement learning (MARL); multirobot systems; protocols; reinforcement learning; robot kinematics; scalability; simulation; soft actor-critic (SAC) algorithm; task analysis |
| ISSN: | 2162-237X (print); 2162-2388 (online) |
| Online Access: | https://ieeexplore.ieee.org/document/10314538 |
| Abstract | Learning distributed cooperative policies for large-scale multirobot systems remains a challenging task in the multiagent reinforcement learning (MARL) context. In this work, we model the interactions among the robots as a graph and propose a novel off-policy actor-critic MARL algorithm to train distributed coordination policies on the graph by leveraging the information-extraction ability of graph neural networks (GNNs). First, a new type of Gaussian policy parameterized by GNNs is designed for distributed decision-making in continuous action spaces. Second, a scalable centralized value function network is designed based on a novel GNN-based value function decomposition technique. Then, based on the designed actor and critic networks, a GNN-based MARL algorithm named graph soft actor-critic (G-SAC) is proposed and used to train the distributed policies in an effective, centralized fashion. Finally, two custom multirobot coordination environments are built, in which simulations empirically demonstrate the sample efficiency and scalability of G-SAC as well as the strong zero-shot generalization ability of the trained policy in large-scale multirobot coordination problems. |
|---|---|
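The abstract's first component is a Gaussian policy parameterized by GNNs for distributed decision-making in continuous action spaces. The sketch below (PyTorch) illustrates the general pattern rather than the authors' implementation: each robot encodes its own observation, aggregates mean-pooled messages from its neighbors over the interaction graph, and outputs the mean and log-std of a tanh-squashed Gaussian over its own action, as is standard in SAC-style policies. The class name `GraphGaussianPolicy`, the layer sizes, and the single message-passing round are all illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of a GNN-parameterized Gaussian policy.
import torch
import torch.nn as nn

class GraphGaussianPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hid: int = 64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(obs_dim, hid), nn.ReLU())
        self.message = nn.Sequential(nn.Linear(2 * hid, hid), nn.ReLU())
        self.mu_head = nn.Linear(hid, act_dim)
        self.log_std_head = nn.Linear(hid, act_dim)

    def forward(self, obs: torch.Tensor, adj: torch.Tensor):
        # obs: (N, obs_dim) per-robot observations; adj: (N, N) 0/1 adjacency.
        h = self.encode(obs)                              # per-robot node embeddings
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = adj @ h / deg                             # mean over graph neighbors
        h = self.message(torch.cat([h, neigh], dim=-1))   # one message-passing round
        mu = self.mu_head(h)
        log_std = self.log_std_head(h).clamp(-20, 2)      # usual SAC log-std bounds
        return mu, log_std

    def sample(self, obs, adj):
        mu, log_std = self(obs, adj)
        dist = torch.distributions.Normal(mu, log_std.exp())
        u = dist.rsample()                                # reparameterized sample
        a = torch.tanh(u)                                 # squash to (-1, 1)
        # Log-prob with the tanh change-of-variables correction (standard SAC form).
        logp = dist.log_prob(u) - torch.log(1 - a.pow(2) + 1e-6)
        return a, logp.sum(dim=-1)                        # (N,) per-robot log-probs
```

Because the encoder, message function, and heads are shared across robots and act only on local neighborhoods, the same trained policy can be deployed on teams of different sizes, which is consistent with the zero-shot generalization the abstract claims.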
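The second component is a scalable centralized critic built from a GNN-based value function decomposition. The sketch below assumes additive mixing (in the spirit of value-decomposition networks, but computed from GNN node embeddings): a shared head maps each robot's embedding to a local utility, and the centralized value is their sum, so the critic's parameter count does not grow with the team size. The paper's exact decomposition may differ; `GraphDecomposedCritic` and its layout are assumptions.

```python
# A sketch of an additive GNN-based value-decomposition critic.
import torch
import torch.nn as nn

class GraphDecomposedCritic(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hid: int = 64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(obs_dim + act_dim, hid), nn.ReLU())
        self.message = nn.Sequential(nn.Linear(2 * hid, hid), nn.ReLU())
        self.q_head = nn.Linear(hid, 1)   # shared across robots -> scalable in N

    def forward(self, obs, act, adj):
        # obs: (N, obs_dim), act: (N, act_dim), adj: (N, N) adjacency.
        h = self.encode(torch.cat([obs, act], dim=-1))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = self.message(torch.cat([h, adj @ h / deg], dim=-1))
        q_local = self.q_head(h)          # (N, 1) per-robot local utilities
        return q_local.sum()              # centralized Q = sum of local terms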
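G-SAC then trains these networks off-policy with entropy-regularized soft actor-critic objectives (Haarnoja et al., cited in the record as ref38). The sketch below shows the two generic SAC losses with the graph inputs threaded through; minibatching, twin critics, target-network updates, and the temperature schedule are omitted, and all names are illustrative, not the paper's API.

```python
# Generic SAC losses applied to the graph policy/critic sketches above.
# Soft target: y = r + gamma * (Q_target(s', a') - alpha * log pi(a'|s')).
import torch

def g_sac_losses(policy, critic, critic_target, batch, gamma=0.99, alpha=0.2):
    obs, act, rew, obs2, adj, adj2 = batch   # one transition of the whole team
    with torch.no_grad():
        a2, logp2 = policy.sample(obs2, adj2)
        # Average the per-robot log-probs into one entropy term (a modeling choice).
        y = rew + gamma * (critic_target(obs2, a2, adj2) - alpha * logp2.mean())
    critic_loss = (critic(obs, act, adj) - y).pow(2)
    a_new, logp = policy.sample(obs, adj)
    policy_loss = alpha * logp.mean() - critic(obs, a_new, adj)
    return critic_loss, policy_loss
```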
| Authors: | Hu, Yifan (yfhu@seu.edu.cn, ORCID 0000-0003-0756-6157); Fu, Junjie (fujunjie@seu.edu.cn, ORCID 0000-0002-1528-8727); Wen, Guanghui (ghwen@seu.edu.cn, ORCID 0000-0003-0070-8597); all authors are with the School of Mathematics, Southeast University, Nanjing, China |
| DOI | 10.1109/TNNLS.2023.3329530 |
| Funding: | National Key Research and Development Program of China (2022YFA1004702); General Joint Fund of the Equipment Advance Research Program of Ministry of Education (8091B022114); National Natural Science Foundation of China (62173085, 62325304, U22B2046, 62088101, 62073079) |
| PMID | 37948149 |