DARL: Distributed Reconfigurable Accelerator for Hyperdimensional Reinforcement Learning
| Published in: | 2022 IEEE/ACM International Conference On Computer Aided Design (ICCAD), pp. 1-9 |
|---|---|
| Main authors: | Chen, Hanning; Issa, Mariam; Ni, Yang; Imani, Mohsen |
| Format: | Conference proceeding |
| Language: | English |
| Published: | ACM, 29 October 2022 |
| Subjects: | Computational modeling; Computer aided instruction; Distance learning; Hardware; IP networks; Real-time systems; Technological innovation |
| ISSN: | 1558-2434 |
| Online access: | Full text |
| Abstract | Reinforcement Learning (RL) is a powerful technology for solving decision-making problems such as robotics control. Modern RL algorithms, e.g., Deep Q-Learning, are based on costly and resource-hungry deep neural networks. This motivates us to deploy alternative models for powering RL agents on edge devices. Recently, brain-inspired Hyper-Dimensional Computing (HDC) has been introduced as a promising solution for lightweight and efficient machine learning, particularly for classification. In this work, we develop a novel platform capable of real-time hyperdimensional reinforcement learning. Our heterogeneous CPU-FPGA platform, called DARL, maximizes the FPGA's computing capabilities by applying hardware optimizations to hyperdimensional computing's critical operations, including a hardware-friendly encoder IP, hypervector chunk fragmentation, and delayed model update. Aside from the hardware innovations, we also extend the platform beyond basic single-agent RL to support distributed multi-agent learning. We evaluate the effectiveness of our approach on OpenAI Gym tasks. Our results show that the FPGA platform provides on average a 20× speedup compared to current state-of-the-art hyperdimensional RL methods running on an Intel Xeon 6226 CPU. In addition, DARL is around 4.8× faster and 4.2× more energy efficient than the state-of-the-art RL accelerator while ensuring better or comparable quality of learning. |
|---|---|
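The abstract's core ingredients (a projection-style encoder, per-action hypervector models, and a delayed model update) can be illustrated with a short, self-contained sketch. The Python code below is only a hypothetical illustration of hyperdimensional Q-value estimation, not the DARL encoder IP or the authors' implementation; the class name `HDCQModel` and parameters such as `hv_dim` and `update_every` are invented for this example.

```python
import numpy as np

class HDCQModel:
    """Minimal sketch of HDC-based Q-value estimation (illustrative only)."""

    def __init__(self, state_dim, n_actions, hv_dim=4096, lr=0.05,
                 update_every=32, seed=0):
        rng = np.random.default_rng(seed)
        # Random projection matrix used by the encoder.
        self.proj = rng.standard_normal((hv_dim, state_dim))
        # One model hypervector per action.
        self.models = np.zeros((n_actions, hv_dim))
        self.lr = lr
        self.update_every = update_every
        self.buffer = []  # pending (state_hv, action, td_error) updates

    def encode(self, state):
        # Bipolar encoding: sign of the randomly projected state.
        return np.sign(self.proj @ state)

    def q_values(self, state):
        # Q-value per action = similarity between state and action model HVs.
        hv = self.encode(state)
        return self.models @ hv / len(hv)

    def remember(self, state, action, td_error):
        # Buffer the update instead of applying it immediately.
        self.buffer.append((self.encode(state), action, td_error))
        if len(self.buffer) >= self.update_every:
            self.flush()

    def flush(self):
        # Delayed (batched) model update: bundle buffered hypervectors at once.
        for hv, action, td_error in self.buffer:
            self.models[action] += self.lr * td_error * hv
        self.buffer.clear()

if __name__ == "__main__":
    model = HDCQModel(state_dim=4, n_actions=2)  # CartPole-like dimensions
    state = np.array([0.1, -0.2, 0.03, 0.5])
    print(model.q_values(state))                 # Q-value estimate per action
```

In DARL itself, per the abstract, these encoding and model-update operations are mapped onto FPGA hardware (with hypervector chunk fragmentation) and extended from single-agent RL to distributed multi-agent learning.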
| Authors | Hanning Chen (hanningc@uci.edu), Mariam Issa (mariamai@uci.edu), Yang Ni (yni3@uci.edu), and Mohsen Imani (m.imani@uci.edu), all with the Department of Computer Science, University of California, Irvine, CA, USA |
| DOI | 10.1145/3508352.3549437 |
| Discipline | Engineering |
| EISBN | 9781450392174; 1450392172 |
| EISSN | 1558-2434 |
| Funding | Air Force Office of Scientific Research (10.13039/100000181); Semiconductor Research Corporation (10.13039/100000028); Office of Naval Research (10.13039/100000006); National Science Foundation (10.13039/100000001) |
| URI | https://ieeexplore.ieee.org/document/10068918 |