DRiLLS: Deep Reinforcement Learning for Logic Synthesis
| Published in: | Proceedings of the ASP-DAC ... Asia and South Pacific Design Automation Conference, pp. 581 - 586 |
|---|---|
| Main authors: | Hosny, Abdelrahman; Hashemi, Soheil; Shalan, Mohamed; Reda, Sherief |
| Format: | Conference proceeding |
| Language: | English |
| Published: | IEEE, 01.01.2020 |
| Subjects: | Benchmark testing; Circuit synthesis; Delays; Optimization; Reinforcement learning; Space exploration; Tuning |
| ISSN: | 2153-697X |
| Online access: | Full text |
| Abstract | Logic synthesis requires extensive tuning of the synthesis optimization flow where the quality of results (QoR) depends on the sequence of optimizations used. Efficient design space exploration is challenging due to the exponential number of possible optimization permutations. Therefore, automating the optimization process is necessary. In this work, we propose a novel reinforcement learning-based methodology that navigates the optimization space without human intervention. We demonstrate the training of an Advantage Actor Critic (A2C) agent that seeks to minimize area subject to a timing constraint. Using the proposed methodology, designs can be optimized autonomously with no humans in the loop. Evaluation on the comprehensive EPFL benchmark suite shows that the agent outperforms existing exploration methodologies and improves QoR by an average of 13%. |
|---|---|
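The abstract above describes an A2C agent that selects a sequence of synthesis transformations so as to minimize area subject to a timing constraint. The sketch below illustrates only the shape of that interaction loop; it is not the authors' implementation. The transformation names (rewrite, refactor, resub, balance), the simulated area/delay numbers, the reward shaping, and the epsilon-greedy policy used in place of a trained A2C network are all illustrative assumptions, since the real agent would drive an actual synthesis tool and learn an actor-critic policy.

```python
import random

# Hypothetical transformation names standing in for a synthesis tool's commands.
ACTIONS = ["rewrite", "refactor", "resub", "balance"]


class ToySynthesisEnv:
    """Stand-in environment: area/delay are simulated so the sketch runs on its own."""

    def __init__(self, delay_constraint=1.0):
        self.delay_constraint = delay_constraint
        self.reset()

    def reset(self):
        self.area, self.delay, self.steps = 100.0, 1.2, 0
        return (self.area, self.delay)

    def step(self, action):
        # Each transformation nudges area/delay; real QoR would come from the tool.
        self.area *= random.uniform(0.93, 1.01)
        self.delay *= random.uniform(0.97, 1.03)
        self.steps += 1
        # Reward: area reduction relative to the start, penalized if timing is violated.
        reward = (100.0 - self.area) / 100.0
        if self.delay > self.delay_constraint:
            reward -= 1.0
        done = self.steps >= 20
        return (self.area, self.delay), reward, done


def run_episode(env, epsilon=0.2, values=None):
    """Epsilon-greedy stand-in for the paper's A2C policy over one optimization flow."""
    values = values if values is not None else {a: 0.0 for a in ACTIONS}
    state, total = env.reset(), 0.0
    done = False
    while not done:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)          # explore
        else:
            action = max(values, key=values.get)     # exploit best-known transformation
        state, reward, done = env.step(action)
        values[action] += 0.1 * (reward - values[action])  # crude running value estimate
        total += reward
    return total, values


if __name__ == "__main__":
    env = ToySynthesisEnv()
    episode_return, _ = run_episode(env)
    print(f"episode return: {episode_return:.3f}")
```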
| Author | Hashemi, Soheil; Shalan, Mohamed; Hosny, Abdelrahman; Reda, Sherief |
| ContentType | Conference Proceeding |
| DOI | 10.1109/ASP-DAC47756.2020.9045559 |
| Discipline | Engineering; Computer Science |
| EISBN | 1728141230; 9781728141237 |
| EISSN | 2153-697X |
| EndPage | 586 |
| ExternalDocumentID | 9045559 |
| Genre | orig-research |
| Language | English |
| PageCount | 6 |
| PublicationDate | 2020-Jan. |
| PublicationTitle | Proceedings of the ASP-DAC ... Asia and South Pacific Design Automation Conference |
| PublicationTitleAbbrev | ASP-DAC |
| PublicationYear | 2020 |
| Publisher | IEEE |
| StartPage | 581 |
| SubjectTerms | Benchmark testing; Circuit synthesis; Delays; Optimization; Reinforcement learning; Space exploration; Tuning |
| Title | DRiLLS: Deep Reinforcement Learning for Logic Synthesis |
| URI | https://ieeexplore.ieee.org/document/9045559 |