Trust-Region Method with Deep Reinforcement Learning in Analog Design Space Exploration

Published in: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp. 1225-1230
Main authors: Yang, Kai-En; Tsai, Chia-Yu; Shen, Hung-Hao; Chiang, Chen-Feng; Tsai, Feng-Ming; Wang, Chung-An; Ting, Yiju; Yeh, Chia-Shun; Lai, Chin-Tang
Medium: Conference paper
Language: English
Published: IEEE, 05.12.2021
Abstract This paper introduces new perspectives on analog design space search. To minimize time-to-market, this endeavor is better cast as a constraint-satisfaction problem than as the global optimization defined in prior art. We incorporate model-based agents, contrasted with model-free learning, to implement a trust-region strategy. As such, simple feed-forward networks can be trained with supervised learning, where convergence is relatively trivial. Experimental results demonstrate orders-of-magnitude improvement in search iterations. Additionally, the unprecedented consideration of PVT conditions is accommodated. On circuits in TSMC 5/6 nm processes, our method achieves performance surpassing human designers. Furthermore, this framework is in production in industrial settings.
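The strategy the abstract outlines — a model-based agent that fits a cheap supervised surrogate and proposes steps only inside a trust region that expands or shrinks with the surrogate's accuracy — can be sketched as below. This is an illustrative toy, not the authors' implementation: the quadratic `simulate` stands in for a circuit simulator, a linear least-squares fit stands in for their feed-forward network, and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    # Placeholder for an expensive circuit simulation; returns a scalar
    # cost where cost <= spec means all design constraints are satisfied.
    return float(np.sum((x - 0.7) ** 2))

def fit_surrogate(X, y):
    # Linear least-squares model standing in for the small feed-forward
    # network the abstract trains with supervised learning.
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: float(np.append(x, 1.0) @ w)

def trust_region_search(x0, radius=0.5, spec=1e-3, iters=100):
    # Seed the surrogate with x0 plus a few simulated points around it.
    X = [x0] + list(x0 + radius * rng.uniform(-1, 1, size=(8, x0.size)))
    y = [simulate(p) for p in X]
    i = int(np.argmin(y))
    x, fx = X[i], y[i]
    for _ in range(iters):
        if fx <= spec:                       # specs met: stop early
            break
        model = fit_surrogate(np.array(X), np.array(y))
        # Screen many candidates with the cheap surrogate; run the
        # expensive simulation only on the surrogate-best point
        # inside the current trust region.
        cands = x + radius * rng.uniform(-1, 1, size=(64, x.size))
        cand = min(cands, key=model)
        fc = simulate(cand)
        X.append(cand)
        y.append(fc)
        if fc < fx:                          # model agreed: accept, expand
            x, fx, radius = cand, fc, min(radius * 1.5, 1.0)
        else:                                # model misled us: shrink
            radius *= 0.5
    return x, fx

x_best, f_best = trust_region_search(np.array([0.0, 0.0]))
print(f_best)
```

Casting the task as constraint satisfaction shows up in the early exit: the loop stops as soon as the cost meets `spec` rather than polishing toward a global optimum, which is where the claimed savings in search iterations would come from.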
Author Ting, Yiju
Yeh, Chia-Shun
Tsai, Feng-Ming
Chiang, Chen-Feng
Yang, Kai-En
Tsai, Chia-Yu
Shen, Hung-Hao
Wang, Chung-An
Lai, Chin-Tang
Author_xml – sequence: 1
  givenname: Kai-En
  surname: Yang
  fullname: Yang, Kai-En
  organization: National Tsing Hua University,EECS,Hsinchu,Taiwan
– sequence: 2
  givenname: Chia-Yu
  surname: Tsai
  fullname: Tsai, Chia-Yu
  organization: MediaTek Inc.,Hsinchu,Taiwan
– sequence: 3
  givenname: Hung-Hao
  surname: Shen
  fullname: Shen, Hung-Hao
  organization: MediaTek Inc.,Hsinchu,Taiwan
– sequence: 4
  givenname: Chen-Feng
  surname: Chiang
  fullname: Chiang, Chen-Feng
  organization: MediaTek Inc.,Hsinchu,Taiwan
– sequence: 5
  givenname: Feng-Ming
  surname: Tsai
  fullname: Tsai, Feng-Ming
  organization: MediaTek Inc.,Hsinchu,Taiwan
– sequence: 6
  givenname: Chung-An
  surname: Wang
  fullname: Wang, Chung-An
  organization: MediaTek Inc.,Hsinchu,Taiwan
– sequence: 7
  givenname: Yiju
  surname: Ting
  fullname: Ting, Yiju
  organization: MediaTek Inc.,Hsinchu,Taiwan
– sequence: 8
  givenname: Chia-Shun
  surname: Yeh
  fullname: Yeh, Chia-Shun
  organization: MediaTek Inc.,Hsinchu,Taiwan
– sequence: 9
  givenname: Chin-Tang
  surname: Lai
  fullname: Lai, Chin-Tang
  organization: MediaTek Inc.,Hsinchu,Taiwan
ContentType Conference Proceeding
DBID 6IE
6IH
CBEJK
RIE
RIO
DOI 10.1109/DAC18074.2021.9586087
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP) 1998-present
DatabaseTitleList
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE/IET Electronic Library (IEL) (UW System Shared)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
EISBN 9781665432740
1665432748
EndPage 1230
ExternalDocumentID 9586087
Genre orig-research
GroupedDBID 6IE
6IH
ACM
ALMA_UNASSIGNED_HOLDINGS
CBEJK
RIE
RIO
ID FETCH-LOGICAL-a202t-659079c67462a653be0f188f727a9d7ff7c4525fce00ae07929ac482b6af018b3
IEDL.DBID RIE
IngestDate Wed Aug 27 02:28:29 EDT 2025
IsPeerReviewed false
IsScholarly true
Language English
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-a202t-659079c67462a653be0f188f727a9d7ff7c4525fce00ae07929ac482b6af018b3
PageCount 6
ParticipantIDs ieee_primary_9586087
PublicationCentury 2000
PublicationDate 2021-Dec.-5
PublicationDateYYYYMMDD 2021-12-05
PublicationDate_xml – month: 12
  year: 2021
  text: 2021-Dec.-5
  day: 05
PublicationDecade 2020
PublicationTitle 2021 58th ACM/IEEE Design Automation Conference (DAC)
PublicationTitleAbbrev DAC
PublicationYear 2021
Publisher IEEE
Publisher_xml – name: IEEE
SSID ssib060584060
Score 2.3153057
Snippet This paper introduces new perspectives on analog design space search. To minimize the time-to-market, this endeavor better cast as constraint satisfaction...
SourceID ieee
SourceType Publisher
StartPage 1225
SubjectTerms artificial intelligence
Design automation
electronic design automation
Employee welfare
Production
Reinforcement learning
Search problems
Space exploration
Supervised learning
transistor sizing
Title Trust-Region Method with Deep Reinforcement Learning in Analog Design Space Exploration
URI https://ieeexplore.ieee.org/document/9586087
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
link http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwlV1LTwIxEG6AePCkBozv9ODRQvfR19GAxIMSgqjcSNudEi4LwcXfb1tWjIkXb5umm2ZnupnXN98gdAuFktxknIDOGckTMERa63-8jGXCxyvGRHb-tycxGsnZTI0b6G7fCwMAEXwG3fAYa_nFym5DqqynmORUiiZqCsF3vVrfdydU97xtonWTTkJVb3DfTwLViw8C06Rbv_triEq0IcOj_51-jDo_zXh4vDczJ6gBZRu9T0OvBJlAgBPj5zgGGoecKh4ArPEEIiGqjbk_XHOoLvCyxIGEZLXwuwJwA7_4iBnwDocXVdRBr8OHaf-R1DMSiPafVRHOfHSrLBc5TzVnmQHqEimdd0u0KoRzwobKpbNAqQa_NVXa5jI1XDuaSJOdola5KuEMYasy727LPDXK5t5rVNYUifaKVMwoZ8w5agehzNc7Gox5LY-Lv5cv0WGQe0R-sCvUqjZbuEYH9rNafmxuou6-AJN4mqs
linkProvider IEEE
linkToHtml http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwlV3PT8IwFG4QTfSkBoy_7cGjhW5ru_ZoQIMRCEFUbqTt3giXQRD8-23LxJh48bYsXZb1dXm_vu97CN1CpqQwiSCgGScsAkOkte7HS3iSunzFmKDO_9ZN-305HqtBBd1tuTAAEMBn0PCXoZefze3al8qaiktBZbqDdjljMd2wtb5Pj-_vOe9ES5pORFWzfd-KvNiLSwPjqFE-_WuMSvAij4f_e_8Rqv_Q8fBg62iOUQWKGnofebYEGYIHFONeGASNfVUVtwEWeAhBEtWG6h8uVVSneFZgL0Myn7pVHrqBX1zODHiDxAtGqqPXx4dRq0PKKQlEu89aEcFdfqusSJmIteCJAZpHUuYuMNEqS_M8tb53mVugVINbGittmYyN0DmNpElOULWYF3CKsFWJC7gli42yzMWNypos0s6UihuVG3OGan5TJouNEMak3I_zv2_foP3OqNeddJ_6zxfowNsg4ED4Jaqulmu4Qnv2czX7WF4HO34Bzzad8g
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook&rft.genre=proceeding&rft.title=2021+58th+ACM%2FIEEE+Design+Automation+Conference+%28DAC%29&rft.atitle=Trust-Region+Method+with+Deep+Reinforcement+Learning+in+Analog+Design+Space+Exploration&rft.au=Yang%2C+Kai-En&rft.au=Tsai%2C+Chia-Yu&rft.au=Shen%2C+Hung-Hao&rft.au=Chiang%2C+Chen-Feng&rft.date=2021-12-05&rft.pub=IEEE&rft.spage=1225&rft.epage=1230&rft_id=info:doi/10.1109%2FDAC18074.2021.9586087&rft.externalDocID=9586087