RL-CCD: Concurrent Clock and Data Optimization using Attention-Based Self-Supervised Reinforcement Learning

Published in: 2023 60th ACM/IEEE Design Automation Conference (DAC), pp. 1 - 6
Main authors: Lu, Yi-Chen, Chan, Wei-Ting, Guo, Deyuan, Kundu, Sudipto, Khandelwal, Vishal, Lim, Sung Kyu
Format: Conference paper
Language: English
Published: IEEE, July 9, 2023
Abstract: Concurrent Clock and Data (CCD) optimization is a well-adopted approach in modern commercial tools that resolves timing violations using a mixture of clock skewing and delay fixing strategies. However, existing CCD algorithms are flawed: in particular, they fail to correctly prioritize violating endpoints for the different optimization strategies, leading to globally sub-optimal flow results. In this paper, we overcome this issue by presenting RL-CCD, a Reinforcement Learning (RL) agent that selects endpoints for useful-skew prioritization using the proposed EP-GNN, an endpoint-oriented Graph Neural Network (GNN) model, and a Transformer-based self-supervised attention mechanism. Experimental results on 19 industrial designs in 5-12nm technologies demonstrate that RL-CCD achieves up to 64% Total Negative Slack (TNS) reduction and a 66.5% improvement in the number of violating endpoints (NVE) over the native implementation of a commercial tool.
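The abstract describes an attention-based mechanism that scores violating timing endpoints so that the most promising ones are routed to useful-skew optimization first. The paper's actual EP-GNN and Transformer architecture is not reproduced in this record; the following is a minimal illustrative sketch in Python, assuming per-endpoint embedding vectors (such as an EP-GNN might produce) are already available, and showing only how scaled dot-product self-attention could rank endpoints. The names endpoint_embeddings, attention_scores, and prioritize_endpoints are hypothetical and not from the paper.

Python sketch (illustrative only):

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_scores(endpoint_embeddings):
    # Scaled dot-product self-attention over endpoint embeddings.
    # endpoint_embeddings: (N, D) array, one row per violating endpoint
    # (assumed to come from an upstream model such as EP-GNN).
    # Each endpoint's importance is the average attention it receives
    # from all endpoints.
    X = np.asarray(endpoint_embeddings, dtype=float)
    d_k = X.shape[1]
    attn = softmax(X @ X.T / np.sqrt(d_k), axis=-1)  # (N, N) attention matrix
    return attn.mean(axis=0)                         # (N,) per-endpoint score

def prioritize_endpoints(endpoint_embeddings, top_k):
    # Indices of the top_k endpoints to hand to useful-skew optimization
    # first; the remaining endpoints would fall back to delay fixing.
    scores = attention_scores(endpoint_embeddings)
    return np.argsort(-scores)[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(8, 16))   # 8 endpoints, 16-dimensional embeddings
    print(prioritize_endpoints(emb, top_k=3))

In the paper itself the selection is learned by an RL agent trained with a self-supervised attention objective; the sketch above only illustrates the attention-based ranking idea on fixed embeddings.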
Authors:
Lu, Yi-Chen (Georgia Institute of Technology, School of ECE, Atlanta, GA; yclu@gatech.edu)
Chan, Wei-Ting (Synopsys Inc., Hillsboro, OR; wei-ting.chan@synopsys.com)
Guo, Deyuan (Synopsys Inc., Mountain View, CA; deyuan.guo@synopsys.com)
Kundu, Sudipto (Synopsys Inc., Mountain View, CA; sudipto.kundu@synopsys.com)
Khandelwal, Vishal (Synopsys Inc., Hillsboro, OR; vishal.khandelwal@synopsys.com)
Lim, Sung Kyu (Georgia Institute of Technology, School of ECE, Atlanta, GA; limsk@gatech.edu)
ContentType Conference Proceeding
DOI 10.1109/DAC56929.2023.10248008
EISBN 9798350323481
EndPage 6
ExternalDocumentID 10248008
Genre orig-research
ISICitedReferencesCount 6
IsPeerReviewed false
IsScholarly true
Language English
PageCount 6
ParticipantIDs ieee_primary_10248008
PublicationCentury 2000
PublicationDate 2023-July-9
PublicationDateYYYYMMDD 2023-07-09
PublicationDate_xml – month: 07
  year: 2023
  text: 2023-July-9
  day: 09
PublicationDecade 2020
PublicationTitle 2023 60th ACM/IEEE Design Automation Conference (DAC)
PublicationTitleAbbrev DAC
PublicationYear 2023
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 1
SubjectTerms Charge coupled devices
Delays
Design automation
Graph neural networks
Optimization
Reinforcement learning
Transformers
Title RL-CCD: Concurrent Clock and Data Optimization using Attention-Based Self-Supervised Reinforcement Learning
URI https://ieeexplore.ieee.org/document/10248008