PrefixRL: Optimization of Parallel Prefix Circuits using Deep Reinforcement Learning



Bibliographic Details
Published in: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp. 853-858
Main authors: Roy, Rajarshi; Raiman, Jonathan; Kant, Neel; Elkin, Ilyas; Kirby, Robert; Siu, Michael; Oberman, Stuart; Godil, Saad; Catanzaro, Bryan
Format: Conference paper
Language: English
Published: IEEE, 05.12.2021
Online access: Full text
Description
Summary: In this work, we present a reinforcement learning (RL) based approach to designing parallel prefix circuits, such as adders or priority encoders, that are fundamental to high-performance digital design. Unlike prior methods, our approach designs solutions tabula rasa, purely through learning with synthesis in the loop. We design a grid-based state-action representation and an RL environment for constructing legal prefix circuits. Deep Convolutional RL agents trained in this environment produce prefix adder circuits that Pareto-dominate existing baselines, with up to 16.0% and 30.2% lower area for the same delay in the 32b and 64b settings, respectively. We observe that agents trained with an open-source synthesis tool and cell library can design adder circuits that achieve lower area and delay than commercial tool adders in an industrial cell library.
DOI:10.1109/DAC18074.2021.9586094
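
The summary above describes a grid-based state-action representation in which RL agents add or delete prefix nodes and the environment repairs the result into a legal prefix graph. The Python sketch below is only an illustration of how such a grid and a legalization step could look; the function names, the grid convention, and the repair rule are assumptions for this sketch and are not taken from the paper.

import numpy as np

# Assumed grid encoding: grid[i, j] == 1 means a prefix node that combines
# bit range [j..i] exists (lower-triangular, rows indexed by the MSB).
def initial_grid(n):
    grid = np.zeros((n, n), dtype=np.int8)
    for i in range(n):
        grid[i, i] = 1   # input nodes on the diagonal are always present
        grid[i, 0] = 1   # every output prefix [0..i] must be computed
    return grid

def legalize(grid):
    # One plausible repair rule: a node (i, j) with j < i takes the nearest
    # existing node (i, k), k > j, as its upper parent and therefore needs
    # (k-1, j) as its lower parent; insert that lower parent if missing.
    # Processing rows from the MSB downward lets newly inserted parents be
    # repaired in turn on later iterations.
    n = grid.shape[0]
    for i in range(n - 1, 0, -1):
        for j in range(i - 1, -1, -1):
            if grid[i, j]:
                k = next(k for k in range(j + 1, i + 1) if grid[i, k])
                grid[k - 1, j] = 1
    return grid

# Example: an RL "add node" action at (msb=7, lsb=4) followed by legalization.
grid = initial_grid(8)
grid[7, 4] = 1
grid = legalize(grid)

Keeping the state as a dense n-by-n grid is what makes a convolutional agent, as used in the paper, a natural fit for this representation.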