Invited Paper: VerilogEval: Evaluating Large Language Models for Verilog Code Generation

Detailed bibliography
Published in: Digest of Technical Papers - IEEE/ACM International Conference on Computer-Aided Design, pp. 1-8
Main authors: Liu, Mingjie; Pinckney, Nathaniel; Khailany, Brucek; Ren, Haoxing
Format: Conference paper
Language: English
Published: IEEE, 28 October 2023
ISSN: 1558-2434
Description
Summary: The increasing popularity of large language models (LLMs) has paved the way for their application in diverse domains. This paper proposes a benchmarking framework tailored specifically for evaluating LLM performance in the context of Verilog code generation for hardware design and verification. We present a comprehensive evaluation dataset consisting of 156 problems from the Verilog instructional website HDLBits. The evaluation set consists of a diverse set of Verilog code generation tasks, ranging from simple combinational circuits to complex finite state machines. The Verilog code completions can be automatically tested for functional correctness by comparing the transient simulation outputs of the generated design with a golden solution. We also demonstrate that the Verilog code generation capability of pretrained language models could be improved with supervised fine-tuning by bootstrapping with LLM generated synthetic problem-code pairs.
DOI: 10.1109/ICCAD57390.2023.10323812
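
The summary above describes checking each generated completion by simulating it against a golden solution. The sketch below is only a minimal illustration of that idea, not the benchmark's actual harness: it assumes Icarus Verilog (iverilog/vvp) is installed and that a hypothetical self-checking testbench tb.v instantiates both the candidate and the golden module and prints a line of the form "Mismatches: N".

```python
# Illustrative simulation-based functional check (assumptions: iverilog/vvp
# available on PATH; tb.v is a hypothetical testbench that compares the
# candidate module against the golden reference and prints "Mismatches: N").
import os
import re
import subprocess
import tempfile


def passes_functional_check(candidate_code: str, golden_path: str, tb_path: str) -> bool:
    """Compile the candidate with the golden design and testbench, run the
    simulation, and report success if the testbench saw no mismatches."""
    with tempfile.TemporaryDirectory() as work:
        cand = os.path.join(work, "candidate.v")
        sim = os.path.join(work, "sim.out")
        with open(cand, "w") as f:
            f.write(candidate_code)

        # Compile; a syntax error in the generated completion counts as a failure.
        compile_cmd = ["iverilog", "-o", sim, cand, golden_path, tb_path]
        if subprocess.run(compile_cmd, capture_output=True).returncode != 0:
            return False

        # Run the simulation and inspect the testbench's mismatch report.
        run = subprocess.run(["vvp", sim], capture_output=True, text=True)
        match = re.search(r"Mismatches:\s*(\d+)", run.stdout)
        return match is not None and int(match.group(1)) == 0
```

In this sketch, functional correctness is decided entirely by the testbench's comparison of simulated outputs, mirroring the summary's description of testing completions against a golden solution rather than by syntactic similarity.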