Invited Paper: VerilogEval: Evaluating Large Language Models for Verilog Code Generation

Bibliographic Details
Published in: Digest of technical papers - IEEE/ACM International Conference on Computer-Aided Design, pp. 1-8
Main Authors: Liu, Mingjie; Pinckney, Nathaniel; Khailany, Brucek; Ren, Haoxing
Format: Conference Proceeding
Language: English
Published: IEEE, 28.10.2023
ISSN: 1558-2434
Description
Summary: The increasing popularity of large language models (LLMs) has paved the way for their application in diverse domains. This paper proposes a benchmarking framework tailored specifically for evaluating LLM performance in the context of Verilog code generation for hardware design and verification. We present a comprehensive evaluation dataset consisting of 156 problems from the Verilog instructional website HDLBits. The evaluation set consists of a diverse set of Verilog code generation tasks, ranging from simple combinational circuits to complex finite state machines. The Verilog code completions can be automatically tested for functional correctness by comparing the transient simulation outputs of the generated design with a golden solution. We also demonstrate that the Verilog code generation capability of pretrained language models could be improved with supervised fine-tuning by bootstrapping with LLM generated synthetic problem-code pairs.
DOI: 10.1109/ICCAD57390.2023.10323812
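
The summary notes that generated Verilog completions are checked for functional correctness by comparing transient simulation outputs against a golden solution. As a rough illustration of that idea only (not the paper's actual harness), the Python sketch below compiles a candidate module together with a reference testbench and compares the resulting simulation trace with the golden trace. The helper name, file layout, and the choice of Icarus Verilog (iverilog/vvp) as the simulator are assumptions made for this sketch.

import os
import subprocess
import tempfile

def passes_functional_check(candidate_rtl: str, testbench: str, golden_trace: str) -> bool:
    """Return True if the candidate Verilog module reproduces the golden
    solution's simulation trace.

    Hypothetical helper for illustration; assumes Icarus Verilog
    (iverilog/vvp) is installed and that the testbench prints the
    signals of interest with $display/$monitor.
    """
    with tempfile.TemporaryDirectory() as work:
        dut_path = os.path.join(work, "dut.v")
        tb_path = os.path.join(work, "tb.v")
        sim_path = os.path.join(work, "sim.vvp")
        with open(dut_path, "w") as f:
            f.write(candidate_rtl)
        with open(tb_path, "w") as f:
            f.write(testbench)

        # A completion that fails to compile simply does not pass.
        compiled = subprocess.run(
            ["iverilog", "-o", sim_path, tb_path, dut_path],
            capture_output=True, text=True,
        )
        if compiled.returncode != 0:
            return False

        # Run the transient simulation and capture the printed trace.
        simulated = subprocess.run(
            ["vvp", sim_path], capture_output=True, text=True,
        )
        if simulated.returncode != 0:
            return False

        # Functional correctness: the candidate's trace matches the golden one.
        return simulated.stdout.strip() == golden_trace.strip()

A benchmark driver could call such a check once per sampled completion and report the fraction of problems for which at least one sample passes.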