Variational Prefix Tuning for diverse and accurate code summarization using pre-trained language models

Bibliographic Details
Published in: The Journal of Systems and Software, Vol. 229, p. 112493
Main Authors: Zhao, Junda; Song, Yuliang; Cohen, Eldan
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.11.2025
ISSN: 0164-1212
Description
Summary: Recent advancements in source code summarization have leveraged transformer-based pre-trained models, including Large Language Models of Code (LLMCs), to automate and improve the generation of code summaries. However, existing methods often focus on generating a single high-quality summary for a given source code, neglecting scenarios where the generated summary might be inadequate and alternative options are needed. In this paper, we introduce Variational Prefix Tuning (VPT), a novel approach that enhances pre-trained models’ ability to generate diverse yet accurate sets of summaries, allowing the user to choose the most suitable one for the given source code. Our method integrates a Conditional Variational Autoencoder (CVAE) framework as a modular component into pre-trained models, enabling us to model the distribution of observed target summaries and sample continuous embeddings to be used as prefixes to steer the generation of diverse outputs during decoding. Importantly, we construct our method in a parameter-efficient manner, eliminating the need for expensive model retraining, especially when using LLMCs. Furthermore, we employ a bi-criteria reranking method to select a subset of generated summaries, optimizing both the diversity and the accuracy of the options presented to users. We present extensive experimental evaluations using widely used datasets and current state-of-the-art pre-trained code summarization models to demonstrate the effectiveness of our approach and its adaptability across models.
• First work to enable diverse and accurate code summarization for Large Language Models of Code.
• Propose a novel approach (VPT) to enable such capability without requiring a costly full retraining.
• Demonstrate the adaptability of VPT by applying it to several transformer-based pre-trained models.
• Provide open-source implementation and datasets for our proposed approach.
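
The abstract describes sampling continuous embeddings from a CVAE-style latent distribution and using them as prefixes to steer diverse decoding. The sketch below illustrates that general idea; the module names (PriorNet, PrefixProjector), dimensions, and sampling loop are illustrative assumptions and do not reflect the paper's actual implementation.

```python
# Hypothetical sketch of variational prefix sampling for diverse decoding.
# All names and shapes here are assumptions for illustration only.
import torch
import torch.nn as nn


class PriorNet(nn.Module):
    """Maps a pooled source-code encoding to a Gaussian latent (CVAE prior)."""

    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.mu = nn.Linear(d_model, d_latent)
        self.logvar = nn.Linear(d_model, d_latent)

    def forward(self, code_repr: torch.Tensor):
        return self.mu(code_repr), self.logvar(code_repr)


class PrefixProjector(nn.Module):
    """Projects a latent sample to a short sequence of prefix embeddings."""

    def __init__(self, d_latent: int, prefix_len: int, d_model: int):
        super().__init__()
        self.prefix_len, self.d_model = prefix_len, d_model
        self.proj = nn.Sequential(nn.Linear(d_latent, prefix_len * d_model), nn.Tanh())

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.proj(z).view(-1, self.prefix_len, self.d_model)


def sample_prefixes(code_repr, prior, projector, num_samples=5):
    """Draw several latent samples; each becomes a prefix that steers one
    decoding pass, yielding a set of diverse candidate summaries."""
    mu, logvar = prior(code_repr)             # (batch, d_latent)
    std = torch.exp(0.5 * logvar)
    prefixes = []
    for _ in range(num_samples):
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        prefixes.append(projector(z))         # (batch, prefix_len, d_model)
    return prefixes
```

Because only the small prior and projector modules are trained while the pre-trained model stays frozen, a setup along these lines would be parameter-efficient, which is consistent with the abstract's claim of avoiding full retraining.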
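The abstract also mentions a bi-criteria reranking step that selects a subset of generated summaries while balancing accuracy and diversity. The exact criteria are not stated here, so the following is only an MMR-style greedy illustration; the scoring functions and the trade-off weight alpha are assumptions.

```python
# Illustrative bi-criteria reranking: greedily pick k candidates, trading off
# an accuracy proxy against redundancy with already-selected summaries.
from typing import Callable, List


def rerank(candidates: List[str],
           accuracy: Callable[[str], float],
           similarity: Callable[[str, str], float],
           k: int = 3,
           alpha: float = 0.7) -> List[str]:
    selected: List[str] = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def gain(c: str) -> float:
            # Penalize similarity to the closest already-selected summary.
            redundancy = max((similarity(c, s) for s in selected), default=0.0)
            return alpha * accuracy(c) - (1 - alpha) * redundancy
        best = max(pool, key=gain)
        selected.append(best)
        pool.remove(best)
    return selected
```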
DOI: 10.1016/j.jss.2025.112493