Consistent Graph Model Generation with Large Language Models

Bibliographic Details
Published in: Proceedings (IEEE/ACM International Conference on Software Engineering Companion. Online), pp. 218-219
Main author: Chen, Boqi
Format: Conference paper
Language: English
Published: IEEE, 27 April 2025
ISSN:2574-1934
Online access: Full text
Description
Abstract: Graph model generation from natural language requirements is an essential task in software engineering, for which large language models (LLMs) have become increasingly popular. A key challenge is ensuring that the generated graph models are consistent with domain-specific well-formedness constraints. LLM-generated graphs are often only partially correct due to inconsistency with these constraints, limiting their practical usage. To address this, we propose a novel abstraction-concretization framework, motivated by self-consistency, for generating consistent models. Our approach first abstracts candidate models into a probabilistic partial model and then concretizes this abstraction into a consistent graph model. Preliminary evaluations on taxonomy generation demonstrate that our method significantly enhances both the consistency and quality of generated graph models.
DOI:10.1109/ICSE-Companion66252.2025.00067
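The two-stage pipeline named in the abstract (abstract candidate models into a probabilistic partial model, then concretize it into a consistent graph) can be illustrated with a minimal sketch. This is not the authors' implementation: the candidate taxonomies, the edge-frequency abstraction, and the greedy constraint-checking concretization below are all simplified assumptions made for illustration.

```python
from collections import Counter

# Hypothetical candidate taxonomies, as if sampled repeatedly from an LLM.
# Each model is a set of (child, parent) edges.
candidates = [
    {("dog", "mammal"), ("cat", "mammal"), ("mammal", "animal")},
    {("dog", "mammal"), ("cat", "animal"), ("mammal", "animal")},
    {("dog", "mammal"), ("cat", "mammal"), ("mammal", "animal")},
]

def abstract_models(models):
    """Abstraction step (sketch): fold the candidates into a probabilistic
    partial model mapping each edge to its empirical frequency."""
    counts = Counter(edge for model in models for edge in model)
    return {edge: count / len(models) for edge, count in counts.items()}

def is_consistent(edges):
    """Example well-formedness constraints for a taxonomy: every node has
    at most one parent, and the parent relation is acyclic."""
    children = [child for child, _ in edges]
    if len(children) != len(set(children)):
        return False  # a node with two parents
    parent = dict(edges)
    for node in parent:
        seen, cur = set(), node
        while cur in parent:  # walk up the ancestor chain
            if cur in seen:
                return False  # cycle detected
            seen.add(cur)
            cur = parent[cur]
    return True

def concretize(partial_model):
    """Concretization step (sketch): greedily commit edges in order of
    decreasing probability, skipping any edge that would break a
    well-formedness constraint."""
    edges = set()
    for edge, _ in sorted(partial_model.items(), key=lambda kv: -kv[1]):
        if is_consistent(edges | {edge}):
            edges.add(edge)
    return edges

model = concretize(abstract_models(candidates))
print(sorted(model))
```

Here the low-probability edge ("cat", "animal") is discarded because "cat" already has the more frequent parent "mammal", so the concretized graph satisfies the single-parent constraint even though one of the raw candidates violated it.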