Consistent Graph Model Generation with Large Language Models

Detailed Bibliography
Published in: Proceedings (IEEE/ACM International Conference on Software Engineering Companion. Online), pp. 218-219
Main author: Chen, Boqi
Format: Conference paper
Language: English
Publication details: IEEE, 27.04.2025
ISSN: 2574-1934
Description
Summary: Graph model generation from natural language requirements is an essential task in software engineering, for which large language models (LLMs) have become increasingly popular. A key challenge is ensuring that the generated graph models are consistent with domain-specific well-formedness constraints. LLM-generated graphs are often only partially correct due to inconsistency with the constraints, limiting their practical usage. To address this, we propose a novel abstraction-concretization framework motivated by self-consistency for generating consistent models. Our approach first abstracts candidate models into a probabilistic partial model and then concretizes this abstraction into a consistent graph model. Preliminary evaluations on taxonomy generation demonstrate that our method significantly enhances both the consistency and quality of generated graph models.
DOI: 10.1109/ICSE-Companion66252.2025.00067
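
The abstraction-concretization idea summarized above can be pictured with a small, self-contained sketch. This is not the authors' implementation: it assumes an edge-frequency abstraction over several LLM-sampled candidate taxonomies and a greedy concretization under two illustrative well-formedness constraints (each concept has at most one parent, no cycles). All function names and the example data are hypothetical.

```python
from collections import Counter

def abstract_candidates(candidate_edge_sets):
    """Fuse candidate graphs into a probabilistic partial model:
    each (parent, child) edge is weighted by the fraction of
    candidate models that contain it."""
    n = len(candidate_edge_sets)
    counts = Counter(e for edges in candidate_edge_sets for e in set(edges))
    return {edge: c / n for edge, c in counts.items()}

def creates_cycle(parent_of, parent, child):
    """Return True if attaching `child` under `parent` would close a cycle."""
    node = parent
    while node is not None:
        if node == child:
            return True
        node = parent_of.get(node)
    return False

def concretize(partial_model):
    """Greedily commit the most probable edges while preserving the
    well-formedness constraints of a taxonomy (single parent, acyclic)."""
    parent_of = {}
    for (parent, child), _prob in sorted(partial_model.items(),
                                         key=lambda kv: kv[1], reverse=True):
        if child in parent_of:                       # single-parent constraint
            continue
        if creates_cycle(parent_of, parent, child):  # acyclicity constraint
            continue
        parent_of[child] = parent
    return sorted((p, c) for c, p in parent_of.items())

# Hypothetical example: three LLM-sampled candidate taxonomies, each a list
# of (parent, child) edges; no single candidate is taken as the final answer.
candidates = [
    [("animal", "dog"), ("animal", "cat"), ("dog", "beagle")],
    [("animal", "dog"), ("dog", "cat"), ("dog", "beagle")],
    [("animal", "dog"), ("animal", "cat"), ("cat", "beagle")],
]
print(concretize(abstract_candidates(candidates)))
# [('animal', 'cat'), ('animal', 'dog'), ('dog', 'beagle')]
```

Even though the individual candidates disagree (e.g. on the parents of "cat" and "beagle"), the concretized result is a single taxonomy that satisfies both constraints, which is the behaviour the abstract attributes to the abstraction-concretization framework.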