Consistent Graph Model Generation with Large Language Models

Detailed bibliography
Published in: Proceedings (IEEE/ACM International Conference on Software Engineering Companion. Online), pp. 218-219
Main author: Chen, Boqi
Format: Conference paper
Language: English
Published: IEEE, 27.04.2025
ISSN: 2574-1934
Description
Summary: Graph model generation from natural language requirements is an essential task in software engineering, for which large language models (LLMs) have become increasingly popular. A key challenge is ensuring that the generated graph models are consistent with domain-specific well-formedness constraints. LLM-generated graphs are often only partially correct due to inconsistency with these constraints, limiting their practical usage. To address this, we propose a novel abstraction-concretization framework motivated by self-consistency for generating consistent models. Our approach first abstracts candidate models into a probabilistic partial model and then concretizes this abstraction into a consistent graph model. Preliminary evaluations on taxonomy generation demonstrate that our method significantly enhances both the consistency and quality of generated graph models.
DOI: 10.1109/ICSE-Companion66252.2025.00067
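
The abstraction-concretization idea summarized above can be illustrated with a minimal sketch for the taxonomy-generation setting: several candidate taxonomies sampled from an LLM are abstracted into a probabilistic partial model (edge frequencies), which is then concretized into a single taxonomy that satisfies well-formedness constraints. The sampling interface, the data structures, and the single-parent/acyclicity constraint used here are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: assumes candidate taxonomies have already been
# sampled from an LLM and are given as sets of (child, parent) edges.
from collections import Counter
from typing import Iterable

Edge = tuple[str, str]  # (child concept, parent concept)

def abstract_candidates(candidates: Iterable[set[Edge]]) -> dict[Edge, float]:
    """Merge sampled candidates into a probabilistic partial model:
    each edge is mapped to the fraction of candidates containing it."""
    candidates = list(candidates)
    counts = Counter(edge for cand in candidates for edge in cand)
    return {edge: n / len(candidates) for edge, n in counts.items()}

def is_consistent(edges: set[Edge]) -> bool:
    """Assumed well-formedness constraint for a taxonomy: every concept
    has at most one parent and the parent relation is acyclic."""
    parents: dict[str, str] = {}
    for child, parent in edges:
        if child in parents:           # a second parent violates the constraint
            return False
        parents[child] = parent
    for node in parents:               # follow parent chains to detect cycles
        seen = set()
        while node in parents:
            if node in seen:
                return False
            seen.add(node)
            node = parents[node]
    return True

def concretize(partial_model: dict[Edge, float]) -> set[Edge]:
    """Greedily commit the most probable edges while keeping the model consistent."""
    model: set[Edge] = set()
    for edge, _ in sorted(partial_model.items(), key=lambda kv: -kv[1]):
        if is_consistent(model | {edge}):
            model.add(edge)
    return model

# Example: three hypothetical LLM samples that disagree on where "bat" belongs.
samples = [
    {("dog", "mammal"), ("bat", "mammal"), ("mammal", "animal")},
    {("dog", "mammal"), ("bat", "bird"), ("mammal", "animal")},
    {("dog", "mammal"), ("bat", "mammal"), ("mammal", "animal")},
]
print(concretize(abstract_candidates(samples)))
# -> a single consistent taxonomy that keeps the majority edge ("bat", "mammal")
```

In this toy run, the conflicting low-probability edge ("bat", "bird") is discarded during concretization because adding it would give "bat" two parents, so the resulting model satisfies the constraint while preserving the edges the candidates agree on.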