JMigBench: A Benchmark for Evaluating LLMs on Source Code Migration (Java 8 to Java 11)

Bibliographic Details
Title: JMigBench: A Benchmark for Evaluating LLMs on Source Code Migration (Java 8 to Java 11)
Authors: Amin, Nishil; Fei, Zhiwei; Li, Xiang; Petke, Justyna; Ye, He
Publication year: 2026
Collection: ArXiv.org (Cornell University Library)
Keywords: Software Engineering
Description: We build a benchmark to evaluate large language models (LLMs) on source code migration tasks, specifically upgrading functions from Java 8 to Java 11. We first collected a dataset of function pairs from open-source repositories, but limitations in data quality led us to construct a refined dataset covering eight categories of deprecated APIs. Using this dataset, the Mistral Codestral model was evaluated with CodeBLEU and keyword-based metrics to measure lexical and semantic similarity as well as migration correctness. Results show that the model can handle trivial one-to-one API substitutions with moderate success, achieving identical migrations in 11.11% of the cases, but it struggles with more complex migrations such as those involving CORBA or JAX-WS. These findings suggest Mistral Codestral can partially reduce developer effort by automating repetitive migration tasks but cannot yet replace humans within the scope of the JMigBench benchmark. The benchmark and analysis provide a foundation for future work on expanding datasets, refining prompting strategies, and improving migration performance across different LLMs.
Publication type: text
Language: unknown
Relation: http://arxiv.org/abs/2602.09930
Availability: http://arxiv.org/abs/2602.09930
Document code: edsbas.6A35429F
Database: BASE
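To make the abstract's notion of a "trivial one-to-one API substitution" concrete, here is a minimal hypothetical sketch. The record does not enumerate the benchmark's eight deprecated-API categories, so this example is an assumption: it uses the `Integer` boxing constructor, which has been deprecated since Java 9, and its Java 11 replacement `Integer.valueOf`.

```java
// Hypothetical illustration of a one-to-one API substitution of the kind
// JMigBench targets; this pair is NOT taken from the paper's dataset.
public class MigrationExample {

    // Java 8 style: boxing constructor, deprecated since Java 9.
    @SuppressWarnings("deprecation")
    static Integer parseLegacy(String s) {
        return new Integer(s);
    }

    // Java 11 style: the same behavior via the preferred factory method.
    static Integer parseMigrated(String s) {
        return Integer.valueOf(s);
    }

    public static void main(String[] args) {
        // Both versions produce equal values; only the API call changes.
        System.out.println(parseLegacy("42").equals(parseMigrated("42"))); // prints "true"
    }
}
```

A migration like this is "one-to-one" because a single call site is rewritten with no change to control flow or types, which is the kind of edit the abstract reports the model handling with moderate success.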