Quality In, Quality Out: Investigating Training Data's Role in AI Code Generation

Saved in:
Detailed bibliography
Title: Quality In, Quality Out: Investigating Training Data's Role in AI Code Generation
Authors: Improta, Cristina, Tufano, Rosalia, Liguori, Pietro, Cotroneo, Domenico, Bavota, Gabriele
Source: 2025 IEEE/ACM 33rd International Conference on Program Comprehension (ICPC)
Publication Status: Preprint
Publisher Information: IEEE, 2025.
Year of Publication: 2025
Subjects: Software Engineering (cs.SE), FOS: Computer and information sciences, Training Data, Computer Science - Software Engineering, Code Generation
Description: Deep Learning-based code generators have seen significant advancements in recent years. Tools such as GitHub Copilot are used by thousands of developers with the main promise of a boost in productivity. However, researchers have recently questioned their impact on code quality, showing, for example, that code generated by DL-based tools may be affected by security vulnerabilities. Since DL models are trained on large code corpora, one may conjecture that the low-quality code they output is the result of low-quality code they have seen during training. However, there is very little empirical evidence documenting this phenomenon. Indeed, most previous work looks at the frequency with which commercial code generators recommend low-quality code, without the possibility of relating this to their training set. We investigate the extent to which low-quality code instances seen during training affect the quality of the code generated at inference time. We start by fine-tuning a pre-trained DL model on a large-scale dataset representative of those usually adopted in the training of code generators. We show that 4.98% of functions in this dataset exhibit one or more quality issues related to security, maintainability, best practices, etc. We use the fine-tuned model to generate 551k Python functions, showing that 5.85% of them are affected by at least one quality issue. We then remove the low-quality functions from the training set and use the cleaned dataset to fine-tune a second model, which is then used to generate the same 551k Python functions. We show that the model trained on the cleaned dataset exhibits functional correctness similar to that of the original model while generating a statistically significantly lower number of low-quality functions (2.16%). Our study empirically documents the importance of high-quality training data for code generators.
Accepted to the 33rd IEEE/ACM International Conference on Program Comprehension (ICPC 2025)
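The abstract describes removing low-quality functions from the training corpus before fine-tuning the second model. The minimal Python sketch below shows what such a filtering step could look like, assuming Pylint as a stand-in static analyzer; the paper does not name its exact quality checkers, so the tool choice, the message categories counted, and the helper names has_quality_issues and clean_training_set are illustrative assumptions rather than the authors' actual setup.

# Minimal sketch of the training-set cleaning step summarized in the abstract.
# Pylint is used as a stand-in static analyzer (an assumption, not the paper's
# documented tooling); only its warning/error messages are treated as quality issues.
import json
import subprocess
import tempfile
from pathlib import Path


def has_quality_issues(function_source: str) -> bool:
    """Return True if the analyzer reports at least one warning or error."""
    # Write the function to a temporary .py file so it can be analyzed in isolation.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(function_source)
        tmp_path = tmp.name
    try:
        result = subprocess.run(
            ["pylint", "--output-format=json", tmp_path],
            capture_output=True,
            text=True,
        )
        messages = json.loads(result.stdout or "[]")
        # Count only warnings and errors; purely stylistic "convention" messages
        # (e.g. missing docstrings) are ignored in this sketch.
        return any(m["type"] in {"warning", "error", "fatal"} for m in messages)
    finally:
        Path(tmp_path).unlink(missing_ok=True)


def clean_training_set(functions: list[str]) -> list[str]:
    """Keep only functions with no reported issues (cf. the 4.98% removed in the study)."""
    return [fn for fn in functions if not has_quality_issues(fn)]


if __name__ == "__main__":
    corpus = [
        "def add(a, b):\n    return a + b\n",
        # Mutable default argument: flagged by Pylint as W0102 (dangerous-default-value).
        "def append_item(x, items=[]):\n    items.append(x)\n    return items\n",
    ]
    cleaned = clean_training_set(corpus)
    print(f"kept {len(cleaned)} of {len(corpus)} functions")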
Document Type: Article; Conference object
File Description: application/pdf
DOI: 10.1109/icpc66645.2025.00056
DOI: 10.48550/arxiv.2503.11402
Access URL: http://arxiv.org/abs/2503.11402
https://hdl.handle.net/11588/1008376
https://doi.org/10.1109/icpc66645.2025.00056
Rights: STM Policy #29
arXiv Non-Exclusive Distribution
Accession Number: edsair.doi.dedup.....4bd84c5abb5d652e2c2b0049dde85daa
Database: OpenAIRE