Security Degradation in Iterative AI Code Generation: A Systematic Analysis of the Paradox

Bibliographic Details
Published in: IEEE International Symposium on Technology and Society (Print), pp. 1-8
Main Authors: Shukla, Shivani; Joshi, Himanshu; Syed, Romilla
Format: Conference paper
Language: English
Publication Details: IEEE, 10.09.2025
ISSN: 2158-3412
Description
Summary: The rapid adoption of Large Language Models (LLMs) for code generation has transformed software development, yet little attention has been given to how security vulnerabilities evolve through iterative LLM feedback. This paper analyzes security degradation in AI-generated code through a controlled experiment with 400 code samples across 40 rounds of "improvements" using four distinct prompting strategies. Our findings show a 37.6% increase in critical vulnerabilities after just five iterations, with distinct vulnerability patterns emerging across different prompting approaches. This evidence challenges the assumption that iterative LLM refinement improves code security and highlights the essential role of human expertise in the loop. We propose practical guidelines for developers to mitigate these risks, emphasizing the need for robust human validation between LLM iterations to prevent the paradoxical introduction of new security issues during supposedly beneficial code "improvements."
DOI: 10.1109/ISTAS65609.2025.11269659
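
The mitigation highlighted in the abstract, human validation between LLM refinement iterations, can be pictured with a minimal sketch. The loop structure, the hook names (`llm_refine`, `scan_for_vulnerabilities`, `human_review`), and the rejection rule below are hypothetical illustrations under assumed interfaces, not the authors' experimental setup or tooling.

```python
# Hypothetical sketch: an iterative LLM refinement loop with an automated
# security scan and a human validation gate between rounds, in the spirit of
# the guidelines the abstract describes. All hooks are placeholders, not
# APIs from the paper or any specific library.

from dataclasses import dataclass, field


@dataclass
class IterationReport:
    round_number: int
    code: str
    findings: list[str] = field(default_factory=list)
    approved: bool = False


def llm_refine(code: str, prompt: str) -> str:
    """Placeholder: ask an LLM to 'improve' the code under a prompting strategy."""
    raise NotImplementedError


def scan_for_vulnerabilities(code: str) -> list[str]:
    """Placeholder: run a security scanner and return a list of findings."""
    raise NotImplementedError


def human_review(report: IterationReport) -> bool:
    """Placeholder: a human expert approves or rejects the refined code."""
    raise NotImplementedError


def refine_with_validation(code: str, prompt: str, max_rounds: int = 5) -> str:
    """Iteratively refine code, accepting a round only if it passes both the
    automated scan and explicit human sign-off."""
    accepted = code
    for round_number in range(1, max_rounds + 1):
        candidate = llm_refine(accepted, prompt)
        report = IterationReport(
            round_number=round_number,
            code=candidate,
            findings=scan_for_vulnerabilities(candidate),
        )
        # Reject the round outright if the scan surfaces any critical finding.
        if any("critical" in finding.lower() for finding in report.findings):
            continue
        report.approved = human_review(report)
        if report.approved:
            # Only approved rounds become the new baseline for later iterations.
            accepted = candidate
    return accepted
```

The key design point in this sketch is that an LLM "improvement" never silently replaces the accepted baseline: each candidate must clear both gates, which is the kind of human-in-the-loop checkpointing the abstract argues for.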