Evaluating Multi-Agent AI Systems for Automated Bug Detection and Code Refactoring

Detailed bibliography
Published in: International Journal for Research in Applied Science and Engineering Technology, Volume 13, Issue 10, pp. 12-21
Main authors: Aamina, Tanveer; Zaid, Mohammed; Huda, Syeda
Format: Journal Article
Language: English
Published: 31 October 2025
ISSN: 2321-9653
Description
Summary: This paper evaluates multi-agent AI systems for automating software bug detection and code refactoring. We design a cooperative architecture in which specialized agents (static-analysis, test-generation, root-cause, and refactoring) coordinate via a planning agent to propose, verify, and apply patches. The system integrates LLM-based reasoning with conventional program analysis to reduce false positives and preserve behavioral equivalence. We implement a reference pipeline on open-source Python/Java projects and compare against single-agent and non-LLM baselines. Results indicate higher fix precision and refactoring quality, with reduced developer review time, especially on multi-file defects and design-smell cleanups. We report ablations on agent roles, verification depth, and communication cost, and discuss failure modes (spec ambiguities, over-refactoring, flaky tests). A reproducible workflow, dataflow diagram, and flowcharts are provided to support replication. Our findings suggest that disciplined, verifiable agent orchestration is a practical path to safer, more scalable automated maintenance in modern codebases.
DOI: 10.22214/ijraset.2025.74423
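
For readers unfamiliar with the orchestration pattern the abstract describes, the following is a minimal, hypothetical Python sketch of such a coordination loop: a planning agent routes each static-analysis finding through test-generation, root-cause localization, and refactoring agents, and applies a patch only if verification passes. All class and method names (Planner, StaticAnalysisAgent.scan, verify, and so on) are illustrative assumptions, not the paper's actual interfaces.

```python
# Hypothetical sketch of the planner-coordinated pipeline described in the
# abstract. Every name here is illustrative; the paper's real implementation
# is not reproduced.
from dataclasses import dataclass, field


@dataclass
class Finding:
    """A candidate defect reported by static analysis (illustrative)."""
    file: str
    message: str


@dataclass
class Patch:
    """A proposed change plus the tests it must pass (illustrative)."""
    diff: str
    tests: list[str] = field(default_factory=list)


class StaticAnalysisAgent:
    def scan(self, files: list[str]) -> list[Finding]:
        # Stand-in for a real analyzer (linting, dataflow checks, etc.).
        return [Finding(f, "possible null dereference") for f in files]


class TestGenerationAgent:
    def tests_for(self, finding: Finding) -> list[str]:
        # Stand-in: derive a regression test that reproduces the defect.
        return [f"test_repro_{finding.file.replace('.', '_')}"]


class RootCauseAgent:
    def localize(self, finding: Finding) -> str:
        # Stand-in for LLM-based reasoning over code context and traces.
        return f"guard missing before use in {finding.file}"


class RefactoringAgent:
    def propose(self, cause: str, tests: list[str]) -> Patch:
        # Stand-in: emit a diff intended to fix the localized cause.
        return Patch(diff=f"--- fix: {cause}", tests=tests)


def verify(patch: Patch) -> bool:
    # Stand-in for the verification step (run generated and existing tests,
    # re-run analysis) meant to preserve behavioral equivalence.
    return bool(patch.tests)


class Planner:
    """Coordinates the specialized agents; applies only verified patches."""

    def __init__(self) -> None:
        self.analysis = StaticAnalysisAgent()
        self.testing = TestGenerationAgent()
        self.root_cause = RootCauseAgent()
        self.refactoring = RefactoringAgent()

    def run(self, files: list[str]) -> list[Patch]:
        applied = []
        for finding in self.analysis.scan(files):
            tests = self.testing.tests_for(finding)
            cause = self.root_cause.localize(finding)
            patch = self.refactoring.propose(cause, tests)
            if verify(patch):  # reject unverified patches to cut false positives
                applied.append(patch)
        return applied


if __name__ == "__main__":
    for patch in Planner().run(["service.py"]):
        print(patch.diff)
```

The gating step in Planner.run mirrors the abstract's central claim: patches are proposed by one agent but only applied after an independent verification pass, which is how the described architecture trades raw throughput for precision.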