Evaluating Multi-Agent AI Systems for Automated Bug Detection and Code Refactoring
| Published in: | International Journal for Research in Applied Science and Engineering Technology, Vol. 13, No. 10, pp. 12-21 |
|---|---|
| Main Authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | 31.10.2025 |
| ISSN: | 2321-9653 |
| Summary: | This paper evaluates multi-agent AI systems for automating software bug detection and code refactoring. We design a cooperative architecture in which specialized agents—static-analysis, test-generation, root-cause, and refactoring—coordinate via a planning agent to propose, verify, and apply patches. The system integrates LLM-based reasoning with conventional program analysis to reduce false positives and preserve behavioral equivalence. We implement a reference pipeline on open-source Python/Java projects and compare against single-agent and non-LLM baselines. Results indicate higher fix precision and refactoring quality, with reduced developer review time, especially on multi-file defects and design-smell cleanups. We report ablations on agent roles, verification depth, and communication cost, and discuss failure modes (spec ambiguities, over-refactoring, flaky tests). A reproducible workflow, dataflow diagram, and flowcharts are provided to support replication. Our findings suggest that disciplined, verifiable agent orchestration is a practical path to safer, more scalable automated maintenance in modern codebases. |
|---|---|
| ISSN: | 2321-9653 |
| DOI: | 10.22214/ijraset.2025.74423 |