Intelligent software debugging: A reinforcement learning approach for detecting the shortest crashing scenarios

Detailed bibliography
Published in: Expert Systems with Applications, Volume 198, Article 116722
Main authors: Durmaz, Engin; Tümer, M. Borahan
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd, 15 July 2022
ISSN: 0957-4174, 1873-6793
Description
Summary: The Quality Assurance (QA) team verifies software for months before release decisions are made. Nevertheless, some crucial bugs remain undetected by manual testing. Such bugs can render the system unusable in the field, so the merchant loses money and the manufacturer, in turn, loses its customers. Automatic software testing methods have therefore become indispensable for catching more bugs. To locate and repair bugs with an emphasis on crash scenarios, we present in this work a reinforcement learning (RL) approach for finding and simplifying the input sequence(s) leading to a system crash or blocking, which represents the goal state of the RL problem. We aim to obtain the shortest input sequence for the same bug so that developers can analyze the agent's actions that cause the crash or freeze. We first simplify the given crash scenario using Recursive Delta Debugging (RDD), then apply RL algorithms to explore a possibly shorter crashing sequence. We treat the exploration of crash scenarios as an RL problem in which the agent first attains the goal state of crash/blocking by executing inputs and then shortens the input sequence with the help of the rewarding mechanism. We apply both model-free on-policy and model-based planning-capable RL agents to our problem. Furthermore, we present a novel RL approach involving a Detected Goal Catalyst (DGC), which reduces time complexity by avoiding a costly struggle for convergence: it stops learning once variance is small and attains the shortest crash sequence with an algorithm that recursively removes unrelated actions. Experiments show that DGC significantly improves the learning performance of both the SARSA and Prioritized Sweeping algorithms in obtaining the shortest path.

Highlights:
• Reinforcement learning in automated testing to find crash scenarios.
• Model-free on-policy and model-based planning-capable RL agents.
• Detected Goal Catalyst as a heuristic approach.
• Results show that the Detected Goal Catalyst increases performance.
• Combination of reinforcement learning and Recursive Delta Debugging.
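
For orientation, the sketch below shows a minimal delta-debugging-style input reduction in the spirit of Zeller's ddmin, the family of techniques the RDD step builds on. It is not the paper's RDD algorithm; `crashes` is a hypothetical oracle that replays an input sequence against the system under test and reports whether it crashes.

    def ddmin(sequence, crashes):
        """Return a smaller subsequence of `sequence` that still crashes.

        `crashes` is a hypothetical oracle: it replays an input sequence
        on the system under test and returns True on crash/blocking.
        """
        n = 2  # granularity: number of chunks the sequence is split into
        while len(sequence) >= 2:
            chunk = len(sequence) // n
            reduced = False
            for i in range(n):
                # Try removing the i-th chunk (i.e., test the complement).
                candidate = sequence[:i * chunk] + sequence[(i + 1) * chunk:]
                if candidate and crashes(candidate):
                    sequence = candidate          # complement still crashes: keep it
                    n = max(n - 1, 2)             # coarsen and restart the pass
                    reduced = True
                    break
            if not reduced:
                if n >= len(sequence):
                    break                         # minimal at single-input granularity
                n = min(n * 2, len(sequence))     # refine granularity and retry

        return sequence

The loop either shrinks the failing sequence or refines the split granularity until single inputs are tried, so it terminates with a (locally) minimal crashing sequence.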
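
Likewise, the model-free on-policy agent named in the abstract is SARSA; a minimal tabular version could look like the sketch below. The state/action encoding, the rewarding mechanism, and the DGC heuristic are paper-specific and not shown; `env` is a hypothetical interface whose `reset()` returns a state and whose `step(action)` returns `(next_state, reward, done)`, with `done` marking the crash/blocking goal state.

    import random
    from collections import defaultdict

    def sarsa(env, actions, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
        """Tabular SARSA over a hypothetical `env` (states must be hashable)."""
        Q = defaultdict(float)  # Q[(state, action)] -> estimated return

        def policy(state):
            # Epsilon-greedy action selection (on-policy).
            if random.random() < epsilon:
                return random.choice(actions)
            return max(actions, key=lambda a: Q[(state, a)])

        for _ in range(episodes):
            state = env.reset()
            action = policy(state)
            done = False
            while not done:
                next_state, reward, done = env.step(action)
                next_action = policy(next_state)
                # SARSA update: bootstrap on the action actually taken next.
                target = reward + (0 if done else gamma * Q[(next_state, next_action)])
                Q[(state, action)] += alpha * (target - Q[(state, action)])
                state, action = next_state, next_action

        return Q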
DOI: 10.1016/j.eswa.2022.116722