SBFT Tool Competition 2024 - Python Test Case Generation Track

Detailed bibliography
Published in: 2024 IEEE/ACM International Workshop on Search-Based and Fuzz Testing (SBFT), pp. 37-40
Main authors: Erni, Nicolas; Mohammed, Al-Ameen Mohammed Ali; Birchler, Christian; Derakhshanfar, Pouria; Lukasczyk, Stephan; Panichella, Sebastiano
Format: Conference paper
Language: English
Published: ACM, 14 April 2024
Description
Summary: Test case generation (TCG) for Python poses distinctive challenges due to the language's dynamic nature and the absence of strict type information. Previous research has successfully explored automated unit TCG for Python, with solutions outperforming random test generation methods. Nevertheless, fundamental issues persist, hindering the practical adoption of existing test case generators. To address these challenges, we report on the organization, challenges, and results of the first edition of the Python Testing Competition. Four tools, namely UTBotPython, Klara, Hypothesis Ghostwriter, and Pynguin, were executed on a benchmark set consisting of 35 Python source files sampled from 7 open-source Python projects for a time budget of 400 seconds. We considered one configuration of each tool for each test subject and evaluated the tools' effectiveness in terms of code and mutation coverage. This paper describes our methodology, the analysis of the results together with the competing tools, and the challenges faced while running the competition experiments.
DOI: 10.1145/3643659.3643930
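
Illustration: the abstract above names Hypothesis Ghostwriter among the competing generators. Below is a minimal sketch of the kind of property-based test such a generator can emit; the module name mymodule and the function gcd are hypothetical stand-ins, not part of the competition benchmark, and the exact output of the competing tools is not described in this record.

    # Minimal sketch of a Ghostwriter-style property-based test.
    # `mymodule.gcd` is a hypothetical subject under test.
    from hypothesis import given, strategies as st

    from mymodule import gcd  # hypothetical module under test


    @given(a=st.integers(min_value=0), b=st.integers(min_value=0))
    def test_fuzz_gcd(a: int, b: int) -> None:
        # Call the function under test with generated inputs; Hypothesis
        # reports any input combination that raises an unexpected exception.
        gcd(a=a, b=b)

Tests of this shape can then be run with pytest, and their code coverage measured with standard tooling, which is the general style of evaluation (code and mutation coverage) described in the abstract.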