TestGenie: An Automated Unit Test Generation Tool in Python
Saved in:
| Title: | TestGenie: An Automated Unit Test Generation Tool in Python |
|---|---|
| Authors: | Seitz, Charles |
| Other Authors: | University of Scranton. Department of Computing Sciences |
| Publisher: | University of Scranton |
| Publication Year: | 2025 |
| Collection: | The University of Scranton Digital Collections |
| Subjects: | University of Scranton -- Dissertations, Academic theses, Python (Computer program language), Computer software -- Testing |
| Time: | 2020-2029 |
| Description: | Unit testing is a crucial step in the software development lifecycle, designed to ensure the correctness and reliability of individual software units. These tests help verify that individual functions or classes behave as expected, thereby reducing bugs, increasing developer confidence, and facilitating maintenance and refactoring. As the scale and complexity of software systems continue to grow, so does the need for efficient and scalable testing strategies. However, manual test creation is labor-intensive, error-prone, and subject to inconsistency in test quality. Developers face challenges related to time constraints, evolving codebases, and increasing pressure to maintain high code coverage metrics. These challenges have inspired a broad range of research and tools aimed at automating the test generation process, including academic frameworks like Randoop (Pacheco et al., 2007) for feedback-directed test generation in Java and EvoSuite (Fraser and Arcuri, 2011) for evolutionary test generation. This thesis presents an approach to unit test generation using several test case analysis strategies, and ultimately aims to support reinforcement learning. Reinforcement learning provides a dynamic, feedback-driven mechanism through which software can learn a policy to generate optimized test cases that maximize code coverage while minimizing redundancy. The system developed in this research is a command-line tool designed to work with Python codebases. The tool is extensible, supporting multiple output formats such as PyTest, Unittest, and Doctest, and is built with a modular pipe-and-filter architecture intended to promote scalability and maintainability. This document outlines the motivations, design, requirements, architecture, and capabilities of the tool, aiming to demonstrate its potential as a practical solution to the unit test generation problem. |
| Publication Type: | text |
| File Description: | application/pdf |
| Language: | English |
| Relation: | Master of Science in Software Engineering; University of Scranton Archives; University of Scranton Masters and Honors Theses; University of Scranton Masters Theses; MT_Seitz_C_2025; http://digitalservices.scranton.edu/u?/p15111coll1,1480 |
| Availability: | http://digitalservices.scranton.edu/u?/p15111coll1,1480 |
| Rights: | http://rightsstatements.org/vocab/InC/1.0/ ; The author of this work retains the copyright. The University of Scranton does not have permission from the author to provide access to this work in the Library's Digital Collections. The print thesis is available for review in the University Archives reading room. |
| Document ID: | edsbas.7DA08534 |
| Database: | BASE |