An Empirical Investigation on the Readability of Manual and Generated Test Cases

Detailed Bibliography
Published in: 2018 IEEE/ACM 26th International Conference on Program Comprehension (ICPC), pp. 348-3483
Main Authors: Grano, Giovanni; Scalabrino, Simone; Gall, Harald C.; Oliveto, Rocco
Format: Conference paper
Language: English
Published: ACM, 01.05.2018
ISSN: 2643-7171
Description
Summary: Software testing is one of the most crucial tasks in the typical development process. Developers are usually required to write unit test cases for the code they implement. Since this is a time-consuming task, many approaches and tools for automatic test case generation, such as EvoSuite, have been introduced in recent years. Nevertheless, developers have to maintain and evolve tests to keep up with changes in the source code; having readable test cases is therefore important to ease this process. However, it is still not clear whether developers make an effort to write readable unit tests. In this paper, we therefore conduct an exploratory study comparing the readability of manually written test cases with that of the classes they test. We then deepen this analysis by looking at the readability of automatically generated test cases. Our results suggest that developers tend to neglect the readability of test cases, and that automatically generated test cases are generally even less readable than manually written ones.
DOI: 10.1145/3196321.3196363
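
As an illustration of the readability gap the study examines (not an excerpt from the paper), the sketch below contrasts a manually written JUnit test with the naming and literal style typical of search-based generators such as EvoSuite. The ShoppingCart class and both test methods are hypothetical examples invented here.

import static org.junit.Assert.assertEquals;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;

public class ShoppingCartTest {

    // Minimal class under test, defined inline so the sketch is self-contained.
    static class ShoppingCart {
        private final List<Double> prices = new ArrayList<>();
        void addItem(String name, double price) { prices.add(price); }
        double total() { return prices.stream().mapToDouble(Double::doubleValue).sum(); }
    }

    // Style of a manually written test: descriptive name, focused scenario,
    // and an assertion that states the expected behaviour directly.
    @Test
    public void totalReflectsAllAddedItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 12.50);
        cart.addItem("pen", 1.20);
        assertEquals(13.70, cart.total(), 0.001);
    }

    // Style typical of automatically generated tests (e.g. EvoSuite output):
    // generic names and literals chosen for coverage rather than for a reader.
    @Test
    public void test0() throws Throwable {
        ShoppingCart shoppingCart0 = new ShoppingCart();
        shoppingCart0.addItem("", 0.0);
        double double0 = shoppingCart0.total();
        assertEquals(0.0, double0, 0.01);
    }
}

Both tests exercise the same class, but the second conveys far less intent to a maintainer, which is the kind of difference the paper's readability comparison is about.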