An Empirical Investigation on the Readability of Manual and Generated Test Cases

Bibliographic Details
Published in: 2018 IEEE/ACM 26th International Conference on Program Comprehension (ICPC), pp. 348 - 351
Main Authors: Grano, Giovanni, Scalabrino, Simone, Gall, Harald C., Oliveto, Rocco
Format: Conference Proceeding
Language: English
Published: ACM 01.05.2018
Description
Summary: Software testing is one of the most crucial tasks in the typical development process. Developers are usually required to write unit test cases for the code they implement. Since this is a time-consuming task, in recent years many approaches and tools for automatic test case generation - such as EvoSuite - have been introduced. Nevertheless, developers have to maintain and evolve tests to keep up with changes in the source code; readable test cases are therefore important to ease this process. However, it is still not clear whether developers put effort into writing readable unit tests. In this paper, we therefore conduct an exploratory study comparing the readability of manually written test cases with that of the classes they test. Moreover, we deepen this analysis by examining the readability of automatically generated test cases. Our results suggest that developers tend to neglect the readability of test cases and that automatically generated test cases are generally even less readable than manually written ones.
ISSN: 2643-7171
DOI: 10.1145/3196321.3196363
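
This record contains no code from the paper, but the readability gap described in the summary can be illustrated with a small, hypothetical Java sketch. Everything below is invented for illustration (the ShoppingCart class, the test names, and the literal values do not come from the study): it contrasts a hand-written JUnit 4 test with a test written in the naming style that generators such as EvoSuite typically emit.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class ReadabilityExample {

    // Minimal class under test, invented for this sketch.
    static class ShoppingCart {
        private final List<Double> prices = new ArrayList<>();

        void addItem(double price) {
            prices.add(price);
        }

        double total() {
            return prices.stream().mapToDouble(Double::doubleValue).sum();
        }
    }

    // Manually written style: intention-revealing name, meaningful values.
    @Test
    public void totalIsSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem(2.50);
        cart.addItem(1.50);
        assertEquals(4.00, cart.total(), 0.001);
    }

    // Generator style: opaque test name, numbered variable names, arbitrary
    // literals, as tools like EvoSuite often produce.
    @Test
    public void test0() {
        ShoppingCart shoppingCart0 = new ShoppingCart();
        shoppingCart0.addItem(0.0);
        shoppingCart0.addItem(1316.4);
        double double0 = shoppingCart0.total();
        assertEquals(1316.4, double0, 0.01);
    }
}

Both tests exercise the same behavior, but the first communicates its intent through its name and input values, while the second must be read line by line to be understood - the kind of gap the study's readability comparison is concerned with.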