Many software projects are built from reusable components that are integrated to fulfill the system's requirements. When creating component-based systems, it is crucial not only to test the individual components but also to test how they interact once integrated. Various methods exist for automatically generating test cases, and when researchers develop new test generation tools, they need to evaluate their effectiveness. The aim of this thesis is to investigate how current evaluations of integration testing tools for component-based software systems are conducted and whether they allow different approaches to be benchmarked and compared. To this end, existing literature on evaluation approaches for automated integration testing is examined. The literature review reveals that most evaluations focus on specific subject applications, each with different characteristics, and that the information reported about these applications is inconsistent: different metrics are used, and in some cases important variables are not reported at all. The review therefore concludes that the diversity of subject applications and the lack of consistent reporting make comparisons between approaches difficult. To improve comparability and facilitate benchmarking, the thesis proposes developing a reference application for future evaluations; such a standardized reference application would provide common ground for evaluating and comparing integration testing tools.
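To make the distinction between testing individual components and testing their interaction concrete, the following minimal sketch uses two hypothetical components, InventoryStore and OrderService, invented purely for illustration and not taken from the thesis or the tools it surveys. It places a unit test of one component in isolation next to an integration test that exercises both components together.

```python
import unittest


class InventoryStore:
    """Toy component that tracks stock levels (hypothetical example)."""

    def __init__(self):
        self._stock = {}

    def add(self, item, quantity):
        self._stock[item] = self._stock.get(item, 0) + quantity

    def remove(self, item, quantity):
        if self._stock.get(item, 0) < quantity:
            raise ValueError("insufficient stock")
        self._stock[item] -= quantity

    def level(self, item):
        return self._stock.get(item, 0)


class OrderService:
    """Toy component that places orders against an inventory (hypothetical example)."""

    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, item, quantity):
        # Placing an order reduces the stock held by the inventory component.
        self._inventory.remove(item, quantity)
        return f"order:{item}:{quantity}"


class InventoryUnitTest(unittest.TestCase):
    """Unit test: exercises InventoryStore in isolation."""

    def test_add_and_level(self):
        store = InventoryStore()
        store.add("widget", 5)
        self.assertEqual(store.level("widget"), 5)


class OrderingIntegrationTest(unittest.TestCase):
    """Integration test: exercises OrderService and InventoryStore together."""

    def test_order_reduces_stock(self):
        store = InventoryStore()
        store.add("widget", 5)
        service = OrderService(store)
        service.place_order("widget", 2)
        self.assertEqual(store.level("widget"), 3)


if __name__ == "__main__":
    unittest.main()
```

The tools whose evaluations the thesis surveys aim to generate tests of the second kind automatically; the sketch only illustrates what is exercised at each testing level.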
Project information
Finished
Bachelor
Charlotte Olischläger
An Approach for Vertical Reuse of Unit Test Cases to Automate Integration Testing
2024-002