COST-COGNIZANT COMBINATORIAL TEST CASE PRIORITIZATION

Author(s):  
Ziyuan Wang ◽  
Lin Chen ◽  
Baowen Xu ◽  
Yan Huang

Combinatorial testing has been widely used in practice. It is usually assumed that all test cases in a combinatorial test suite will be executed to completion. However, in many scenarios where combinatorial testing is needed, such as regression testing, the entire combinatorial test suite cannot be run completely because of test resource constraints. To improve testing efficiency, a combinatorial test case prioritization technique is required. For the regression testing scenario, this paper proposes a new cost-cognizant combinatorial test case prioritization technique that takes both combination weights and test costs into account. We propose a series of metrics with physical meaning, which assess the combinatorial coverage efficiency of a test suite, to guide the prioritization of combinatorial test cases, and we employ two heuristic test case prioritization algorithms based on the total and additional techniques, respectively. Simulation results illustrate several properties and advantages of the proposed technique.
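The abstract does not spell out its coverage metrics or algorithms, so the following is only a minimal sketch, under stated assumptions, of the cost-cognizant "additional" greedy idea: at each step, pick the test case that covers the largest additional weight of t-way value combinations per unit cost. The tuple encoding of test cases, the uniform default weight, and the function names are illustrative, not the paper's definitions.

    from itertools import combinations

    def t_way_combos(test, t=2):
        """All t-way (parameter, value) combinations covered by one test case."""
        indexed = list(enumerate(test))            # (parameter index, value) pairs
        return {frozenset(c) for c in combinations(indexed, t)}

    def additional_greedy(tests, costs, weights, t=2):
        """Order tests by additional covered combination weight per unit cost."""
        remaining = list(range(len(tests)))
        covered, order = set(), []
        while remaining:
            def gain(i):
                new = t_way_combos(tests[i], t) - covered
                return sum(weights.get(c, 1.0) for c in new) / costs[i]
            best = max(remaining, key=gain)
            covered |= t_way_combos(tests[best], t)
            order.append(best)
            remaining.remove(best)
        return order

    # toy combinatorial suite: 3 parameters, each with 2 values
    suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    print(additional_greedy(suite, costs=[1.0, 2.0, 1.0, 1.5], weights={}))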

2013 ◽  
Vol 10 (1) ◽  
pp. 73-102 ◽  
Author(s):  
Lijun Mei ◽  
Yan Cai ◽  
Changjiang Jia ◽  
Bo Jiang ◽  
W.K. Chan

Many web services not only communicate through XML-based messages but may also dynamically modify their behaviors by applying different interpretations to XML messages through updates to the associated XML Schemas or XML-based interface specifications. Such artifacts are usually complex, allowing XML-based messages that conform to these specifications to be structurally complex. Testing should cover all such scenarios cost-effectively. Test case prioritization is a dimension of regression testing that guards a program against unintended modifications by reordering the test cases within a test suite. However, many existing test case prioritization techniques for regression testing treat test cases of different complexity generically. In this paper, the authors exploit insights on the structural similarity of XML-based artifacts between test cases in both static and dynamic dimensions, and propose a family of test case prioritization techniques that selects pairs of test cases without replacement in turn. To the best of their knowledge, it is the first test case prioritization proposal that selects test case pairs for prioritization. The authors validate their techniques on a suite of benchmarks. The empirical results show that, when incorporating all dimensions, some members of the technique family can be more effective than conventional coverage-based techniques.
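The abstract does not define the structural similarity measure, so the sketch below only illustrates the pair-selection idea: test cases are compared by a crude structural signature (here, the set of XML element tags they exchange, a stand-in assumption), and the most dissimilar remaining pair is selected without replacement on each round.

    from itertools import combinations
    import xml.etree.ElementTree as ET

    def tag_set(xml_messages):
        """Crude structural signature: the set of element tags used by a test case."""
        tags = set()
        for msg in xml_messages:
            tags.update(elem.tag for elem in ET.fromstring(msg).iter())
        return tags

    def jaccard_distance(a, b):
        return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

    def prioritize_in_pairs(testcases):
        """Repeatedly pick the most structurally dissimilar remaining pair."""
        sigs = {name: tag_set(msgs) for name, msgs in testcases.items()}
        remaining, order = set(sigs), []
        while len(remaining) >= 2:
            t1, t2 = max(combinations(remaining, 2),
                         key=lambda p: jaccard_distance(sigs[p[0]], sigs[p[1]]))
            order += [t1, t2]
            remaining -= {t1, t2}
        order += list(remaining)           # odd leftover test case, if any
        return order

    suite = {
        "tc1": ["<order><item id='1'/></order>"],
        "tc2": ["<order><item id='1'/><coupon/></order>"],
        "tc3": ["<cancel><reason/></cancel>"],
    }
    print(prioritize_in_pairs(suite))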


2021 ◽  
Vol 27 (2) ◽  
pp. 170-189
Author(s):  
P. K. Gupta

Software is an integration of numerous programming modules (e.g., functions, procedures, legacy systems, reusable components) that are tested and combined to build the entire system. However, undesired faults may be introduced when modules change during verification and validation. Retesting the entire software is a costly affair in terms of money and time; therefore, to avoid it, regression testing is performed. In regression testing, a previously created test suite is used to retest the modified modules of the software system. Regression testing works in three manners: minimizing test cases, selecting test cases, and prioritizing test cases. In this paper, a two-phase algorithm has been proposed that combines test case selection and test case prioritization for performing regression testing on several procedural-language modules, ranging from small to very large numbers of lines of code. A textual differencing algorithm has been implemented for test case selection: program statements that differ between two versions of a module are identified and used to select the test cases that exercise the modified statements. In the next step, test case prioritization is implemented by applying a genetic algorithm for code/condition coverage. The genetic operators, crossover and mutation, are applied over the initial population (i.e., the test cases), taking code/condition coverage as the fitness criterion to produce a prioritized test suite. The prioritization algorithm can be applied to either the original or the reduced test suite, depending on the test suite's size or the need for accuracy. In the obtained results, the efficiency of the prioritization algorithm has been analyzed using the Average Percentage of Code Coverage (APCC) and the Average Percentage of Code Coverage with cost (APCCc). A comparison of the proposed approach with previously proposed methods shows that the APCC and APCCc values reach high percentages faster for the prioritized test suite than for the non-prioritized test suite.
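As a rough illustration of the second phase, here is a minimal genetic-algorithm sketch under common assumptions: chromosomes are permutations of test case IDs, fitness rewards orderings that accumulate statement coverage early (APCC-style), and order crossover plus swap mutation serve as the genetic operators. The coverage map and parameter values are illustrative, not the paper's settings.

    import random

    def coverage_fitness(order, coverage):
        """Reward orderings that accumulate statement coverage early (APCC-style)."""
        total = set().union(*coverage.values())
        covered, score = set(), 0.0
        for rank, tc in enumerate(order, start=1):
            new = coverage[tc] - covered
            covered |= new
            score += len(new) * (len(order) - rank + 1)   # earlier gains weigh more
        return score / (len(total) * len(order))

    def order_crossover(p1, p2):
        """Keep a random slice of parent 1, fill the remaining genes in parent 2's order."""
        a, b = sorted(random.sample(range(len(p1)), 2))
        middle = p1[a:b]
        rest = [g for g in p2 if g not in middle]
        return rest[:a] + middle + rest[a:]

    def ga_prioritize(coverage, pop_size=20, gens=50, pmut=0.2):
        tests = list(coverage)
        pop = [random.sample(tests, len(tests)) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda o: coverage_fitness(o, coverage), reverse=True)
            nxt = pop[:2]                                      # elitism
            while len(nxt) < pop_size:
                p1, p2 = random.sample(pop[:pop_size // 2], 2)  # truncation selection
                child = order_crossover(p1, p2)
                if random.random() < pmut:                      # swap mutation
                    i, j = random.sample(range(len(child)), 2)
                    child[i], child[j] = child[j], child[i]
                nxt.append(child)
            pop = nxt
        return max(pop, key=lambda o: coverage_fitness(o, coverage))

    # toy statement-coverage data: test id -> set of covered statement numbers
    cov = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 5, 6}}
    print(ga_prioritize(cov))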


Author(s):  
Abhishek Pandey ◽  
Soumya Banerjee

This article describes the application of search-based techniques to regression testing and compares the performance of various search-based techniques for software testing. Test cases tend to grow rapidly as the software is modified, so it is essential to remove redundant test cases from the existing test suite. Regression testing is very costly and must be performed in restricted ways to ensure the validity of the existing software. Different methods exist to improve the quality of test cases in terms of the number of faults covered, as opposed to the number of statements covered, in a minimum time; these include test suite minimization, test case selection, and test case prioritization. In this article, search-based methods are applied to improve the quality of the test suite by selecting a minimum set of test cases that covers all the statements in a minimum time. The whole approach is named search-based regression testing. The article also compares the performance of different metaheuristics for the test suite minimization problem with a hybrid of the ant colony optimization and genetic algorithms.
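The article's metaheuristics are not reproduced here; as a point of reference, the sketch below shows the classical greedy set-cover baseline for the same minimization objective, selecting a small set of test cases that covers all statements. The coverage data is illustrative.

    def greedy_minimize(coverage):
        """Greedy set-cover baseline: pick the test covering the most uncovered statements."""
        uncovered = set().union(*coverage.values())
        selected = []
        while uncovered:
            best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
            if not coverage[best] & uncovered:
                break                       # remaining statements are uncoverable
            selected.append(best)
            uncovered -= coverage[best]
        return selected

    cov = {"t1": {1, 2}, "t2": {2, 3, 4}, "t3": {4, 5}, "t4": {1, 5}}
    print(greedy_minimize(cov))             # ['t2', 't4'] on this toy example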


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Ali M. Alakeel

Program assertions have been recognized as a supporting tool during software development, testing, and maintenance. Therefore, software developers place assertions within their code at positions that are considered error prone or that have the potential to lead to a software crash or failure. Like any other software, programs with assertions must be maintained. Depending on the type of modification applied to the program, assertions might also have to be modified; new assertions may be introduced in the new version of the program, while others can be kept unchanged. This paper presents a novel approach for test case prioritization during regression testing of programs that contain assertions, using fuzzy logic. The main objective of this approach is to prioritize the test cases according to their estimated potential for violating a given program assertion. To develop the proposed approach, we utilize fuzzy logic techniques to estimate the effectiveness of a given test case in violating an assertion based on the history of the test cases in previous testing operations. We have conducted a case study in which the proposed approach is applied to various programs, and the results are promising compared to untreated and randomly ordered test cases.
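The paper's actual fuzzy rule base and membership functions are not given in the abstract, so the following is only an illustrative Sugeno-style sketch: two assumed history features per test case (past violation rate for the assertion and recency of the last violation) are fuzzified and combined by a small rule set into a priority score.

    def tri(x, a, b, c):
        """Triangular membership function."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def violation_priority(violation_rate, recency):
        """Sugeno-style score in [0, 1] from two fuzzy inputs.
        violation_rate: fraction of past runs in which this test violated the assertion.
        recency:        1.0 if the last violation was very recent, 0.0 if long ago."""
        low = tri(violation_rate, -0.5, 0.0, 0.5)
        med = tri(violation_rate, 0.0, 0.5, 1.0)
        high = tri(violation_rate, 0.5, 1.0, 1.5)
        old, fresh = 1.0 - recency, recency
        # Rule strengths (AND = min) paired with crisp rule outputs.
        rules = [
            (min(high, fresh), 1.0),   # violated often and recently -> top priority
            (min(high, old),   0.7),
            (min(med,  fresh), 0.6),
            (min(med,  old),   0.4),
            (min(low,  fresh), 0.3),
            (min(low,  old),   0.1),
        ]
        total = sum(w for w, _ in rules)
        return sum(w * out for w, out in rules) / total if total else 0.0

    tests = {"t1": (0.8, 0.9), "t2": (0.2, 0.1), "t3": (0.5, 0.5)}
    order = sorted(tests, key=lambda t: violation_priority(*tests[t]), reverse=True)
    print(order)    # t1, with frequent recent violations, ranks first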


Test case prioritization (TCP) is a software testing technique that finds an ideal ordering of test cases for regression testing, so that testers can obtain the maximum benefit from their test suite even if the testing process is stopped at some arbitrary point. The recent trend in software development is the object-oriented (OO) paradigm. This paper proposes a cost-cognizant TCP approach for object-oriented software that uses path-based integration testing. Path-based integration testing identifies the possible execution paths and extracts them from the Java System Dependence Graph (JSDG) model of the source code using a forward slicing technique. Afterward, an evolutionary algorithm (EA) is employed to prioritize test cases based on the severity detected per unit cost for each of the dependent faults. The proposed technique, known as Evolutionary Cost-Cognizant Regression Test Case Prioritization (ECRTP), is implemented as a regression testing approach for the experiment.
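The EA itself is not detailed in the abstract; the sketch below only illustrates a plausible fitness for "severity detected per unit cost", namely the standard cost-cognizant APFDc metric computed for a candidate ordering. The fault, severity, and cost data are assumptions.

    def apfd_c(order, costs, fault_severity, detects):
        """Cost-cognizant APFD (APFDc): weights each fault by its severity and each
        test by its cost; higher is better. detects[t] is the set of faults test t reveals."""
        total_cost = sum(costs[t] for t in order)
        total_sev = sum(fault_severity.values())
        num = 0.0
        for f, sev in fault_severity.items():
            for i, t in enumerate(order):               # first test that detects fault f
                if f in detects[t]:
                    # cost of the detecting test and all later tests, minus half its own cost
                    num += sev * (sum(costs[x] for x in order[i:]) - 0.5 * costs[t])
                    break
        return num / (total_cost * total_sev)

    order = ["t2", "t1", "t3"]
    costs = {"t1": 2.0, "t2": 1.0, "t3": 3.0}
    severity = {"f1": 3.0, "f2": 1.0}
    detects = {"t1": {"f1"}, "t2": {"f2"}, "t3": {"f1", "f2"}}
    print(apfd_c(order, costs, severity, detects))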


Regression testing is performed to confirm that changes in a software program do not disturb the existing characteristics of the software. As the software evolves, the test suite tends to grow in size, which makes it very costly to execute, and thus the test cases need to be prioritized so that the most effective ones are selected for software testing. In this paper, a test case prioritization technique for regression testing is proposed using a novel optimization algorithm known as the Taylor series-based Jaya Optimization Algorithm (Taylor-JOA), which integrates the Taylor series into the Jaya Optimization Algorithm (JOA). The optimal test cases are selected based on a fitness function modelled on two criteria, namely fault detection and branch coverage. The proposed Taylor-JOA is evaluated using the metrics Average Percentage of Fault Detected (APFD) and Average Percentage of Branch Coverage (APBC). The APFD and APBC of the proposed Taylor-JOA are 0.995 and 0.9917, respectively, which are higher than those of the existing methods, showing the effectiveness of the proposed Taylor-JOA for test case prioritization.
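For reference, the APFD metric used in the evaluation has a standard closed form; the sketch below computes it for an ordering (APBC is analogous, with branches in place of faults). The fault-detection data is illustrative.

    def apfd(order, detects, faults):
        """Average Percentage of Faults Detected for a test ordering.
        APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n),
        where TF_i is the 1-based position of the first test that detects fault i."""
        n, m = len(order), len(faults)
        tf_sum = 0
        for f in faults:
            tf_sum += next(i for i, t in enumerate(order, start=1) if f in detects[t])
        return 1.0 - tf_sum / (n * m) + 1.0 / (2 * n)

    detects = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": {"f1", "f3"}}
    print(apfd(["t2", "t3", "t1"], detects, faults=["f1", "f2", "f3"]))   # ~0.72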


2018 ◽  
Vol 7 (3.8) ◽  
pp. 22 ◽  
Author(s):  
Dr V. Chandra Prakash ◽  
Subhash Tatale ◽  
Vrushali Kondhalkar ◽  
Laxmi Bewoor

In the software development life cycle, testing plays a significant role in verifying the requirement specification, analysis, design, and coding, and in estimating the reliability of a software system. A test manager can write a set of test cases manually for smaller software systems. However, for an extensive software system, the test suite is normally large and prone to errors such as omission of important test cases, duplication of test cases, and contradictory test cases. When test cases are generated automatically by a tool in an intelligent way, such errors can be eliminated. In addition, it is even possible to reduce the size of the test suite and thereby decrease the cost and time of software testing. Reducing test suite size is a challenging job. When the inputs of the Software under Test (SUT) interact, combinatorial testing is highly essential to raise reliability, for example from 72% to 91% or even more. A meta-heuristic algorithm such as Particle Swarm Optimization (PSO) can solve the optimization problem of automated combinatorial test case generation, and many authors have contributed to this field using PSO algorithms. We have reviewed some important research papers on automated test case generation for combinatorial testing using PSO. This paper provides a critical review of the use of PSO and its variants for solving the classical optimization problem of automatic test case generation for combinatorial testing.
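The reviewed papers differ in their PSO encodings, so the sketch below only shows the common "one test at a time" formulation of the underlying objective: a candidate test is scored by how many not-yet-covered pairwise interactions it would cover. A random candidate pool stands in for the PSO swarm, whose velocity/position update is omitted.

    from itertools import combinations
    import random

    def pairs(test):
        """All pairwise (parameter, value) interactions in one candidate test."""
        return {frozenset(c) for c in combinations(enumerate(test), 2)}

    def fitness(test, uncovered):
        """Number of still-uncovered pairwise interactions this candidate would cover."""
        return len(pairs(test) & uncovered)

    def one_test_at_a_time(domains, candidates_per_test=50):
        """Greedy covering-array construction; the random candidate pool is a stand-in
        for the PSO swarm that would normally search for the best next test."""
        all_pairs = set()
        for (i, vi), (j, vj) in combinations(enumerate(domains), 2):
            all_pairs |= {frozenset({(i, a), (j, b)}) for a in vi for b in vj}
        uncovered, suite = all_pairs, []
        while uncovered:
            pool = [tuple(random.choice(d) for d in domains)
                    for _ in range(candidates_per_test)]
            best = max(pool, key=lambda t: fitness(t, uncovered))
            suite.append(best)
            uncovered -= pairs(best)
        return suite

    domains = [[0, 1], [0, 1, 2], ["x", "y"]]
    print(one_test_at_a_time(domains))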


2016 ◽  
Vol 2016 ◽  
pp. 1-19 ◽  
Author(s):  
Rongcun Wang ◽  
Shujuan Jiang ◽  
Deng Chen ◽  
Yanmei Zhang

Similarity-based test case prioritization algorithms have been applied to regression testing. The common characteristic of these algorithms is that they reschedule the execution order of test cases according to the distances between pairs of test cases. The distance information can be calculated with different similarity measures, and since the topologies vary with the measure used, the resulting pairwise distances differ. Similarity measures could therefore significantly influence the effectiveness of test case prioritization. We empirically evaluate the effects of six similarity measures on two similarity-based test case prioritization algorithms. The obtained results are statistically analyzed to recommend the best combination of similarity-based prioritization algorithm and similarity measure. The experimental results, confirmed by a statistical analysis, indicate that Euclidean distance is more efficient in finding defects than the other similarity measures. The combination of the global similarity-based prioritization algorithm and Euclidean distance could be a better choice: it yields not only higher fault detection effectiveness but also a smaller standard deviation. The goal of this study is to provide practical guidance for picking an appropriate combination of similarity-based test case prioritization technique and similarity measure.
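The abstract does not fully specify the global similarity-based algorithm; the sketch below interprets it as a farthest-first ordering over per-test coverage profiles using Euclidean distance, which keeps structurally dissimilar test cases early in the order. The seeding rule and profile data are assumptions.

    import math

    def euclidean(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    def global_similarity_prioritize(profiles):
        """Farthest-first ordering: repeatedly pick the test whose minimum Euclidean
        distance to the already-selected tests is largest, so dissimilar tests run early."""
        names = list(profiles)
        # seed with the test farthest from the all-zero profile (an arbitrary choice)
        order = [max(names, key=lambda t: euclidean(profiles[t], [0] * len(profiles[t])))]
        remaining = [t for t in names if t != order[0]]
        while remaining:
            nxt = max(remaining,
                      key=lambda t: min(euclidean(profiles[t], profiles[s]) for s in order))
            order.append(nxt)
            remaining.remove(nxt)
        return order

    # binary statement-coverage vectors as similarity profiles (illustrative data)
    profiles = {"t1": [1, 1, 0, 0], "t2": [1, 1, 1, 0], "t3": [0, 0, 1, 1], "t4": [1, 0, 0, 1]}
    print(global_similarity_prioritize(profiles))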


2018 ◽  
Vol 7 (2.28) ◽  
pp. 332 ◽  
Author(s):  
Lei Xiao ◽  
Huaikou Miao ◽  
Ying Zhong

Regression testing is a very important activity in continuous integration development environments. Software engineers frequently integrate new or changed code, which triggers a new round of regression testing, and regression testing in continuous integration environments comes with tight time constraints, so it is impossible to re-run all the test cases. Test case prioritization and selection techniques are therefore often used to render continuous integration processes more cost-effective. Based on multi-objective optimization, we present a test case prioritization and selection technique, TCPSCI, to satisfy time constraints and achieve testing goals in continuous integration development environments. We order and select test cases based on historical failure data, the code size covered by each test, and test execution time. Within the same change request, test cases that maximize code coverage, have shorter execution times, and reveal the most recent faults receive higher priority. The case study results show that using TCPSCI is more cost-effective than manual prioritization.
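TCPSCI's exact multi-objective formulation is not given in the abstract; the sketch below uses a weighted-sum stand-in that scores each test from the three stated signals (historical failures, covered code size, execution time) and then greedily fills a CI time budget. The weights and feature values are assumptions.

    def normalize(values):
        lo, hi = min(values.values()), max(values.values())
        return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in values.items()}

    def tcp_ci(failures, coverage, runtime, budget, w=(0.5, 0.3, 0.2)):
        """Score tests by recent failures, covered code size, and (inverse) runtime,
        then greedily select the highest-scoring tests that fit the time budget."""
        f, c, r = normalize(failures), normalize(coverage), normalize(runtime)
        score = {t: w[0] * f[t] + w[1] * c[t] + w[2] * (1.0 - r[t]) for t in failures}
        selected, used = [], 0.0
        for t in sorted(score, key=score.get, reverse=True):   # prioritized order
            if used + runtime[t] <= budget:
                selected.append(t)
                used += runtime[t]
        return selected

    failures = {"t1": 3, "t2": 0, "t3": 1}          # recent failure counts (assumed feature)
    coverage = {"t1": 120, "t2": 400, "t3": 250}    # covered lines of code
    runtime = {"t1": 5.0, "t2": 30.0, "t3": 12.0}   # seconds
    print(tcp_ci(failures, coverage, runtime, budget=20.0))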


2019 ◽  
Vol 8 (3) ◽  
pp. 6004-6009

Countless optimization problems have been accelerated by Nature-Inspired Metaheuristic Optimization Algorithms (NIMOA) in recent decades, and NIMOA have gained huge popularity owing to their effective results. In this study, one such algorithm, the Cuckoo Search Algorithm (CSA), is used to prioritize (order) the test cases for Regression Testing (RT). Prioritization aids in executing higher-priority test cases first, giving early fault detection. This research adopts the aggressive reproduction strategy of cuckoos to prioritize the test cases for RT. The Average Percentage of Fault Detected (APFD) metric is used to validate the results and to compare the performance of CSA with the Flower Pollination Algorithm (FPA) and traditional approaches for Test Case Prioritization (TCP). Two Java applications, Puzzle Game and AreaandPerimeter, are used for the study, and CSA is implemented in Java on the Eclipse platform. The study shows that the APFD results of CSA outperform those of FPA for both applications, and that the prioritized test cases given by the NIMOA outperform the APFD results of the traditional approaches; CSA also performs better than FPA for TCP.
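As a rough, heavily simplified reading of cuckoo search applied to orderings: nests hold candidate permutations, new "eggs" are produced by a few random swaps (a stand-in for the Levy-flight step), they replace the worst nest when fitter under APFD, and a fraction of the worst nests is abandoned each round. None of the parameter values or data come from the paper.

    import random

    def apfd(order, detects, m):
        """Inline APFD fitness (m = number of known faults, numbered 0..m-1)."""
        n = len(order)
        tf = [next(i for i, t in enumerate(order, 1) if f in detects[t]) for f in range(m)]
        return 1.0 - sum(tf) / (n * m) + 1.0 / (2 * n)

    def cuckoo_prioritize(detects, m, nests=10, iters=200, abandon=0.25):
        tests = list(detects)
        pop = [random.sample(tests, len(tests)) for _ in range(nests)]
        fit = lambda o: apfd(o, detects, m)
        for _ in range(iters):
            # lay a new "egg": perturb a random nest with a few swaps (Levy-like step)
            egg = random.choice(pop)[:]
            for _ in range(random.randint(1, 3)):
                i, j = random.sample(range(len(egg)), 2)
                egg[i], egg[j] = egg[j], egg[i]
            worst = min(range(nests), key=lambda k: fit(pop[k]))
            if fit(egg) > fit(pop[worst]):
                pop[worst] = egg
            # abandon a fraction of the worst nests and rebuild them randomly
            pop.sort(key=fit)
            for k in range(int(abandon * nests)):
                pop[k] = random.sample(tests, len(tests))
        return max(pop, key=fit)

    detects = {"t1": {0}, "t2": {1, 2}, "t3": {0, 2}, "t4": {3}}
    print(cuckoo_prioritize(detects, m=4))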

