Date of Conferral

2015

Degree

Ph.D.

School

Management

Advisor

Branford McAllister

Abstract

Effective software performance testing is essential to the development and delivery of quality software products. Many software testing investigations have reported software performance testing improvements, but few have quantitatively validated measurable software testing performance improvements across an aggregate of studies. This study addressed that gap by conducting a meta-analysis to assess the relationship between applying Design of Experiments (DOE) techniques in the software testing process and the reported software performance testing improvements. Software performance testing theories and DOE techniques composed the theoretical framework for this study. Software testing studies (n = 96) were analyzed, where half had DOE techniques applied and the other half did not. Five research hypotheses were tested, with findings measured by (a) the number of detected defects, (b) the rate of defect detection, (c) the phase in which the defect was detected, (d) the total number of hours it took to complete the testing, and (e) an overall hypothesis that included all measurements for all findings. The data were analyzed by first computing standardized difference in means effect sizes and then performing statistical comparisons using the Z test, the Q test, and the t test. Results of the meta-analysis showed that applying DOE techniques in the software testing process improved software performance testing (p < .05). These results have social implications for the software testing industry and software testing professionals, providing another empirically validated testing methodology. Software organizations can use this methodology to differentiate their software testing process, to create higher-quality products, and to benefit the consumer and society in general.
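To make the statistical pipeline concrete, the following Python sketch illustrates the kind of computation the abstract describes: per-study standardized difference in means effect sizes (Hedges' g) pooled under a fixed-effect model, with an overall Z statistic and Cochran's Q statistic for heterogeneity. This is an illustrative reconstruction, not the study's actual analysis code; the function names (`hedges_g`, `fixed_effect_meta`) and the per-study summary values are hypothetical assumptions.

```python
# Illustrative sketch (not the study's code): fixed-effect meta-analysis
# over standardized difference in means effect sizes, with the overall
# Z test and Cochran's Q test. All input values are hypothetical.
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized difference in means with small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp                 # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # Hedges' small-sample correction
    g = j * d
    # Approximate sampling variance of g
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

def fixed_effect_meta(effects):
    """effects: list of (g, var_g) pairs. Returns pooled g, Z, and Q."""
    weights = [1 / v for _, v in effects]    # inverse-variance weights
    pooled = sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    z = pooled / se_pooled                   # overall Z test statistic
    q = sum(w * (g - pooled)**2              # Cochran's Q (heterogeneity)
            for (g, _), w in zip(effects, weights))
    return pooled, z, q

# Hypothetical per-study summaries:
# (mean_DOE, mean_control, sd_DOE, sd_control, n_DOE, n_control)
studies = [(42.0, 35.0, 8.0, 9.0, 24, 24),
           (55.0, 47.0, 10.0, 11.0, 30, 30)]
effects = [hedges_g(*s) for s in studies]
pooled, z, q = fixed_effect_meta(effects)
print(f"pooled g = {pooled:.3f}, Z = {z:.3f}, Q = {q:.3f}")
```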
