ABSTRACT
Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test and allow software engineers to begin locating and correcting faults earlier than might otherwise be possible. In previous work, we reported the results of studies showing that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: (1) can prioritization techniques be effective when aimed at specific modified versions? (2) what tradeoffs exist between fine-granularity and coarse-granularity prioritization techniques? (3) can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness? This paper reports the results of new experiments addressing these questions.
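Rate of fault detection is commonly quantified in this line of work by the APFD metric (average percentage of faults detected), computed from the position, in the prioritized order, of the first test case that exposes each fault. The sketch below is illustrative only (the function name and example data are not from the paper):

```python
def apfd(first_detecting_positions, num_tests):
    """Average Percentage of Faults Detected for one test ordering.

    first_detecting_positions: for each fault, the 1-based position in
    the prioritized order of the first test case that detects it.
    num_tests: total number of test cases in the suite.
    """
    m = len(first_detecting_positions)  # number of faults
    n = num_tests
    # Higher values (closer to 1) mean faults are detected earlier.
    return 1 - sum(first_detecting_positions) / (n * m) + 1 / (2 * n)

# Example: a 10-test suite whose ordering detects three faults
# first at positions 2, 1, and 4.
score = apfd([2, 1, 4], 10)
```

An ordering that pushes fault-detecting tests earlier yields a higher APFD, so prioritization techniques can be compared by the APFD scores of the orderings they produce.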