DOI: 10.1145/347324.348910

Prioritizing test cases for regression testing

Authors: S. Elbaum, A. G. Malishevsky, and G. Rothermel

Published: 01 August 2000

ABSTRACT

Test case prioritization techniques schedule test cases in an order that increases their effectiveness in meeting some performance goal. One performance goal, rate of fault detection, is a measure of how quickly faults are detected within the testing process; an improved rate of fault detection can provide faster feedback on the system under test, and let software engineers begin locating and correcting faults earlier than might otherwise be possible. In previous work, we reported the results of studies that showed that prioritization techniques can significantly improve rate of fault detection. Those studies, however, raised several additional questions: (1) can prioritization techniques be effective when aimed at specific modified versions; (2) what tradeoffs exist between fine granularity and coarse granularity prioritization techniques; (3) can the incorporation of measures of fault proneness into prioritization techniques improve their effectiveness? This paper reports the results of new experiments addressing these questions.
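To make the idea concrete, the sketch below illustrates one common coverage-based prioritization heuristic (greedy "additional" coverage) together with a simple stand-in for a rate-of-fault-detection measure. It is a minimal, hypothetical Python example: the function names, toy coverage data, and fault matrix are assumptions made for illustration, not the techniques, tools, or data used in the paper.

```python
# Minimal illustrative sketch of coverage-based test case prioritization.
# All data and names here are hypothetical; this is not the paper's tooling.

def prioritize_additional_coverage(coverage):
    """Greedy 'additional' prioritization: repeatedly pick the test that
    covers the most statements not yet covered by already-chosen tests."""
    remaining = dict(coverage)      # test name -> set of covered statements
    covered = set()
    order = []
    while remaining:
        # Pick the test adding the most new coverage (ties broken by name).
        best = max(remaining, key=lambda t: (len(remaining[t] - covered), t))
        order.append(best)
        covered |= remaining.pop(best)
    return order

def rate_of_fault_detection(order, faults):
    """Average, over faults, of how early in the order each fault is first
    exposed (1.0 = by the first test, smaller = later). A rough stand-in
    for the kind of rate-of-fault-detection measure the abstract refers to.
    Assumes every fault is detected by at least one test in the order."""
    n = len(order)
    scores = []
    for fault, detecting_tests in faults.items():
        first = min(order.index(t) for t in detecting_tests if t in order)
        scores.append(1.0 - first / n)
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Hypothetical statement coverage per test case.
    coverage = {
        "t1": {1, 2, 3},
        "t2": {3, 4},
        "t3": {1, 2, 3, 4, 5},
        "t4": {6},
    }
    # Hypothetical fault matrix: which tests expose which faults.
    faults = {"f1": {"t2", "t3"}, "f2": {"t4"}}

    order = prioritize_additional_coverage(coverage)
    print("prioritized order:", order)
    print("rate of fault detection:", rate_of_fault_detection(order, faults))
```

Fine- and coarse-granularity variants of such heuristics differ mainly in what the coverage sets record (for example, statements versus whole functions), and estimates of fault proneness can be folded in as weights when choosing the next test; the paper's experiments examine exactly these kinds of trade-offs.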



Published in

ISSTA '00: Proceedings of the 2000 ACM SIGSOFT International Symposium on Software Testing and Analysis
August 2000, 211 pages
ISBN: 1581132662
DOI: 10.1145/347324
Copyright © 2000 ACM


Publisher

Association for Computing Machinery, New York, NY, United States
