DOI: 10.1145/581339.581358
Article

The impact of test suite granularity on the cost-effectiveness of regression testing

Published: 19 May 2002

ABSTRACT

Regression testing is an expensive testing process used to validate software following modifications. The cost-effectiveness of regression testing techniques varies with characteristics of test suites. One such characteristic, test suite granularity, involves the way in which test inputs are grouped into test cases within a test suite. Various cost-benefit tradeoffs have been attributed to choices of test suite granularity, but almost no research has formally examined these tradeoffs. To address this gap, we conducted several controlled experiments examining the effects of test suite granularity on the costs and benefits of several regression testing methodologies across six releases of two non-trivial software systems. Our results expose essential tradeoffs to consider when designing test suites for use in regression testing evolving systems.
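To make the notion of test suite granularity concrete, here is a minimal illustrative sketch (not taken from the paper; the input pool, fault model, and helper names are all hypothetical): the same pool of test inputs is grouped into many small test cases (fine granularity) or a few large ones (coarse granularity), and a whole test case fails if any input in it fails.

```python
# Hypothetical pool of test inputs; granularity is how we group them
# into test cases, not which inputs we run.
INPUTS = list(range(8))

# Fine granularity: one input per test case.
fine_suite = [[i] for i in INPUTS]

# Coarse granularity: four inputs per test case.
coarse_suite = [INPUTS[i:i + 4] for i in range(0, len(INPUTS), 4)]

def run_case(case, run_input):
    """A test case fails if any of its inputs fails."""
    return all(run_input(i) for i in case)

def failing_cases(suite, run_input):
    """Count failing test cases in a suite."""
    return sum(1 for case in suite if not run_case(case, run_input))

# Suppose (hypothetically) inputs 2 and 5 expose faults.
faulty = lambda i: i not in (2, 5)

print(failing_cases(fine_suite, faulty), "of", len(fine_suite))    # 2 of 8
print(failing_cases(coarse_suite, faulty), "of", len(coarse_suite))  # 2 of 2
```

The fine suite isolates each faulty input in its own failing case, easing diagnosis, while the coarse suite runs far fewer (larger) cases but mixes faulty and passing inputs within each failing case. This is one of the cost-benefit tradeoffs the paper studies empirically.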


Published in:
ICSE '02: Proceedings of the 24th International Conference on Software Engineering
May 2002, 797 pages
ISBN: 158113472X
DOI: 10.1145/581339
Copyright © 2002 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


ICSE '02 paper acceptance rate: 45 of 303 submissions (15%). Overall acceptance rate: 276 of 1,856 submissions (15%).
