DOI: 10.1145/1273463.1273483

Pareto efficient multi-objective test case selection

Published: 9 July 2007

ABSTRACT

Previous work has treated test case selection as a single objective optimisation problem. This paper introduces the concept of Pareto efficiency to test case selection. The Pareto efficient approach takes multiple objectives such as code coverage, past fault-detection history and execution cost, and constructs a group of non-dominating, equivalently optimal test case subsets. The paper describes the potential benefits of Pareto efficient multi-objective test case selection, illustrating with empirical studies of two and three objective formulations.
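The core idea of the abstract can be sketched in code: score each candidate test subset on several objectives and keep only the non-dominated ones. The following is a minimal illustrative sketch, not the authors' implementation; the two-objective scoring (coverage to maximise, execution cost to minimise) and the candidate values are hypothetical.

```python
# Illustrative sketch of Pareto-efficient test subset selection (two objectives).
# Each candidate subset is scored as (coverage, cost): coverage is maximised,
# cost is minimised. A candidate is dominated if another is at least as good
# on both objectives and strictly better on at least one.

def dominates(a, b):
    """True if candidate a Pareto-dominates candidate b."""
    cov_a, cost_a = a
    cov_b, cost_b = b
    no_worse = cov_a >= cov_b and cost_a <= cost_b
    strictly_better = cov_a > cov_b or cost_a < cost_b
    return no_worse and strictly_better

def pareto_front(candidates):
    """Return the non-dominated (equivalently optimal) candidates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Hypothetical (coverage, cost) scores for four candidate test subsets.
subsets = [(0.9, 120), (0.9, 100), (0.7, 50), (0.6, 80)]
print(pareto_front(subsets))  # -> [(0.9, 100), (0.7, 50)]
```

The surviving candidates are mutually incomparable: one achieves higher coverage, the other lower cost, so neither dominates the other. A search-based formulation (as studied in the paper) explores the space of subsets rather than enumerating it, but the dominance relation is the same.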


Published in

ISSTA '07: Proceedings of the 2007 International Symposium on Software Testing and Analysis
July 2007, 258 pages
ISBN: 9781595937346
DOI: 10.1145/1273463

Copyright © 2007 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 58 of 213 submissions, 27%
