DOI: 10.1145/1146238.1146240
Article

TimeAware test suite prioritization

Published: 21 July 2006

ABSTRACT

Regression test prioritization is often performed in a time-constrained execution environment in which testing only occurs for a fixed time period. For example, many organizations rely upon nightly building and regression testing of their applications every time source code changes are committed to a version control repository. This paper presents a regression test prioritization technique that uses a genetic algorithm to reorder test suites in light of testing time constraints. Experimental results indicate that our prioritization approach frequently yields higher average percentage of faults detected (APFD) values, for two case study applications, when basic block-level coverage is used instead of method-level coverage. The experiments also reveal fundamental trade-offs in the performance of time-aware prioritization. This paper shows that our prioritization technique is appropriate for many regression testing environments and explains how the baseline approach can be extended to operate in additional time-constrained testing circumstances.
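For readers unfamiliar with the metric, standard APFD (Rothermel et al., 2001) rewards orderings that expose faults early: for a suite of n tests and m known faults, APFD = 1 - (TF_1 + ... + TF_m)/(n * m) + 1/(2n), where TF_i is the position of the first test revealing fault i. The sketch below is a minimal illustration of that standard formula, not the paper's implementation; the time-aware setting additionally truncates the suite at a time budget, and charging undetected faults position n + 1 is a convention assumed here for completeness.

```python
def apfd(ordering, faults_detected_by, num_faults):
    """Average Percentage of Faults Detected (standard APFD).

    ordering           -- test IDs in execution order (length n)
    faults_detected_by -- dict: test ID -> set of fault IDs it reveals
    num_faults         -- total number of seeded faults (m)
    """
    n, m = len(ordering), num_faults
    first_position = {}  # fault -> 1-based index of first revealing test
    for i, test in enumerate(ordering, start=1):
        for fault in faults_detected_by.get(test, ()):
            first_position.setdefault(fault, i)
    # Assumption: faults never detected are charged position n + 1.
    tf_sum = sum(first_position.get(f, n + 1) for f in range(m))
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

# Worked example: 5 tests, 4 seeded faults.
order = ["t3", "t1", "t5", "t2", "t4"]
reveals = {"t3": {0, 2}, "t1": {1}, "t2": {3}}
print(apfd(order, reveals, 4))  # 1 - 8/20 + 1/10 = 0.70
```

In the worked example, the first test reveals faults 0 and 2 (TF = 1 for both), the second reveals fault 1 (TF = 2), and the fourth reveals fault 3 (TF = 4), so APFD = 1 - 8/20 + 1/10 = 0.70; an ordering that found the same faults later would score lower.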



    Reviews

    Timothy R. Hopkins

Overnight builds and regression testing for large software systems become problematic when there is not enough time to complete the testing before the start of work the next day. Given that regression testing is used to rapidly locate faults caused by changes to the code, it is reasonable to seek an ordering of the test cases within the test suite that maximizes the chance of locating newly introduced faults while keeping the time taken to run the tests within a user-defined limit. This paper considers a genetic algorithm for achieving this goal and reports the results of applying it to a pair of small Java applications.

The interesting feature of the approach is a fitness function that aims to maximize basic block coverage in an attempt to improve the chances of finding faults quickly. The authors' system uses available tools for test-case maintenance (JUnit) and code coverage (Emma). The quality of the selected test cases is measured using a weighted average of the percentage of faults detected when the tests are applied to versions of the applications seeded with faults through code mutation. By this metric, the genetic algorithm performed much better than a random selection of tests or running the tests either in the order they were written or in reverse order.

There are two major drawbacks. First, the two applications are very simple: the complete test suites consist of just 28 and 53 tests, and these can all be executed in 7 and 5.5 seconds, respectively. Major applications are likely to consist of thousands of tests and take hours or days to run, and it isn't clear how this approach would scale to such applications. Second, the algorithm takes around 9 and 14 hours to complete, even for a somewhat lax time requirement. These times are equivalent to over 5,000 runs of the complete suite, which I suspect makes the current method impossible to use on a large application.

The paper will be difficult for a nonspecialist to read, as it contains some confusing details. Given that the results point to a method that isn't likely to produce a working solution in the near future, the potential readership is probably restricted to researchers in the field.

Online Computing Reviews Service
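To make the review's description concrete, here is a minimal sketch, in Python rather than the authors' Java tooling, of GA-based time-aware prioritization as characterized above: chromosomes are test orderings, and fitness counts the basic blocks covered by the longest prefix of an ordering that fits within the time budget. All data (test names, runtimes, covered blocks, the budget) and GA parameters are hypothetical stand-ins for measurements a real harness would obtain from instrumentation such as Emma; this is not the paper's implementation.

```python
import random

TESTS = ["t1", "t2", "t3", "t4", "t5", "t6"]
runtime = {"t1": 2.0, "t2": 1.0, "t3": 3.0, "t4": 0.5, "t5": 2.5, "t6": 1.5}
blocks_covered = {
    "t1": {1, 2, 3}, "t2": {3, 4}, "t3": {1, 5, 6, 7},
    "t4": {8}, "t5": {2, 9, 10}, "t6": {4, 11},
}
TIME_BUDGET = 5.0  # hypothetical budget; only a prefix of tests will run

def fitness(ordering):
    """Count blocks covered by the longest prefix that fits the budget."""
    covered, elapsed = set(), 0.0
    for t in ordering:
        if elapsed + runtime[t] > TIME_BUDGET:
            break
        elapsed += runtime[t]
        covered |= blocks_covered[t]
    return len(covered)

def crossover(a, b):
    """Order crossover (OX): keep a random slice of a, fill from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [t for t in b if t not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(ordering, rate=0.2):
    """Swap two positions with the given probability."""
    ordering = ordering[:]
    if random.random() < rate:
        i, j = random.sample(range(len(ordering)), 2)
        ordering[i], ordering[j] = ordering[j], ordering[i]
    return ordering

def prioritize(pop_size=20, generations=50):
    population = [random.sample(TESTS, len(TESTS)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(prioritize())
```

The prefix-based fitness is what makes the search time-aware: two orderings of the same tests can score very differently depending on which tests fall inside the budget, which is also why the problem resembles a knapsack and a simple coverage-greedy ordering is not guaranteed to be optimal.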

Published in

      ISSTA '06: Proceedings of the 2006 international symposium on Software testing and analysis
      July 2006
      274 pages
ISBN: 1595932631
DOI: 10.1145/1146238
General Chair: Lori Pollock
Program Chair: Mauro Pezzè

      Copyright © 2006 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States




      Acceptance Rates

Overall Acceptance Rate: 58 of 213 submissions, 27%

