DOI: 10.1145/2771783.2771788
Research Article · Open Access

Empirical evaluation of Pareto efficient multi-objective regression test case prioritisation

Published: 13 July 2015

ABSTRACT

The aim of test case prioritisation is to determine an ordering of test cases that maximises the likelihood of early fault revelation. Previous prioritisation techniques have tended to be single-objective, for which the additional greedy algorithm is the current state of the art. Unlike test suite minimisation, multi-objective test case prioritisation has not been thoroughly evaluated. This paper presents an extensive empirical study of the effectiveness of multi-objective test case prioritisation, evaluating it on multiple versions of five widely used benchmark programs and a much larger real-world system of over 1 million lines of code. The paper also presents a lossless coverage compaction algorithm that dramatically scales the performance of all algorithms studied, by between 2 and 4 orders of magnitude, making prioritisation practical for even very demanding problems.
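The abstract refers to two algorithmic ideas without spelling them out: additional greedy prioritisation (repeatedly pick the test that adds the most not-yet-covered code) and lossless coverage compaction (merge coverage units that are covered by exactly the same set of tests, so coverage counts are unchanged). The minimal Python sketch below illustrates both ideas under those assumptions; the function names (compact_coverage, additional_greedy) and the binary coverage-matrix representation are hypothetical illustrations, not the paper's implementation.

# Minimal, illustrative sketch (not the authors' code): (a) lossless
# compaction of a binary coverage matrix by merging columns (coverage
# units) with identical test signatures into one weighted column, and
# (b) additional greedy prioritisation run on the compacted matrix.
from collections import defaultdict
from typing import Dict, List, Sequence, Tuple


def compact_coverage(matrix: Sequence[Sequence[int]]) -> Tuple[List[List[int]], List[int]]:
    """Merge coverage columns with identical test signatures.

    matrix[t][u] == 1 iff test t covers unit u. Returns a compacted matrix
    plus a weight per surviving column (how many original units it stands
    for). Weighted coverage counts over the compacted matrix equal the
    counts over the original, so the reduction loses no information for
    greedy prioritisation.
    """
    n_tests = len(matrix)
    n_units = len(matrix[0]) if n_tests else 0
    groups: Dict[Tuple[int, ...], int] = defaultdict(int)
    for u in range(n_units):
        signature = tuple(matrix[t][u] for t in range(n_tests))
        groups[signature] += 1
    signatures = list(groups)
    compacted = [[sig[t] for sig in signatures] for t in range(n_tests)]
    weights = [groups[sig] for sig in signatures]
    return compacted, weights


def additional_greedy(matrix: Sequence[Sequence[int]], weights: Sequence[int]) -> List[int]:
    """Order tests by the weighted amount of not-yet-covered units each adds."""
    remaining = set(range(len(matrix)))
    uncovered = set(range(len(weights)))
    order: List[int] = []
    while remaining:
        best = max(remaining,
                   key=lambda t: sum(weights[u] for u in uncovered if matrix[t][u]))
        order.append(best)
        uncovered -= {u for u in uncovered if matrix[best][u]}
        remaining.remove(best)
        if not uncovered:                         # once everything is covered,
            uncovered = set(range(len(weights)))  # reset, as additional greedy typically does
    return order


if __name__ == "__main__":
    # Three tests over six units; units 0-2 and units 3-4 have identical signatures,
    # so the compacted matrix has only three weighted columns.
    cov = [
        [1, 1, 1, 0, 0, 0],
        [0, 0, 0, 1, 1, 0],
        [1, 1, 1, 1, 1, 1],
    ]
    compacted, w = compact_coverage(cov)
    print(additional_greedy(compacted, w))  # e.g. [2, 0, 1]

Because merged units always change coverage together, the compaction preserves every ordering decision a coverage-based fitness function would make while shrinking the matrix; this is in the spirit of the speed-up the abstract attributes to its compaction algorithm, though the paper's own algorithm may differ in detail.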


Published in

ISSTA 2015: Proceedings of the 2015 International Symposium on Software Testing and Analysis
July 2015, 447 pages
ISBN: 9781450336208
DOI: 10.1145/2771783
General Chair: Michal Young · Program Chair: Tao Xie

Copyright © 2015 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 58 of 213 submissions, 27%
