Published in: Software Quality Journal 3/2013

01.09.2013

Test case prioritization: a systematic mapping study

Authors: Cagatay Catal, Deepti Mishra


Abstract

Test case prioritization techniques, which are used to improve the cost-effectiveness of regression testing, order test cases in such a way that those cases that are expected to outperform others in detecting software faults are run earlier in the testing phase. The objective of this study is to examine what kind of techniques have been widely used in papers on this subject, determine which aspects of test case prioritization have been studied, provide a basis for the improvement of test case prioritization research, and evaluate the current trends of this research area. We searched for papers in the following five electronic databases: IEEE Xplore, ACM Digital Library, Science Direct, Springer, and Wiley. Initially, the search string retrieved 202 studies, but upon further examination of titles and abstracts, 120 papers were identified as related to test case prioritization. There exists a large variety of prioritization techniques in the literature, with coverage-based prioritization techniques (i.e., prioritization in terms of the number of statements, basic blocks, or methods test cases cover) dominating the field. The proportion of papers on model-based techniques is on the rise, yet the growth rate is still slow. The proportion of papers that use datasets from industrial projects is found to be 64 %, while those that utilize public datasets for validation are only 38 %.
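To make the dominant, coverage-based family concrete, the simplest variant orders tests by the total number of statements they cover. The sketch below is illustrative only; the test names and coverage sets are invented, not taken from the study.

```python
def prioritize_by_total_coverage(coverage):
    """Order test cases by descending number of covered statements.

    coverage: dict mapping a test-case name to the set of statement ids
    it executes. Ties keep their original (insertion) order, since
    Python's sort is stable.
    """
    return sorted(coverage, key=lambda test: len(coverage[test]), reverse=True)

# Hypothetical coverage data: statement ids each test executes.
coverage = {
    "t1": {1, 2, 3},
    "t2": {1, 2, 3, 4, 5, 6},
    "t3": {4, 5},
}

print(prioritize_by_total_coverage(coverage))  # → ['t2', 't1', 't3']
```

The "additional coverage" variant discussed in this literature differs only in that, after each pick, already-covered statements stop counting toward the remaining tests' scores.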
On the basis of this study, the following recommendations are provided for researchers: (1) Give preference to public datasets rather than proprietary datasets; (2) develop more model-based prioritization methods; (3) conduct more studies on the comparison of prioritization methods; (4) always evaluate the effectiveness of the proposed technique with well-known evaluation metrics and compare the performance with the existing methods; (5) publish surveys and systematic review papers on test case prioritization; and (6) use datasets from industrial projects that represent real industrial problems.
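Regarding recommendation (4), the most widely used effectiveness metric in the test case prioritization literature is APFD (Average Percentage of Faults Detected), defined as 1 − (TF₁ + … + TFₘ)/(n·m) + 1/(2n), where TFᵢ is the 1-based position of the first test revealing fault i, n is the number of tests, and m the number of faults. A minimal sketch, with invented fault data and assuming every fault is detected by at least one test:

```python
def apfd(ordering, faults_detected):
    """APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n).

    ordering: prioritized list of test-case names.
    faults_detected: dict mapping each test name to the set of fault ids
    it reveals. Assumes each fault is revealed by at least one test.
    """
    n = len(ordering)
    all_faults = set().union(*faults_detected.values())
    m = len(all_faults)
    # 1-based position of the first test in the ordering that detects each fault.
    first_pos = {
        fault: next(i for i, t in enumerate(ordering, start=1)
                    if fault in faults_detected[t])
        for fault in all_faults
    }
    return 1 - sum(first_pos.values()) / (n * m) + 1 / (2 * n)

# Hypothetical example: 3 tests, 2 faults; running t2 first detects both
# faults immediately, so the ordering scores a high APFD (5/6 ≈ 0.83).
faults = {"t2": {"f1", "f2"}, "t1": {"f1"}, "t3": set()}
print(apfd(["t2", "t1", "t3"], faults))
```

Comparing APFD across a proposed technique and the existing baselines on the same dataset is exactly the kind of evaluation recommendation (4) calls for.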


Footnotes
1. [[ref#]] refers to papers in Appendix A.
Metadata
Title
Test case prioritization: a systematic mapping study
Authors
Cagatay Catal
Deepti Mishra
Publication date
01.09.2013
Publisher
Springer US
Published in
Software Quality Journal / Issue 3/2013
Print ISSN: 0963-9314
Electronic ISSN: 1573-1367
DOI
https://doi.org/10.1007/s11219-012-9181-z
