ABSTRACT
Configurable software lets users customize applications in many ways and is becoming increasingly prevalent. Researchers have developed techniques for testing configurable software, but to date little research has addressed the problem of regression testing configurable systems as they evolve. Whereas problems such as regression test selection and test case prioritization have been studied extensively at the test case level, they have rarely been considered at the configuration level. In this paper we address the problem of providing configuration-aware regression testing for evolving software systems. We use combinatorial interaction testing techniques to model and generate configuration samples for use in regression testing. We conduct an empirical study on a non-trivial evolving software system to measure the impact of configurations on testing effectiveness, and to compare the effectiveness of different configuration prioritization techniques at achieving early fault detection during regression testing. Our results show that configurations can have a large impact on fault detection and that prioritizing configurations can be effective.
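The combinatorial interaction testing approach mentioned in the abstract samples the configuration space with a covering array rather than testing every configuration. The following is a minimal sketch of greedy pairwise (2-way) covering-array generation; the configuration model and option names are purely illustrative, and production CIT tools (e.g., AETG-style generators) use far more sophisticated candidate search.

```python
# Greedy pairwise (2-way) covering-array generation: a sketch of the kind of
# combinatorial interaction testing (CIT) sample used to pick configurations
# for regression testing. Option names below are hypothetical.
from itertools import combinations, product

def pairwise_sample(options):
    """Greedily add configurations until every pair of values across
    every pair of options appears in at least one configuration."""
    names = list(options)
    # All (option, value) pairs that must be covered together.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in options[a]
        for vb in options[b]
    }
    configs = []
    while uncovered:
        best, best_gain = None, -1
        # Exhaustive scan of candidates; fine for small models only.
        for values in product(*(options[n] for n in names)):
            config = dict(zip(names, values))
            gain = sum(
                1 for (a, va), (b, vb) in uncovered
                if config[a] == va and config[b] == vb
            )
            if gain > best_gain:
                best, best_gain = config, gain
        uncovered -= {
            ((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
            if best[a] == va and best[b] == vb
        }
        configs.append(best)
    return configs

# Hypothetical configuration model: three binary options.
model = {"compatible": [0, 1], "incsearch": [0, 1], "folding": [0, 1]}
sample = pairwise_sample(model)
# The sample covers all value pairs with fewer than the 8 exhaustive
# configurations; higher-strength (t-way) arrays generalize this idea.
```

Prioritization techniques of the kind the study compares can then order `sample` (for example, by how many previously untested option-value pairs each configuration exercises) so that interaction coverage accumulates early in the regression run.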
- Configuration-aware regression testing: an empirical study of sampling and prioritization