Unit test virtualization with VMVM
DOI: 10.1145/2568225.2568248
Research article. Published: 31 May 2014
ABSTRACT

Testing large software packages can become very time intensive. To address this problem, researchers have investigated techniques such as Test Suite Minimization. Test Suite Minimization reduces the number of tests in a suite by removing tests that appear redundant, at the risk of a reduction in fault-finding ability since it can be difficult to identify which tests are truly redundant. We take a completely different approach to solving the same problem of long running test suites by instead reducing the time needed to execute each test, an approach that we call Unit Test Virtualization. With Unit Test Virtualization, we reduce the overhead of isolating each unit test with a lightweight virtualization container. We describe the empirical analysis that grounds our approach and provide an implementation of Unit Test Virtualization targeting Java applications. We evaluated our implementation, VMVM, using 20 real-world Java applications and found that it reduces test suite execution time by up to 97% (on average, 62%) when compared to traditional unit test execution. We also compared VMVM to a well known Test Suite Minimization technique, finding the reduction provided by VMVM to be four times greater, while still executing every test with no loss of fault-finding ability.
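The problem the abstract describes is in-memory dependence between tests: state left behind by one test can change the outcome of the next, which is why developers fall back on running each test in a fresh process. A minimal, hedged Java sketch of the phenomenon (class, field, and method names are illustrative, not taken from the paper or its benchmarks):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: two "tests" sharing JVM-wide static state.
public class StaticLeakDemo {
    // Static state touched during application startup, e.g. a cache.
    static List<String> cache = new ArrayList<>();

    static boolean testAddsEntry() {
        cache.add("fixture");        // in-memory side effect
        return cache.size() == 1;    // passes when run first
    }

    static boolean testExpectsEmptyCache() {
        return cache.isEmpty();      // fails if run after testAddsEntry
    }

    public static void main(String[] args) {
        System.out.println("testAddsEntry passed: " + testAddsEntry());
        System.out.println("testExpectsEmptyCache passed: " + testExpectsEmptyCache());
    }
}
```

Running both tests in one JVM makes the second fail purely because of the first's side effect; restarting the JVM per test fixes this but pays the full initialization cost each time, which is the overhead VMVM's lightweight isolation aims to remove.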



    Reviews

    Massimiliano Masi

In today's agile software development processes, continuous integration and test-driven development are considered pillars by every software vendor, and unit tests play a crucial role in both. Regression tests are a particular kind of unit test that developers create after fixing a bug, to ensure the bug does not reappear. Unfortunately, in large software projects the number of unit and regression tests grows dramatically, and with it the execution time of the test suite. This violates a core tenet of continuous integration: the developer should receive test results quickly. The authors report that industrial test suites may take weeks to execute fully. Test suite minimization and prioritization techniques have been proposed to address this problem. Test suite minimization, in particular, reduces the number of tests by identifying redundant ones, typically through coverage analysis; however, identifying redundancy can be inaccurate, and the underlying problem is NP-complete. In this paper, the authors take a different approach: rather than reducing the number of tests executed, they reduce the total time needed to execute the suite as a whole. They name this approach unit test virtualization, and its implementation VmVm. The authors studied thousands of Java projects and found that "for the [largest] applications each test executes in its own process, rather than executing multiple tests in the same process." The application is thus initialized and terminated for every test, which is very time consuming; this isolation is desirable behavior, but not all setup and teardown methods are implemented with efficiency in mind. The key insight of their approach is that, for memory-managed languages such as Java, it is not necessary to reinitialize the entire application to maintain isolation: it is feasible to analyze the software to find potentially side-effect-causing code and automatically reinitialize only the necessary parts. The work is grounded in three motivating research questions, answered by analyzing the code of open-source projects to discover developers' testing behavior. VmVm requires no modification of the environment and integrates with popular tools such as Maven, Ant, and JUnit. It does not require access to the source code; it is built on top of the ASM bytecode analyzer.

Online Computing Reviews Service
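The reinitialization idea the review describes can be sketched in a few lines. VmVm itself identifies and resets mutated static state via ASM bytecode instrumentation; the sketch below instead uses plain reflection on a hypothetical class to show the effect (resetting only the polluted field, not the whole application), not VmVm's actual mechanism:

```java
import java.lang.reflect.Field;

// Hedged sketch: rather than restarting the JVM between tests, restore
// only the static fields a test mutated. AppConfig and its field are
// hypothetical examples, not from the paper.
public class ResetSketch {
    static class AppConfig {              // hypothetical class under test
        static String mode = "default";   // its declared initial value
    }

    // Restore one static field to a known initial value between tests.
    static void resetStaticField(Class<?> cls, String name, Object initial)
            throws ReflectiveOperationException {
        Field f = cls.getDeclaredField(name);
        f.setAccessible(true);
        f.set(null, initial);             // null receiver: static field
    }

    public static void main(String[] args) throws Exception {
        AppConfig.mode = "polluted-by-test-1";   // side effect of one test
        resetStaticField(AppConfig.class, "mode", "default");
        System.out.println(AppConfig.mode);      // prints "default"
    }
}
```

Resetting a single field this way costs microseconds, versus seconds or minutes to tear down and relaunch a JVM, which is where the paper's reported speedups come from.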

Published in

ICSE 2014: Proceedings of the 36th International Conference on Software Engineering
May 2014, 1139 pages
ISBN: 9781450327565
DOI: 10.1145/2568225
Copyright © 2014 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Qualifiers: research article

Overall acceptance rate: 276 of 1,856 submissions, 15%
