DOI: 10.1145/1806799.1806835 · Research article

Test generation through programming in UDITA

Published: 01 May 2010

ABSTRACT

We present an approach for describing tests using non-deterministic test generation programs. To write such programs, we introduce UDITA, a Java-based language with non-deterministic choice operators and an interface for generating linked structures. We also describe new algorithms that generate concrete tests by efficiently exploring the space of all executions of non-deterministic UDITA programs.

We implemented our approach and incorporated it into the official, publicly available repository of Java PathFinder (JPF), a popular tool for verifying Java programs. We evaluate our technique by generating tests for data structures, refactoring engines, and JPF itself. Our experiments show that test generation using UDITA is faster and leads to test descriptions that are easier to write than in previous frameworks. Moreover, the novel execution mechanism of UDITA is essential for making test generation feasible. Using UDITA, we have discovered a number of bugs in Eclipse, NetBeans, Sun javac, and JPF.
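To illustrate the core idea, a UDITA-style test generation program mixes ordinary Java with choice points, and the engine explores every execution path, each complete path yielding one concrete test input. The sketch below is plain Java that re-implements this behavior with an explicit loop over choices (it is not the actual UDITA API; the class and method names are hypothetical): it enumerates all binary-tree shapes with exactly n nodes by "choosing" how many nodes go into the left subtree.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only (not the UDITA API): simulates a non-deterministic
// choice operator by exhaustively enumerating every choice sequence.
public class ChoiceSketch {
    static final class Node {
        Node left, right;
    }

    // Enumerate all distinct binary-tree shapes with exactly n nodes.
    // Each iteration of the outer loop corresponds to one non-deterministic
    // choice of the left-subtree size; a bounded-exhaustive generator would
    // backtrack over the same choices instead of looping explicitly.
    static List<Node> trees(int n) {
        List<Node> result = new ArrayList<>();
        if (n == 0) {
            result.add(null);                   // the empty tree
            return result;
        }
        for (int leftSize = 0; leftSize < n; leftSize++) {   // choice point
            for (Node l : trees(leftSize)) {
                for (Node r : trees(n - 1 - leftSize)) {
                    Node root = new Node();
                    root.left = l;              // subtrees are shared between
                    root.right = r;             // results; fine for counting shapes
                    result.add(root);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // The counts follow the Catalan numbers: 1, 1, 2, 5, 14 for n = 0..4.
        System.out.println(trees(3).size()); // prints 5
        System.out.println(trees(4).size()); // prints 14
    }
}
```

Each tree produced this way would serve as one concrete test input (for example, to a data-structure operation under test); the paper's contribution is to obtain this exhaustive exploration efficiently from the non-deterministic program rather than from hand-written enumeration loops.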


Published in

ICSE '10: Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1
May 2010, 627 pages
ISBN: 9781605587196
DOI: 10.1145/1806799
Copyright © 2010 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
