DOI: 10.1145/1858996.1859084

How did you specify your test suite?

Published: 20 September 2010

ABSTRACT

Although testing is central to debugging and software certification, there is no adequate language for specifying test suites over source code. Such a language should be simple and concise in daily use, feature a precise semantics, and, of course, support suitable engines that compute test suites and assess the coverage achieved by a given test suite.

This paper introduces the language FQL, designed to fit these purposes. We achieve the necessary expressive power through a natural extension of regular expressions that matches test suites rather than individual executions. To evaluate the language, we take a list of informal requirements and show how to express each of them in FQL. Moreover, we present a test case generation engine for C programs and perform practical experiments with the sample specifications.
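To give a flavor of such specifications, the queries below are a minimal sketch in the style of the authors' earlier FShell tool; the keywords and predicate names are illustrative and may differ from the exact FQL grammar, and foo, lock, and unlock are hypothetical program identifiers:

    cover @BASICBLOCKENTRY
      (basic block coverage: demand a test suite that enters every basic block)

    in @FUNC(foo) cover @CONDITIONEDGE
      (condition coverage, restricted to the body of the function foo)

    cover @CALL(lock) -> @CALL(unlock)
      (one test goal per pair of a call to lock later followed by a call to unlock)

Read this way, a query denotes a set of test goals over the program's control flow graph, and an evaluation engine must produce a test suite covering all feasible goals.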

Published in

ASE '10: Proceedings of the 25th IEEE/ACM International Conference on Automated Software Engineering
September 2010, 534 pages
ISBN: 9781450301169
DOI: 10.1145/1858996
Copyright © 2010 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Qualifiers

• research-article

Acceptance Rates

Overall acceptance rate: 82 of 337 submissions, 24%
