DOI: 10.1145/1595696.1595725

MSeqGen: object-oriented unit-test generation via mining source code

Published: 24 August 2009

ABSTRACT

An objective of unit testing is to achieve high structural coverage of the code under test. Achieving high structural coverage of object-oriented code requires desirable method-call sequences that create and mutate objects. These sequences help generate target object states, such as argument or receiver object states (in short, target states), of a method under test. Automatic generation of sequences that achieve target states is often challenging due to the large search space of possible sequences. On the other hand, code bases that use the relevant object types (such as receiver or argument object types) already include sequences that can assist automatic test-generation approaches in achieving target states. In this paper, we propose a novel approach, called MSeqGen, that mines code bases and extracts sequences related to the receiver or argument object types of a method under test. Our approach uses these extracted sequences to enhance two state-of-the-art test-generation approaches: random testing and dynamic symbolic execution. We conduct two evaluations to show the effectiveness of our approach. Using sequences extracted by our approach, a random testing approach achieves 8.7% higher branch coverage (with a maximum of 20.0% for one namespace) and a dynamic-symbolic-execution-based approach achieves 17.4% higher branch coverage (with a maximum of 22.5% for one namespace) than without our approach. Such an improvement is significant, as the branches not covered by these state-of-the-art approaches are generally quite difficult to cover.
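
A minimal sketch of the core idea, in Java (the class, method names, and values below are illustrative assumptions, not artifacts from the paper): covering a branch of a method under test can require the receiver to be in a particular target state, and a method-call sequence of the kind MSeqGen mines from existing code bases is exactly what drives the receiver into that state, leaving the test generator only the primitive argument values to choose.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical class under test (illustrative only).
    class OrderBook {
        private final List<Integer> orders = new ArrayList<>();
        private boolean frozen = false;

        void add(int price) { if (!frozen) orders.add(price); }
        void freeze()       { frozen = true; }

        // Method under test: the true branch is reachable only when the
        // receiver is in the target state (frozen AND non-empty).
        boolean canSettle() { return frozen && !orders.isEmpty(); }
    }

    public class MinedSequenceSketch {
        public static void main(String[] args) {
            // A method-call sequence of the kind mined from client code
            // that already uses the receiver type: it creates and mutates
            // the object into the target state, instead of forcing a
            // random or symbolic generator to rediscover the sequence in
            // a large search space.
            OrderBook book = new OrderBook();
            book.add(42);
            book.freeze();
            System.out.println(book.canSettle()); // true: branch covered
        }
    }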


Published in

ESEC/FSE '09: Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering
August 2009, 408 pages
ISBN: 978-1-60558-001-2
DOI: 10.1145/1595696

        Copyright © 2009 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States



        Qualifiers

        • research-article

        Acceptance Rates

ESEC/FSE '09 paper acceptance rate: 32 of 217 submissions, 15%. Overall acceptance rate: 112 of 543 submissions, 21%.
