DOI: 10.1145/1146238.1146258
Article

From Daikon to Agitator: lessons and challenges in building a commercial tool for developer testing

Published: 21 July 2006

ABSTRACT

Developer testing is one of the most effective strategies for improving the quality of software, reducing its cost, and accelerating its development. Despite its widely recognized benefits, developer testing is practiced by only a minority of developers. The slow adoption of developer testing is primarily due to the lack of tools that automate some of the more tedious and time-consuming aspects of this practice. Motivated by the need for a solution, and helped and inspired by research in software test automation, we created a developer testing tool based on software agitation. Software agitation is a testing technique that combines the results of research in test-input generation and dynamic invariant detection. We implemented software agitation in a commercial testing tool called Agitator. This paper gives a high-level overview of software agitation and its implementation in Agitator, focusing on the lessons and challenges of leveraging and applying the results of research to the implementation of a commercial product.
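To make the combination described above concrete, the sketch below illustrates the two ingredients named in the abstract: generated test inputs and Daikon-style detection of likely invariants from the observed executions. It is a minimal, hypothetical illustration under those assumptions, not Agitator's implementation; the code under test (abs) and all names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Minimal, hypothetical sketch of software agitation: generate many test inputs,
// run the code under test, and keep only the candidate invariants that hold on
// every observed execution. Not Agitator's implementation; names are invented.
public class AgitationSketch {

    // Example code under test.
    static int abs(int x) {
        return x < 0 ? -x : x;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        List<Integer> inputs = new ArrayList<>();
        List<Integer> results = new ArrayList<>();

        // Step 1: test-input generation. Exercise abs() with many generated values.
        for (int i = 0; i < 10_000; i++) {
            int x = rng.nextInt(2_000_001) - 1_000_000;  // values in [-1e6, 1e6]
            inputs.add(x);
            results.add(abs(x));
        }

        // Step 2: dynamic invariant detection. Check simple candidate properties
        // against every observation; a candidate survives only if it never fails.
        boolean resultNonNegative = true;
        boolean resultGeInput = true;
        for (int i = 0; i < inputs.size(); i++) {
            int in = inputs.get(i);
            int out = results.get(i);
            if (out < 0)  resultNonNegative = false;
            if (out < in) resultGeInput = false;
        }

        // Step 3: report surviving candidates as likely invariants for a developer
        // to review, confirm as assertions, or reject as evidence of a bug or of a
        // missing kind of input.
        if (resultNonNegative) System.out.println("likely invariant: abs(x) >= 0");
        if (resultGeInput)     System.out.println("likely invariant: abs(x) >= x");
    }
}
```

The surviving candidates printed in step 3 are the rough analogue of what an agitation tool would present to the developer for review; whether they are promoted to checked assertions or flagged as surprises is a decision the tool leaves to the developer.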


Published in

ISSTA '06: Proceedings of the 2006 International Symposium on Software Testing and Analysis
July 2006, 274 pages
ISBN: 1595932631
DOI: 10.1145/1146238
General Chair: Lori Pollock
Program Chair: Mauro Pezzè

      Copyright © 2006 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 21 July 2006


Acceptance Rates

Overall Acceptance Rate: 58 of 213 submissions, 27%
