DOI: 10.1145/1390630.1390643
Research article

Finding Errors in .NET with Feedback-Directed Random Testing

Published: 20 July 2008

ABSTRACT

We present a case study in which a team of test engineers at Microsoft applied a feedback-directed random testing tool to a critical component of the .NET architecture. Due to its complexity and high reliability requirements, the component had already been tested by 40 test engineers over five years, using manual testing and many automated testing techniques.

Nevertheless, the feedback-directed random testing tool found errors in the component that eluded previous testing, and did so two orders of magnitude faster than a typical test engineer (including time spent inspecting the results of the tool). The tool also led the test team to discover errors in other testing and analysis tools, and deficiencies in previous best-practice guidelines for manual testing. Finally, we identify challenges that random testing faces for continued effectiveness, including an observed decrease in the technique's error detection rate over time.
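The feedback-directed loop the abstract refers to can be illustrated with a small sketch. This is a hypothetical Python illustration, not the study's tool (which targeted a .NET component): `BoundedStack`, `run_sequence`, and `feedback_directed_generation` are invented names for this example. The key idea is the feedback step: call sequences are grown at random, a sequence that executes normally is returned to the pool for further extension, and a sequence that raises is set aside as a candidate error-revealing test rather than reused.

```python
# Minimal sketch of feedback-directed random test generation (hypothetical,
# for illustration only; not the code of the tool described in the paper).
import random


class BoundedStack:
    """Toy class under test; pop() has a seeded bug for the generator to find."""

    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)

    def pop(self):
        return self.items.pop()  # seeded bug: no emptiness check -> IndexError


# Operations the generator may append to a sequence, identified by name.
OPERATIONS = [
    ("push", lambda s: s.push(random.randint(0, 9))),
    ("pop", lambda s: s.pop()),
]


def run_sequence(seq):
    """Replay a sequence of operation names on a fresh object.

    Returns the exception raised, or None if the sequence executes normally.
    """
    obj = BoundedStack()
    ops = dict(OPERATIONS)
    for name in seq:
        try:
            ops[name](obj)
        except Exception as e:
            return e
    return None


def feedback_directed_generation(budget=200, seed=0):
    """Grow random call sequences, using execution feedback to guide growth."""
    random.seed(seed)
    pool = [[]]      # sequences known to execute without error
    failures = []    # (sequence, exception) pairs: candidate error-revealing tests
    for _ in range(budget):
        base = random.choice(pool)            # extend a known-good sequence
        name, _ = random.choice(OPERATIONS)   # pick a random next operation
        candidate = base + [name]
        err = run_sequence(candidate)
        if err is None:
            pool.append(candidate)            # feedback: reuse only legal sequences
        else:
            failures.append((candidate, err))
    return pool, failures
```

The feedback step is what distinguishes this from undirected random testing: because illegal sequences are never extended, the generator's budget is spent exploring deeper, legal object states instead of repeatedly rediscovering the same shallow exceptions.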


Published in
ISSTA '08: Proceedings of the 2008 International Symposium on Software Testing and Analysis
July 2008, 324 pages
ISBN: 9781605580500
DOI: 10.1145/1390630
Copyright © 2008 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
