Finding Errors in .NET with Feedback-Directed Random Testing

ABSTRACT
We present a case study in which a team of test engineers at Microsoft applied a feedback-directed random testing tool to a critical component of the .NET architecture. Due to its complexity and high reliability requirements, the component had already been tested by 40 test engineers over five years, using manual testing and many automated testing techniques.
Nevertheless, the feedback-directed random testing tool found errors in the component that eluded previous testing, and did so two orders of magnitude faster than a typical test engineer (including time spent inspecting the results of the tool). The tool also led the test team to discover errors in other testing and analysis tools, and deficiencies in previous best-practice guidelines for manual testing. Finally, we identify challenges that random testing faces for continued effectiveness, including an observed decrease in the technique's error detection rate over time.
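The core idea behind the tool described above can be sketched in a few lines. Feedback-directed random testing builds test cases as sequences of method calls: each candidate sequence extends a previously successful one with a random call, is executed immediately, and the execution result is the "feedback" that decides whether the sequence is discarded (it raised an expected exception), reported (it violated an oracle), or kept as a building block for longer tests. The sketch below is a minimal, hypothetical illustration, not the actual tool from the case study; the `BoundedStack` class and its size oracle are invented for the example.

```python
import random

class BoundedStack:
    """Toy class under test (hypothetical): a stack with a fixed capacity."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = []
    def push(self, x):
        if len(self.items) >= self.capacity:
            raise OverflowError("stack full")
        self.items.append(x)
    def pop(self):
        return self.items.pop()  # raises IndexError on an empty stack

def feedback_directed_generation(rounds=200, seed=0):
    """Generate call sequences at random, using execution feedback:
    sequences that raise are discarded, sequences that run cleanly are
    reused as prefixes, and oracle violations are reported as failures."""
    random.seed(seed)
    legal_sequences = [[]]   # pool of sequences known to execute without error
    failures = []
    ops = [("push", lambda s: s.push(1)), ("pop", lambda s: s.pop())]
    for _ in range(rounds):
        base = random.choice(legal_sequences)        # extend a known-good prefix
        name, op = random.choice(ops)
        candidate = base + [(name, op)]
        stack = BoundedStack()
        try:
            for _, step in candidate:                # execute the whole sequence
                step(stack)
        except (OverflowError, IndexError):
            continue                                 # feedback: illegal, discard
        if len(stack.items) > stack.capacity:        # oracle check
            failures.append([n for n, _ in candidate])
        legal_sequences.append(candidate)            # feedback: reuse as prefix
    return legal_sequences, failures
```

Because illegal sequences are pruned as soon as they fail, the generator spends its budget extending sequences that exercise deep, legal object states rather than repeatedly rediscovering shallow exceptions, which is what distinguishes this approach from undirected random call generation.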