Article
DOI: 10.1145/1138929.1138945

The class-level mutants of MuJava

Published: 23 May 2006

ABSTRACT

This paper presents results from empirical studies of object-oriented, class-level mutation operators, using the automated analysis and testing tool MuJava. Class mutation operators modify OO programming language features such as inheritance, polymorphism, dynamic binding, and encapsulation. This paper presents data from 866 classes in six open-source programs. Several new class-level mutation operators are defined in this paper, and an analysis of the number of mutants generated is provided. Techniques for eliminating some equivalent mutants are described, and data from an automated tool are provided. One important result is that class-level mutation operators yield far more equivalent mutants than traditional statement-level operators. Another is that there are far fewer class-level mutants than statement-level mutants. Together, these data suggest that mutation for inter-class testing can be practically affordable.
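To illustrate what a class-level mutant looks like, here is a minimal sketch in the style of MuJava's overriding-method-deletion (IOD) operator; the `Account`/`SavingsAccount` classes are invented for illustration and do not come from the paper:

```java
// Illustrative sketch of an IOD-style (overriding method deletion) mutant.
// Class names are hypothetical, not taken from the paper.
class Account {
    int fee() { return 10; }
}

class SavingsAccount extends Account {
    @Override
    int fee() { return 0; }  // the IOD mutant deletes this overriding method
}

public class IodDemo {
    public static void main(String[] args) {
        Account a = new SavingsAccount();
        // Original program: dynamic binding selects SavingsAccount.fee(), prints 0.
        // IOD mutant (override deleted): the call resolves to Account.fee()
        // and prints 10, so any test asserting a.fee() == 0 kills the mutant.
        System.out.println(a.fee());
    }
}
```

A test suite kills this mutant only if some test observes `fee()` through a superclass reference holding a subclass object, which is exactly the kind of polymorphism and dynamic-binding behavior the class-level operators target.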



        Reviews

        Andrew Brooks

Mutation testing involves writing test cases to kill mutants: program versions containing small changes that reflect typical programming mistakes. This paper describes 29 class-level mutation operators used to generate mutants and defines equivalency conditions for 16 of them. A mutant is equivalent to the original program if it produces the same output for every input.

An empirical analysis of 866 classes in six open-source programs, using the tool MuJava and automated equivalence detection, provides an abundance of descriptive statistics. The average number of mutants produced per class, after equivalence detection, was 46. This is far fewer than would be generated by unit-level mutation operators, though, exceptionally, one class-level operator produced over 1,000 mutants for a single class. Almost 75 percent of the class-level mutants generated were deemed equivalent, compared to no more than 15 percent found with unit-level mutants.

These descriptive statistics are important: without automated equivalence detection, mutation testing at the class level would require far more effort. Nothing is explicitly stated, however, about the manual elimination of equivalent mutants for those operators not covered by automated equivalence detection, leaving the status of some descriptive statistics uncertain.

Reading over the equivalency conditions, and knowing that almost 75 percent of generated class-level mutants are equivalent, the reader may be left questioning the object-oriented paradigm: sometimes it really matters that certain features are used in certain ways; sometimes it does not matter at all. From an engineering perspective, this seems weak, a conjecture the authors, perhaps surprisingly, do not take up. This paper is strongly recommended to the software engineering community.

Online Computing Reviews Service
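The equivalence problem the review highlights can be made concrete with a hedged sketch in the style of MuJava's JTD (`this`-keyword deletion) operator; the `Counter` class is invented for illustration, not taken from the paper. When no parameter or local variable shadows the field, deleting `this.` leaves behavior unchanged, so no test can kill the mutant:

```java
// Illustrative sketch of an equivalent JTD-style (this-keyword deletion) mutant.
// The Counter class is hypothetical, not from the paper under review.
public class Counter {
    private int count;

    void increment(int step) {
        this.count = this.count + step;
        // JTD mutant: count = count + step;
        // No local variable named "count" shadows the field here, so the
        // mutant has exactly the same behavior -- it is equivalent, and
        // only equivalence detection (not testing) can eliminate it.
    }

    int value() { return count; }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment(3);
        System.out.println(c.value());  // prints 3 in original and mutant alike
    }
}
```

Equivalency conditions like "no shadowing name in scope" are precisely what allow a tool to discard such mutants automatically instead of leaving testers to inspect them by hand.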

Published in

AST '06: Proceedings of the 2006 international workshop on Automation of software test
May 2006, 128 pages
ISBN: 1595934081
DOI: 10.1145/1138929
Copyright © 2006 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

