
The ASTOOT approach to testing object-oriented programs

Published: 01 April 1994

Abstract

This article describes a new approach to the unit testing of object-oriented programs, a set of tools based on this approach, and two case studies. In this approach, each test case consists of a tuple of sequences of messages, along with tags indicating whether these sequences should put objects of the class under test into equivalent states and/or return objects that are in equivalent states. Tests are executed by sending the sequences to objects of the class under test and then invoking a user-supplied equivalence-checking mechanism. This approach allows for substantial automation of many aspects of testing, including test case generation, test driver generation, test execution, and test checking. Experimental prototypes of tools for test generation and test execution are described. The test generation tool requires the availability of an algebraic specification of the abstract data type being tested, but the test execution tool can be used when no formal specification is available. Using the test execution tools, case studies involving the execution of tens of thousands of test cases, with various sequence lengths, parameters, and combinations of operations, were performed. The relationships between the likelihood of detecting an error and sequence length, range of parameters, and relative frequency of various operations were investigated for priority-queue and sorted-list implementations having subtle errors. In each case, long sequences tended to be more likely to detect the error, provided that the range of parameters was sufficiently large, and the likelihood of detecting an error tended to increase up to a threshold value as the parameter range increased.
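To make the test-case structure concrete, the sketch below shows, in Python, how a pair of message sequences tagged as "equivalent" might be run against fresh objects and then checked with a user-supplied equivalence operation. The PriorityQueue class, its equivalent method, and the run_test_case driver are hypothetical illustrations of the idea, not code from the ASTOOT tools themselves.

# Minimal sketch of an ASTOOT-style self-checking test case (illustrative only).
class PriorityQueue:
    """Hypothetical class under test (not the implementation studied in the article)."""
    def __init__(self):
        self._items = []

    def add(self, x):
        self._items.append(x)

    def delete_max(self):
        self._items.remove(max(self._items))

    def equivalent(self, other):
        """User-supplied observational equivalence check."""
        return sorted(self._items) == sorted(other._items)


def run_test_case(cls, sequence1, sequence2, expect_equivalent):
    """Apply two message sequences to fresh objects and compare their final states."""
    obj1, obj2 = cls(), cls()
    for message, args in sequence1:
        getattr(obj1, message)(*args)
    for message, args in sequence2:
        getattr(obj2, message)(*args)
    return obj1.equivalent(obj2) == expect_equivalent


# One test case: two sequences tagged as leading to equivalent states.
seq_a = [("add", (5,)), ("add", (9,)), ("delete_max", ())]
seq_b = [("add", (5,))]
assert run_test_case(PriorityQueue, seq_a, seq_b, expect_equivalent=True)

A tag of "not equivalent" would simply flip the expected outcome, which is how test cases of both kinds described in the abstract could be encoded in a driver of this shape.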




Reviews

David A. Gustafson

ASTOOT is a software testing tool that automates many parts of the testing process for object-oriented software. This excellent paper presents a new twist in software testing and is well worth the effort of reading and understanding. Researchers in software testing will find several interesting ideas, including generating test cases from an algebraic specification and self-checking tests. The description of the tool is also valuable.

The basis for the methodology is a specification for an abstract data type (ADT) that indicates equivalence between different sequences of operations. For example, in many stack ADTs, the sequence of a push followed by a pop is equivalent to a null operation. Using these equivalences, one sequence of operations can be converted into an equivalent sequence. The self-checking test case consists of running both sequences and checking that the results (output and resulting object) are equivalent. The objects are shown to be equivalent by recursively removing items.

The unfortunate part is that the ASTOOT approach gives no guidance on the general problem of testing object-oriented software. The title of the paper is accurate: it describes the ASTOOT approach. When I first saw the title, I was hoping for information that I could use for testing object-oriented software, but the test cases are not selected on the basis of any coverage criterion. In fact, ASTOOT does not select test cases; it only allows the tester to run test cases more efficiently and thus to run more of them. The contribution of ASTOOT is in making testing easier.
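The stack example in the review can be sketched in a few lines of Python. The Stack class, the simplify rewriter, and the single push-followed-by-pop axiom below are hypothetical stand-ins, meant only to illustrate how an equivalent sequence is derived and how the self-checking comparison works; they are not taken from ASTOOT.

# Illustrative sketch of the axiom-rewriting idea described in the review.
class Stack:
    """Hypothetical stack ADT used only to illustrate the review's example."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        self._items.pop()

    def equivalent(self, other):
        # Equivalence could be shown by recursively removing and comparing items;
        # comparing the contents directly is enough for this sketch.
        return self._items == other._items


def simplify(sequence):
    """Rewrite a message sequence using one axiom: push(x) followed by pop is a no-op."""
    result = []
    for message in sequence:
        if message[0] == "pop" and result and result[-1][0] == "push":
            result.pop()          # cancel the immediately preceding push
        else:
            result.append(message)
    return result


def apply_sequence(sequence):
    stack = Stack()
    for name, *args in sequence:
        getattr(stack, name)(*args)
    return stack


original = [("push", 1), ("push", 2), ("pop",), ("push", 3)]
rewritten = simplify(original)    # [("push", 1), ("push", 3)]
# Self-checking test: both sequences should leave the stack in equivalent states.
assert apply_sequence(original).equivalent(apply_sequence(rewritten))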
