
Software unit test coverage and adequacy

Published: 01 December 1997

Abstract

Objective measurement of test quality is one of the key issues in software testing. It has been a major research focus for the last two decades. Many test criteria have been proposed and studied for this purpose. Various kinds of rationales have been presented in support of one criterion or another. We survey the research work in this area. The notion of adequacy criteria is examined together with its role in software dynamic testing. A review of criteria classification is followed by a summary of the methods for comparison and assessment of criteria.
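To make the notion of an adequacy criterion concrete, here is a minimal sketch, not taken from the survey: the toy program, the decision label "b1", and the hand-written instrumentation hook are hypothetical. It treats branch coverage as a measurement that maps a test set to the fraction of branch outcomes it exercises, which can then serve either as a stopping rule or as a score of test quality.

```python
from typing import Callable, Iterable, Set, Tuple

Branch = Tuple[str, bool]  # (decision id, outcome taken)

def branch_coverage(run: Callable[[int], Set[Branch]],
                    tests: Iterable[int],
                    all_branches: Set[Branch]) -> float:
    """Adequacy degree of a test set: fraction of branch outcomes it exercises."""
    covered: Set[Branch] = set()
    for t in tests:
        covered |= run(t)
    return len(covered & all_branches) / len(all_branches)

# Toy program under test: a single decision, labelled "b1".
def program(x: int) -> int:
    return -x if x < 0 else x

def run_instrumented(x: int) -> Set[Branch]:
    # A coverage tool would record this automatically; here it is noted by hand.
    program(x)
    return {("b1", x < 0)}

ALL_BRANCHES = {("b1", True), ("b1", False)}
print(branch_coverage(run_instrumented, [5], ALL_BRANCHES))      # 0.5 -> not branch-adequate
print(branch_coverage(run_instrumented, [5, -3], ALL_BRANCHES))  # 1.0 -> branch-adequate
```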




    Reviews

    Reviewer: Curtis Roger Cook

    The authors survey research in test data adequacy criteria, or criteria that test sets should satisfy. Since finding adequacy criteria that are both reliable and valid is impractical, research has shifted to finding practical approximations for stopping rules or measures of the quality of the testing. Classifying test adequacy criteria by the underlying testing approach provides an excellent organizational framework for this paper. The three basic testing approaches, which correspond to the main sections of the paper, are structural testing, fault-based testing, and error-based testing. Adequacy criteria for structural testing include control-flow and data-flow criteria for both programs and specifications. Fault-based criteria, which include error seeding, mutation testing, and perturbation testing, measure the fault-detecting ability of a test set. Error-based criteria measure how well the test set checks error-prone points. The authors make the important point that research into test adequacy criteria has been done mostly by academics, and industry has been slow to accept test adequacy measurement. The paper contains a section comparing the various adequacy criteria according to their fault-detecting ability, software reliability, and test cost. It indicates that much more work needs to be done if test adequacy criteria are to be more widely adopted by industry. The final section of the paper is an axiomatic study of the properties of adequacy criteria. Although the title “Software Unit Test Adequacy” would better reflect the contents of this paper, it is a readable, well-organized survey of test adequacy criteria.
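As a concrete illustration of the fault-based criteria mentioned in the review, the sketch below shows mutation analysis on a toy program. It is hypothetical, not drawn from the paper or from a mutation system such as Mothra: the program, the two hand-written mutants, and the scoring function are illustrative only. A test set's mutation score is the proportion of mutants it kills, that is, mutants for which at least one test makes the mutant's output differ from the original program's.

```python
from typing import Callable, List, Sequence, Tuple

def original(a: int, b: int) -> int:
    return a if a > b else b       # intended behaviour: max(a, b)

# Hand-written mutants, each with one small syntactic change to the original.
mutants: List[Callable[[int, int], int]] = [
    lambda a, b: a if a < b else b,   # relational-operator mutation (> becomes <)
    lambda a, b: a,                   # statement mutation: second operand ignored
]

def mutation_score(tests: Sequence[Tuple[int, int]]) -> float:
    """Fraction of mutants killed: some test separates the mutant from the original."""
    killed = sum(
        1 for m in mutants
        if any(m(a, b) != original(a, b) for a, b in tests)
    )
    return killed / len(mutants)

print(mutation_score([(2, 2)]))          # 0.0 -> no mutant distinguished
print(mutation_score([(1, 5), (7, 3)]))  # 1.0 -> adequate w.r.t. these mutants
```

The same ratio-style measurement underlies error-seeding estimates as well: adequacy is judged by the proportion of planted faults the test set manages to reveal.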


    • Published in

      ACM Computing Surveys, Volume 29, Issue 4 (Dec. 1997), 129 pages
      ISSN: 0360-0300
      EISSN: 1557-7341
      DOI: 10.1145/267580

      Copyright © 1997 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States


