Abstract
Objective measurement of test quality is one of the key issues in software testing, and it has been a major research focus for the last two decades. Many test criteria have been proposed and studied for this purpose, and various rationales have been offered in support of one criterion or another. We survey the research in this area. The notion of adequacy criteria is examined together with its role in dynamic software testing. A review of criteria classification is followed by a summary of methods for comparing and assessing criteria.
- ADRION, W. R., BRANSTAD, M. A., AND CHERNI- AVSKY, J. C. 1982. Validation, verification, and testing of computer software. Comput. Surv. 14, 2 (June), 159-192.]] Google ScholarDigital Library
- AFIFI, F. H., WHITE, L. J., AND ZEIL, S.J. 1992. Testing for linear errors in nonlinear computer programs. In Proceedings of the 14th IEEE International Conference on Software Engineering (May), 81-91.]] Google ScholarDigital Library
- AMLA, N. AND AMMANN, P. 1992. Using Z specifications in category partition testing. In Proceedings of the Seventh Annual Conference on Computer Assurance (June), IEEE, 3-10.]]Google ScholarCross Ref
- AMMANN, P. AND OFFUTT, J. 1994. Using formal methods to derive test frames in categorypartition testing. In Proceedings of the Ninth Annual Conference on Computer Assurance (Gaithersburg, MD, June), IEEE, 69-79.]]Google Scholar
- BACHE, R. AND MULLERBURG, M. 1990. Measures of testability as a basis for quality assurance. Softw. Eng. J. (March), 86-92.]] Google ScholarDigital Library
- BAKER, A. L., HOWATT, J. W., AND BIEMAN, J. M. 1986. Criteria for finite sets of paths that characterize control flow. In Proceedings of the 19th Annual Hawaii International Conference on System Sciences, 158-163.]]Google Scholar
- BASILI, V. R. AND RAMSEY, g. 1984. Structural coverage of functional testing. Tech. Rep. TR- 1442, Department of Computer Science, University of Maryland at College Park, Sept.]]Google Scholar
- BASILI, V. R. AND SELBY, R. W. 1987. Comparing the effectiveness of software testing. IEEE Trans. Softw. Eng. SE-13, 12 (Dec.), 1278-1296.]] Google ScholarDigital Library
- BAZZICHI, F. AND SPADAFORA, I. 1982. An automatic generator for compiler testing. IEEE Trans. Softw. Eng. SE-8, 4 (July), 343-353.]]Google ScholarDigital Library
- BEIZER, B. 1983. Software Testing Techniques. Van Nostrand Reinhold, New York.]] Google ScholarDigital Library
- BEIZER, B. 1984. Software System Testing and Quality Assurance. Van Nostrand Reinhold, New York.]] Google ScholarDigital Library
- BENGTSON, N.M. 1987. Measuring errors in operational analysis assumptions, IEEE Trans. Softw. Eng. SE-13, 7 (July), 767-776.]] Google ScholarDigital Library
- BENTLY, W. G. AND MILLER, E.F. 1993. CT coverage-initial results. Softw. Quality J. 2, 1, 29-47.]]Google ScholarCross Ref
- BERNOT, G., GAUDEL, M. C., AND MARRE, B. 1991. Software testing based on formal specifications: A theory and a tool. Softw. Eng. J. (Nov.), 387-405.]] Google ScholarDigital Library
- BIEMAN, J. M. AND SCHULTZ, J.L. 1992. An empirical evaluation (and specification) of the all du-paths testing criterion. Softw. Eng. J. (Jan.), 43-51.]] Google ScholarDigital Library
- BIRD, D. L. AND MUNOZ, C.U. 1983. Automatic generation of random self-checking test cases. IBM Syst. J. 22, 3.]]Google ScholarCross Ref
- BOUGE, L., CHOQUET, N., FRIBOURG, L., AND GAU- DEL, M.-C. 1986. Test set generation from algebraic specifications using logic programming. J. Syst. Softw. 6, 343-360.]] Google ScholarDigital Library
- BUDD, T.A. 1981. Mutation analysis: Ideas, examples, problems and prospects. In Computer Program Testing, Chandrasekaran and Radicchi, Eds., North Holland, 129-148.]]Google Scholar
- BUDD, T. A. AND ANGLUIN, D. 1982. Two notions of correctness and their relation to testing. Acta Inf. 18, 31-45.]]Google ScholarDigital Library
- BUDD, T. A., LIPTON, R. J., SAYWARD, F. G., AND DEMILLO, R.A. 1978. The design of a prototype mutation system for program testing. In Proceedings of National Computer Conference, 623-627.]]Google Scholar
- CARVER, R. AND KuO-CHUNG, T. 1991. Replay and testing for concurrent programs. IEEE Softw. (March), 66-74.]] Google ScholarDigital Library
- CHAAR, J. K., HALLIDAY, M. J., BHANDARI, I. S., AND CHILLAREGE, R. 1993. In-process evaluation for software inspection and test. IEEE Trans. Softw. Eng. 19, II, 1055-1070.]] Google ScholarDigital Library
- CHANDRASEKARAN, B. AND RADICCHI, S. (EDS.) 1981. Computer Program Testing, North- Holland.]]Google Scholar
- CHANG, C. C. AND KEISLER, H.J. 1973. Model Theory. North-Holland, Amsterdam.]]Google Scholar
- CHANG, Y.-F. AND AOYAMA, M. 1991. Testing the limits of test technology. IEEE Softw. (March), 9-11.]] Google ScholarDigital Library
- CHERNIAVSKY, J. C. AND SMITH, C. H. 1987. A recursion theoretic approach to program testing. IEEE Trans. Softw. Eng. SE-13, 7 (July), 777-784.]] Google ScholarDigital Library
- CHERNIAVSKY, J. C. AND SMITH, C.H. 1991. On Weyuker's axioms for software complexity measures. IEEE Trans. Softw. Eng. SE-17, 6 (June), 636-638.]] Google ScholarDigital Library
- CHOI, B., MATHUR, A., AND PATTISON, B. 1989. PMothra: Scheduling mutants for execution on a hypercube. In Proceedings of SIGSOFT Symposium on Software Testing, Analysis and Verification 3 (Dec.) 58-65.]] Google ScholarDigital Library
- CHUSHO, T. 1987. Test data selection and quality estimation based on the concept of essential branches for path testing. IEEE Trans. Softw. Eng. SE-13, 5 (May), 509-517.]] Google ScholarDigital Library
- CLARKE, L. A., HASSELL, J., AND RICHARDSON, D. J. 1982. A close look at domain testing. IEEE Trans. Softw. Eng. SE-8, 4 (July), 380-390.]]Google ScholarDigital Library
- CLARKE, L. A., PODGURSKI, A., RICHARDSON, D. J., AND ZEIL, S.J. 1989. A formal evaluation of data flow path selection criteria. IEEE Trans. Softw. Eng. 15, 11 (Nov.), 1318-1332.]] Google ScholarDigital Library
- CURRIT, P. A., DYER, M., AND MILLS, H.D. 1986. Certifying the reliability of software. IEEE Trans. Softw. Eng. SE-6, 1 (Jan.) 2-13.]] Google ScholarDigital Library
- DAVIS, M. AND WEYUKER, E. 1988. Metric spacebased test-data adequacy criteria. Comput. J. 13, 1 (Feb.), 17-24.]] Google ScholarDigital Library
- DEMILLO, R. A. AND MATHUR, A. P. 1990. On the use of software artifacts to evaluate the effectiveness of mutation analysis for detecting errors in production software. In Proceedings of 13th Minnowbrook Workshop on Software Engineering (July 24-27, Blue Mountain Lake, NY), 75-77.]]Google Scholar
- DEMILLO, R. A. AND OFFUTT, A. J. 1991. Constraint-based automatic test data generation. IEEE Trans. Softw. Eng. 17, 9 (Sept.), 900- 910.]] Google ScholarDigital Library
- DEMILLO, R. A. AND OFFUTT, A.J. 1993. Experi mental results from an automatic test case generator. ACM Trans. Softw. Eng. Methodol. 2, 2 (April), 109-127.]] Google ScholarDigital Library
- DEMILLO, R. A., GUINDI, D. S., MCCRACKEN, W. M., OFFUTT, A. J., AND KING, K. N. 1988. An extended overview of the Mothra software testing environment. In Proceedings of SIG- SOFT Symposium on Software Testing, Analysis and Verification 2, (July), 142-151.]]Google ScholarCross Ref
- DEMILLO, R. A., LIPTON, R. J., AND SAYWARD, F.G. 1978. Hints on test data selection: Help for the practising programmer. Computer 11, (April), 34-41.]]Google ScholarDigital Library
- DEMILLO, R. A., MCCRACKEN, W. M., MATIN, R. J., AND PASSUFIUME, J.F. 1987. Software Testing and Evaluation, Benjamin-Cummings, Redwood City, CA.]] Google ScholarDigital Library
- DENNEY, R. 1991. Test-case generation from Prolog-based specifications. IEEE Softw. (March), 49-57.]] Google ScholarDigital Library
- DIJKSTRA, E.W. 1972. Notes on structured programming. In Structured Programming, by O.-J. Dahl, E. W. Dijkstra, and C. A. R. Hoare, Academic Press.]] Google ScholarDigital Library
- DOWNS, T. 1985. An approach to the modelling of software testing with some applications. IEEE Trans. Softw. Eng. SE-11, 4 (April), 375-386.]] Google ScholarDigital Library
- DOWNS, T. 1986. Extensions to an approach to the modelling of software testing with some performance comparisons. IEEE Trans. Softw. Eng. SE-12, 9 (Sept.), 979-987.]] Google ScholarDigital Library
- DOWNS, T. AND GARRONE, P. 1991. Some new models of software testing with performance comparisons. IEEE Trans. Rel. 40, 3 (Aug.), 322-328.]]Google ScholarCross Ref
- DUNCAN, I. M. M. AND ROBSON, D.J. 1990. Ordered mutation testing. ACM SIGSOFT Softw. Eng. Notes 15, 2 (April), 29-30.]] Google ScholarDigital Library
- DURAN, J. W. AND NTAFOS, S. 1984. An evaluation of random testing. IEEE Trans. Softw. Eng. SE-IO, 4 (July), 438-444.]]Google ScholarDigital Library
- FENTON, N. 1992. When a software measure is not a measure. Softw. Eng. J. (Sept.), 357- 362.]] Google ScholarDigital Library
- FENTON, N.E. 1991. Software metrics--a rigorous approach. Chapman & Hall, London.]] Google ScholarDigital Library
- FENTON, N. E., WHITTY, R. W., AND KAPOSI, A. A. 1985. A generalised mathematical theory of structured programming. Theor. Comput. Sci. 36, 145-171.]] Google ScholarDigital Library
- FORMAN, I. R. 1984. An algebra for data flow anomaly detection. In Proceedings of the Seventh International Conference on Software Engineering (Orlando, FL), 250-256.]] Google ScholarDigital Library
- FOSTER, Z. n. 1980. Error sensitive test case analysis (ESTCA). IEEE Trans. Softw. Eng. SE-6, 3 (May), 258-264.]]Google ScholarDigital Library
- FRANKL, P. G. AND WEISS, S.N. 1993. An experimental comparison of the effectiveness of branch testing and data flow testing. IEEE Trans. Softw. Eng. 19, 8 (Aug.), 774-787.]] Google ScholarDigital Library
- FRANKL, P. G. AND WEYUKER, J. E. 1988. An applicable family of data flow testing criteria. IEEE Trans. Softw. Eng. SE-14, 10 (Oct.), 1483-1498.]] Google ScholarDigital Library
- FRANKL, P. G. AND WEYUKER, J. E. 1993a. A formal analysis of the fault-detecting ability of testing methods. IEEE Trans. Softw. Eng. 19, 3 (March), 202-213.]] Google ScholarDigital Library
- FRANKL, P. G. AND WEYUKER, E.J. 1993b. Provable improvements on branch testing. IEEE Trans. Softw. Eng. 19, 10, 962-975.]] Google ScholarDigital Library
- FREEDMAN, R. S. 1991. Testability of software components. IEEE Trans. Softw. Eng. SE-17, 6 (June), 553-564.]] Google ScholarDigital Library
- FRITZSON, P., GYIMOTHY, T., KAMKAR, M., AND SHAHMEHRI, N. 1991. Generalized algorithmic debugging and testing. In Proceedings of ACM SIGPLAN Conference on Programming Language Design and Implementation (Toronto, June 26-28).]] Google ScholarDigital Library
- FUJIWARA, S., v. BOCHMANN, G., KHENDEK, F., AMALOU, M., AND GHEDAMSI, A. 1991. Test selection based on finite state models. IEEE Trans. Softw. Eng. SE-17, 6 (June), 591-603.]] Google ScholarDigital Library
- GAUDEL, M.-C. AND MARRE, B. 1988. Algebraic specifications and software testing: Theory and application. In Rapport LRI 407.]]Google Scholar
- GELPERIN, D. AND HETZEL, B. 1988. The growth of software testing. Commun. ACM 31, 6 (June), 687-695.]] Google ScholarDigital Library
- GIRGIS, M. R. 1992. An experimental evaluation of a symbolic execution system. Softw. Eng. J. (July), 285-290.]] Google ScholarDigital Library
- GOLD, E. M. 1967. Language identification in the limit. Inf. Cont. 10, 447-474.]]Google ScholarCross Ref
- GOODENOUGH, J. B. AND GERHART, S. L. 1975. Toward a theory of test data selection. IEEE Trans. Softw. Eng. SE-3 (June).]]Google Scholar
- GOODENOUGH, J. B. AND GERHART, S. L. 1977. Toward a theory of testing: Data selection criteria. In Current Trends in Programming Methodology, Vol. 2, R. T. Yeh, Ed., Prentice- Hall, Englewood Cliffs, NJ, 44-79.]]Google Scholar
- GOPAL, A. AND BUDD, T. 1983. Program testing by specification mutation. Tech. Rep. TR 83- 17, University of Arizona, Nov.]]Google Scholar
- GOURLAY, J. 1983. A mathematical framework for the investigation of testing. IEEE Trans. Softw. Eng. SE-9, 6 (Nov.), 686-709.]]Google ScholarDigital Library
- HALL, P. A. V. 1991. Relationship between specifications and testing. Inf. Softw. Technol. 33, 1 (Jan./Feb.), 47-52.]] Google ScholarDigital Library
- HALL, P. A. V. AND HIERONS, R. 1991. Formal methods and testing. Tech. Rep. 91/16, Dept. of Computing, The Open University.]]Google Scholar
- HAMLET, D. AND TAYLOR, R. 1990. Partition testing does not inspire confidence. IEEE Trans. Softw. Eng. 16 (Dec.), 206-215.]] Google ScholarDigital Library
- HAMLET, D., GIFFORD, B., AND NIKOLIK, B. 1993. Exploring dataflow testing of arrays. In Proceedings of 15th ICSE (May), 118-129.]] Google ScholarDigital Library
- HAMLET, R. 1989. Theoretical comparison of testing methods. In Proceedings of SIGSOFT Symposium on Software Testing, Analysis, and Verification 3 (Dec.), 28-37.]] Google ScholarDigital Library
- HAMLET, R.G. 1977. Testing programs with the aid of a compiler. IEEE Trans. Softw. Eng. 3, 4 (July), 279-290.]]Google ScholarDigital Library
- HARROLD, M. J., MCGREGOR, J. D., AND FITZ- PATRICK, K.g. 1992. Incremental testing of object-oriented class structures. In Proceedings of 14th ICSE (May) 68-80.]] Google ScholarDigital Library
- HARROLD, M. g. AND SOFFA, M.L. 1990. Interprocedural data flow testing. In Proceedings of SIGSOFT Symposium on Software Testing, Analysis, and Verification 3 (Dec.), 158-167.]] Google ScholarDigital Library
- HARROLD, M. J. AND SOFFA, M.L. 1991. Selecting and using data for integration testing. IEEE Softw. (March), 58-65.]] Google ScholarDigital Library
- HARTWICK, D. 1977. Test planning. In Proceedings of National Computer Conference, 285- 294.]]Google ScholarDigital Library
- HAYES, I. J. 1986. Specification directed module testing. IEEE Trans. Softw. Eng. SE-12, 1 (Jan.), 124-133.]] Google ScholarDigital Library
- HENNELL, M. A., HEDLEY, D., AND RIDDELL, I. J. 1984. Assessing a class of software tools. In Proceedings of the Seventh ICSE, 266-277.]] Google ScholarDigital Library
- HERMAN, P. 1976. A data flow analysis approach to program testing. Aust. Comput. J. 8, 3 (Nov.), 92-96.]]Google Scholar
- HETZEL, W. 1984. The Complete Guide to Software Testing, Collins.]] Google ScholarDigital Library
- HIERONS, R. 1992. Software testing from formal specification. Ph.D. Thesis, Brunel University, UK.]]Google Scholar
- HOFFMAN, D. M. AND STROOPER, P. 1991. Automated module testing in Prolog. IEEE Trans. Softw. Eng. 17, 9 (Sept.), 934-943.]] Google ScholarDigital Library
- HORGAN, J. R. AND LONDON, S. 1991. Data flow coverage and the C language. In Proceedings of TAV4 (Oct.), 87-97.]] Google ScholarDigital Library
- HORGAN, J. R. AND MATHUR, A.P. 1992. Assessing testing tools in research and education. IEEE Softw. (May), 61-69.]] Google ScholarDigital Library
- HOWDEN, W.E. 1975. Methodology for the generation of program test data. IEEE Trans. Comput. 24, 5 (May), 554-560.]]Google Scholar
- HOWDEN, W. E. 1976. Reliability of the path analysis testing strategy. IEEE Trans. Softw. Eng. SE-2, (Sept.), 208-215.]]Google ScholarDigital Library
- HOWDEN, W.E. 1977. Symbolic testing and the DISSECT symbolic evaluation system. IEEE Trans. Softw. Eng. SE-3 (July), 266-278.]]Google ScholarDigital Library
- HOWDEN, W.E. 1978a. Algebraic program testing. ACTA Inf. 10, 53-66.]]Google ScholarDigital Library
- HOWDEN, W.E. 1978b. Theoretical and empirical studies of program testing. IEEE Trans. Softw. Eng. SE-4, 4 (July), 293-298.]]Google ScholarDigital Library
- HOWDEN, W.E. 1978c. An evaluation of the effectiveness of symbolic testing. Softw. Pract. Exper. 8, 381-397.]]Google ScholarCross Ref
- HOWDEN, W. E. 1980a. Functional program testing. IEEE Trans. Softw. Eng. SE-6, 2 (March), 162-169.]]Google ScholarDigital Library
- HOWDEN, W. E. 1980b. Functional testing and design abstractions. J. Syst. Softw. 1, 307-313.]]Google ScholarDigital Library
- HOWDEN, W.E. 1981. Completeness criteria for testing elementary program functions. In Proceedings of Fifth International Conference on Software Engineering (March), 235-243.]] Google ScholarDigital Library
- HOWDEN, W. E. 1982a. Validation of scientific programs. Comput. Surv. 14, 2 (June), 193-227.]] Google ScholarDigital Library
- HOWDEN, W. E. 1982b. Weak mutation testing and completeness of test sets. IEEE Trans. Softw. Eng. SE-8, 4 (July), 371-379.]]Google ScholarDigital Library
- HOWDEN, W.E. 1985. The theory and practice of functional testing. IEEE Softw. (Sept.), 6-17.]]Google ScholarDigital Library
- HOWDEN, W.E. 1986. A functional approach to program testing and analysis. IEEE Trans. Softw. Eng. SE-12, 10 (Oct.), 997-1005.]] Google ScholarDigital Library
- HOWDEN, W.E. 1987. Functional program testing and analysis. McGraw-Hill, New York.]] Google ScholarDigital Library
- HUTCHINS, M., FOSTER, H., GORADIA, T., AND OS- TRAND, T. 1994. Experiments on the effectiveness of dataflow- and controlflow-based test adequacy criteria. In Proceedings of 16th IEEE International Conference on Software Engineering (May).]] Google ScholarDigital Library
- INCE, D. C. 1987. The automatic generation of test data. Comput. J. 30, 1, 63-69.]]Google ScholarCross Ref
- INCE, D.C. 1991. Software testing. In Software Engineer's Reference Book, J. A. McDermid, Ed., Butterworth-Heinemann (Chapter 19).]]Google Scholar
- KARASIK, M. S. 1985. Environmental testing techniques for software certification. IEEE Trans. Softw. Eng. SE-11, 9 (Sept.), 934-938.]] Google ScholarDigital Library
- KEMMERER, R.A. 1985. Testing formal specifications to detect design errors. IEEE Trans. Softw. Eng. SE-11, 1 (Jan.), 32-43.]]Google ScholarDigital Library
- KERNIGHAN, B. W. AND PLAUGER, P. J. 1981. Software Tools in Pascal, Addison-Wesley, Reading, MA.]] Google ScholarDigital Library
- KING, K. N. AND OFFUTT, A. J. 1991. A FOR- TRAN language system for mutation-based software testing. Softw. Pract. Exper. 21, 7 (July), 685-718.]] Google ScholarDigital Library
- KOREL, B., WEDDE, H., AND FERGUSON, R. 1992. Dynamic method of test data generation for distributed software. Inf. Softw. Tech. 34, 8 (Aug.), 523-532.]] Google ScholarDigital Library
- KOSARAJU, S. 1974. Analysis of structured programs. J. Comput. Syst. Sci. 9, 232-255.]]Google ScholarDigital Library
- KRANTZ, D. H., LUCE, R. D., SUPPES, P., AND TVER- SKY, A. 1971. Foundations of Measurement, Vol. 1: Additive and Polynomial Representations. Academic Press, New York.]]Google Scholar
- KRAUSER, E. W., MATHUR, A. P., AND REGO, V. J. 1991. High performance software testing on SIMD machines. IEEE Trans. Softw. Eng. SE-17, 5 (May), 403-423.]] Google ScholarDigital Library
- LASKI, J. 1989. Testing in the program development cycle. Softw. Eng. J. (March), 95-106.]] Google ScholarDigital Library
- LASKI, J. AND KOREL, B. 1983. A data flow oriented program testing strategy. IEEE Trans. Softw. Eng. SE-9, (May), 33-43.]]Google Scholar
- LASKI, J., SZERMER, W., AND LUCZYCKI, P. 1993. Dynamic mutation testing in integrated regression analysis. In Proceedings of 15th International Conference on Software Engineering (May), 108-117.]] Google ScholarDigital Library
- LAUTERBACH, L. AND RANDALL, W. 1989. Experimental evaluation of six test techniques. In Proceedings of COMPASS 89 (Washington, DC, June), 36-41.{]]Google ScholarCross Ref
- LEVENDEL, Y. 1991. Improving quality with a manufacturing process. IEEE Softw. (March), 13-25.]] Google ScholarDigital Library
- LINDQUIST, T. E. AND JENKINS, g.R. 1987. Test case generation with IOGEN. In Proceedings of the 20th Annual Hawaii International Conference on System Sciences, 478-487.]]Google Scholar
- LITTLEWOOD, B. AND STRIGINI, L. 1993. Validation of ultra-high dependability for softwarebased systems. C ACM 36, 11 (Nov.), 69-80.]] Google ScholarDigital Library
- LIu, L.-L. AND ROBSON, D. J. 1989. Symbolic evaluation in software testing, the final report. Computer Science Tech. Rep. 10/89, School of Engineering and Applied Science, University of Durham, June.]]Google Scholar
- LUCE, R. D., KRANTZ, D. H., SUPPES, P., AND TVER- SKY, A. 1990. Foundations of Measurement, Vol. 3: Representation, Axiomatization, and Invariance. Academic Press, San Diego.]]Google Scholar
- MALAIYA, Y. Z., VONMAYRHAUSER, A., AND SRIMANI, P.K. 1993. An examination of fault exposure ratio. IEEE Trans. Softw. Eng. 19, 11, 1087-1094.]] Google ScholarDigital Library
- MARICK, B. 1991. The weak mutation hypothesis. In Proceedings of SIGSOFT Symposium on Software Testing, Analysis, and Verification 4 (Oct.), 190-199.]] Google ScholarDigital Library
- MATHUR, A. P. 1991. Performance, effectiveness, and reliability issues in software testing. In Proceedings of the 15th Annual International Computer Software and Applications Conference (Tokyo, Sept.), 604-605.]]Google ScholarCross Ref
- MARSHALL, A. C. 1991. A Conceptual model of software testing. J. Softw. Test. Ver. Rel. 1, 3 (Dec.), 5-16.]]Google Scholar
- MCCABE, T. g. 1976. A complexity measure. IEEE Trans. Softw. Eng. SE-2, 4, 308-320.]]Google ScholarDigital Library
- MCCABE, T. J. (ED.) 1983. Structured Testing. IEEE Computer Society Press, Los Alamitos, CA.]]Google Scholar
- MCCABE, T. J. AND SCHULMEYER, G. G. 1985. System testing aided by structured analysis: A practical experience. IEEE Trans. Softw. Eng. SE-11, 9 (Sept.), 917-921.]] Google ScholarDigital Library
- MCMULLIN, P. R. AND GANNON, J.D. 1983. Combining testing with specifications: A case study. IEEE Trans. Softw. Eng. SE-9, 3 (May), 328-334.]]Google ScholarDigital Library
- MEEK, B. AND SIU, K.K. 1988. The effectiveness of error seeding. Alvey Project SE/064: Quality evaluation of programming language processors, Report No. 2, Computing Centre, King's College London, Oct.]]Google Scholar
- MILLER, E. AND HOWDEN, W.E. 1981. Tutorial: Software Testing and Validation Techniques, (2nd ed.). IEEE Computer Society Press, Los Alamitos, CA.]]Google Scholar
- MILLER, K. W., MORELL, L. J., NOONAN, R. E., PARK, S. K., NICOL, D. M., MURRILL, B. W., AND VOAS, J.M. 1992. Estimating the probability of failure when testing reveals no failures. IEEE Trans. Softw. Eng. 18, 1 (Jan.), 33-43.]] Google ScholarDigital Library
- MORELL, L. J. 1990. A theory of fault-based testing. IEEE Trans. Softw. Eng. 16, 8 (Aug.), 844-857.]] Google ScholarDigital Library
- MYERS, G.g. 1977. An extension to the cyclomatic measure of program complexity. SIG- PLAN No. 12, 10, 61-64.]] Google ScholarDigital Library
- MYERS, G.J. 1979. The Art of Software Testing. John Wiley and Sons, New York.]] Google ScholarDigital Library
- MYERS, J. P., JR. 1992. The complexity of software testing. Softw. Eng. J. (Jan.), 13-24.]] Google ScholarDigital Library
- NTAFOS, S. C. 1984. An evaluation of required element testing strategies. In Proceedings of the Seventh International Conference on Software Engineering, 250-256.]] Google ScholarDigital Library
- NTAFOS, S.C. 1984. On required element testing. IEEE Trans. Softw. Eng. SE-IO, 6 (Nov.), 795-803.]]Google ScholarDigital Library
- NTAFOS, S. C. 1988. A comparison of some structural testing strategies. IEEE Trans. Softw. Eng. SE-14 (June), 868-874.]] Google ScholarDigital Library
- OFFUTT, A.J. 1989. The coupling effect: Fact or fiction. In Proceedings of SIGSOFT Symposium on Software Testing, Analysis, and Verification 3 (Dec. 13-15), 131-140.]] Google ScholarDigital Library
- OFFUTT, A.J. 1992. Investigations of the software testing coupling effect. ACM Trans. Softw. Eng. Methodol. 1, 1 (Jan.), 5-20.]] Google ScholarDigital Library
- OFFUTT, A. J. AND LEE, S.D. 1991. How strong is weak mutation? In Proceedings of SIG- SOFT Symposium on Software Testing, Analysis, and Verification 4 (Oct.), 200-213.]] Google ScholarDigital Library
- OFFUTT, n. J., ROTHERMEL, G., AND ZAPF, C. 1993. An experimental evaluation of selective mutation. In Proceedings of 15th ICSE (May), 100-107.]] Google ScholarDigital Library
- OSTERWEIL, L. AND CLARKE, L.A. 1992. A proposed testing and analysis research initiative. IEEE Softw. (Sept.), 89-96.]] Google ScholarDigital Library
- OSTRAND, T. J. AND BALCER, M. J. 1988. The category-partition method for specifying and generating functional tests. Commun. ACM 31, 6 (June), 676-686.]] Google ScholarDigital Library
- OSTRAND, T. J. AND WEYUKER, E.J. 1991. Dataflow-based test adequacy analysis for languages with pointers. In Proceedings of SIG- SOFT Symposium on Software Testing, Analysis, and Verification 4, (Oct.), 74-86.]] Google ScholarDigital Library
- OULD, M. A. AND UNWIN, C., EDS. 1986. Testing in Software Development. Cambridge University Press, New York.]] Google ScholarDigital Library
- PAIGE, M. R. 1975. Program graphs, an algebra, and their implication for programming. IEEE Trans. Softw. Eng. SE-1, 3, (Sept.), 286-291.]]Google ScholarDigital Library
- PAIGE, M. R. 1978. An analytical approach to software testing. In Proceedings COMP- SAC'78, 527-532.]]Google ScholarCross Ref
- PANDI, H. D., RYDER, B. G., AND LANDI, W. 1991. Interprocedural Def-Use associations in C programs. In Proceedings of SIGSOFT Symposium on Software Testing, Analysis, and Verification 4, (Oct.), 139-153.]] Google ScholarDigital Library
- PARRISH, A. AND ZWEBEN, S.H. 1991. Analysis and refinement of software test data adequacy properties. IEEE Trans. Softw. Eng. SE-17, 6 (June), 565-581.]] Google ScholarDigital Library
- PARRISH, A. S. AND ZWEBEN, S.H. 1993. Clarifying some fundamental concepts in software testing. IEEE Trans. Softw. Eng. 19, 7 (July), 742-746.]] Google ScholarDigital Library
- PETSCHENIK, N.H. 1985. Practical priorities in system testing. IEEE Softw. (Sept.), 18-23.]]Google ScholarDigital Library
- PIWOWARSKI, P., OHBA, M., AND CARUSO, J. 1993. Coverage measurement experience during function testing. In Proceedings of the 15th ICSE (May), 287-301.]] Google ScholarDigital Library
- PODGURSKI, A. AND CLARKE, L. 1989. The implications of program dependences for software testing, debugging and maintenance. In Proceedings of SIGSOFT Symposium on Software Testing, Analysis, and Verification 3, (Dec.), 168-178.]] Google ScholarDigital Library
- PODGURSKI, A. AND CLARKE, L.A. 1990. A formal model of program dependences and its implications for software testing, debugging and maintenance. IEEE Trans. Softw. Eng. 16, 9 (Sept.), 965-979.]] Google ScholarDigital Library
- PRATHER, R. E. AND MYERS, g.P. 1987. The path prefix software testing strategy. IEEE Trans. Softw. Eng. SE-13, 7 (July).]] Google ScholarDigital Library
- PROGRAM ANALYSIS LTD., UK. 1992. Testbed technical description. May.]]Google Scholar
- RAPPS, S. AND WEYUKER, E. J. 1985. Selecting software test data using data flow information. IEEE Trans. Softw. Eng. SE-11, 4 (April), 367-375.]] Google ScholarDigital Library
- RICHARDSON, D. J. AND CLARKE, L. A. 1985. Partition analysis: A method combining testing and verification. IEEE Trans. Softw. Eng. SE-11, 12 (Dec.), 1477-1490.{]]Google ScholarDigital Library
- RICHARDSON, D. J., AHA, S. L., AND O'MALLEY, T. O. 1992. Specification-based test oracles for reactive systems. In Proceedings of 14th International Conference on Software Engineering (May), 105-118.]] Google ScholarDigital Library
- RICHARDSON, D. J. AND THOMPSON, M. C. 1988. The RELAY model of error detection and its application. In Proceedings of SIG- SOFT Symposium on Software Testing, Analysis, and Verification 2 (July).]]Google ScholarCross Ref
- RICHARDSON, D. J. AND THOMPSON, M. C. 1993. An analysis of test data selection criteria using the relay model of fault detection. IEEE Trans. Softw. Eng. 19, 6, 533-553.]] Google ScholarDigital Library
- RIDDELL, I. J., HENNELL, M. A., WOODWARD, M. R., AND HEDLEY, D. 1982. Practical aspects of program mutation. Tech. Rep., Dept. of Computational Science, University of Liverpool, UK.]]Google Scholar
- ROBERTS, F.S. 1979. Measurement Theory, Encyclopedia of Mathematics and Its Applications, Vol. 7. Addison-Wesley, Reading, MA.]]Google Scholar
- ROE, R. P. AND ROWLAND, J.H. 1987. Some theory concerning certification of mathematical subroutines by black box testing. IEEE Trans. Softw. Eng. SE-13, 6 (June), 677-682.]] Google ScholarDigital Library
- ROUSSOPOULOS, N. AND YEH, R.T. 1985. SEES: A software testing environment support system. IEEE Trans. Softw. Eng. SE-11, 4, (April), 355- 366.]] Google ScholarDigital Library
- RUDNER, B. 1977. Seeding/tagging estimation of software errors: Models and estimates. Rome Air Development Centre, Rome, NY, RADC-TR-77-15, also AD-A036 655.]]Google Scholar
- SARIKAYA, B., BOCHMANN, G. V., AND CERNY, E. 1987. A test design methodology for protocol testing. IEEE Trans. Softw. Eng. SE-13, 5 (May), 518-531.]] Google ScholarDigital Library
- SHERER, S.A. 1991. A cost-effective approach to testing. IEEE Softw. (March), 34-40.]] Google ScholarDigital Library
- SOFTWARE RESEARCH. 1992. Software Test- Works--Software Testers Workbench System. Software Research, Inc.]]Google Scholar
- SOLHEIM, J. A. AND ROWLAND, J. H. 1993. An empirical-study of testing and integration strategies using artificial software systems. IEEE Trans. Softw. Eng. 19, 10, 941-949.]] Google ScholarDigital Library
- STOCKS, P. A. AND CARRINGTON, D.A. 1993. Test templates: A specification-based testing framework. In Proceedings of 15th International Conference on Software Engineering (May), 405-414.]] Google ScholarDigital Library
- Su, J. AND RITTER, P. R. 1991. Experience in testing the Motif interface. IEEE Softw. (March), 26-33.]] Google ScholarDigital Library
- SUPPES, P., KRANTZ, D. H., LUCE, R. D., AND TVER- SKY, A. 1989. Foundations of Measurement, Vol. 2: Geometrical, Threshold, and Probabilistic Representations. Academic Press, San Diego.]]Google Scholar
- TAI, K.-C. 1993. Predicate-based test generation for computer programs. In Proceedings of 15th International Conference on Software Engineering (May), 267-276.]] Google ScholarDigital Library
- TAKAHASHI, M. AND KAMAYACHI, Y. 1985. An empirical study of a model for program error prediction. IEEE, 330-336.]]Google Scholar
- THAYER, R., LIPOW, M., AND NELSON, E. 1978. Software Reliability. North-Holland.]]Google Scholar
- TSAI, W. T., VOLOVIK, D., AND KEEFE, T. F. 1990. Automated test case generation for programs specified by relational algebra queries. IEEE Trans. Softw. Eng. 16, 3 (March), 316-324.]] Google ScholarDigital Library
- TSOUKALAS, M. Z., DURAN, J. W., AND NTAFOS, S.C. 1993. On some reliability estimation problems in random and partition testing. IEEE Trans. Softw. Eng. 19, 7 (July), 687-697.]] Google ScholarDigital Library
- URAL, H. AND YANG, B. 1988. A structural test selection criterion. Inf. Process. Lett. 28, 3 (July), 157-163.]] Google ScholarDigital Library
- URAL, H. AND YANG, B. 1993. Modeling software for accurate data flow representation. In Proceedings of 15th International Conference on Software Engineering (May), 277-286.]] Google ScholarDigital Library
- VALIANT, L.C. 1984. A theory of the learnable. Commun. ACM 27, 11, 1134-1142.]] Google ScholarDigital Library
- VOAS, J., MORRELL, L., AND MILLER, K. 1991. Predicting where faults can hide from testing. IEEE Softw. (March), 41-48.
- WEISER, M. D., GANNON, J. D., AND MCMULLIN, P. R. 1985. Comparison of structural test coverage metrics. IEEE Softw. (March), 80-85.
- WEISS, S. N. AND WEYUKER, E. J. 1988. An extended domain-based model of software reliability. IEEE Trans. Softw. Eng. SE-14, 10 (Oct.), 1512-1524.
- WEYUKER, E. J. 1979a. The applicability of program schema results to programs. Int. J. Comput. Inf. Sci. 8, 5, 387-403.
- WEYUKER, E. J. 1979b. Translatability and decidability questions for restricted classes of program schema. SIAM J. Comput. 8, 5, 587-598.
- WEYUKER, E. J. 1982. On testing non-testable programs. Comput. J. 25, 4, 465-470.
- WEYUKER, E. J. 1983. Assessing test data adequacy through program inference. ACM Trans. Program. Lang. Syst. 5, 4 (Oct.), 641-655.
- WEYUKER, E. J. 1986. Axiomatizing software test data adequacy. IEEE Trans. Softw. Eng. SE-12, 12 (Dec.), 1128-1138.
- WEYUKER, E. J. 1988a. The evaluation of program-based software test data adequacy criteria. Commun. ACM 31, 6 (June), 668-675.
- WEYUKER, E. J. 1988b. Evaluating software complexity measures. IEEE Trans. Softw. Eng. SE-14, 9 (Sept.), 1357-1365.
- WEYUKER, E. J. 1988c. An empirical study of the complexity of data flow testing. In Proceedings of the Second SIGSOFT Symposium on Software Testing, Analysis, and Verification (July), 188-195.
- WEYUKER, E. J. 1993. More experience with data-flow testing. IEEE Trans. Softw. Eng. 19, 9, 912-919.
- WEYUKER, E. J. AND DAVIS, M. 1983. A formal notion of program-based test data adequacy. Inf. Control 56, 52-71.
- WEYUKER, E. J. AND JENG, B. 1991. Analyzing partition testing strategies. IEEE Trans. Softw. Eng. 17, 7 (July), 703-711.
- WEYUKER, E. J. AND OSTRAND, T. J. 1980. Theories of program testing and the application of revealing sub-domains. IEEE Trans. Softw. Eng. SE-6, 3 (May), 236-246.
- WHITE, L. J. 1981. Basic mathematical definitions and results in testing. In Computer Program Testing, B. Chandrasekaran and S. Radicchi, Eds., North-Holland, 13-24.
- WHITE, L. J. AND COHEN, E. I. 1980. A domain strategy for computer program testing. IEEE Trans. Softw. Eng. SE-6, 3 (May), 247-257.
- WHITE, L. J. AND WISZNIEWSKI, B. 1991. Path testing of computer programs with loops using a tool for simple loop patterns. Softw. Pract. Exper. 21, 10 (Oct.).
- WICHMANN, B. A. 1993. Why are there no measurement standards for software testing? Comput. Stand. Interfaces 15, 4, 361-364.
- WICHMANN, B. A. AND COX, M. G. 1992. Problems and strategies for software component testing standards. J. Softw. Test. Ver. Rel. 2, 167-185.
- WILD, C., ZEIL, S., CHEN, J., AND FENG, G. 1992. Employing accumulated knowledge to refine test cases. J. Softw. Test. Ver. Rel. 2, 2 (July), 53-68.
- WISZNIEWSKI, B. W. 1985. Can domain testing overcome loop analysis? IEEE, 304-309.
- WOODWARD, M. R. 1991. Concerning ordered mutation testing of relational operators. J. Softw. Test. Ver. Rel. 1, 3 (Dec.), 35-40.
- WOODWARD, M. R. 1993. Errors in algebraic specifications and an experimental mutation testing tool. Softw. Eng. J. (July), 211-224.
- WOODWARD, M. R. AND HALEWOOD, K. 1988. From weak to strong--dead or alive? An analysis of some mutation testing issues. In Proceedings of the Second Workshop on Software Testing, Verification and Analysis (July), 152-158.
- WOODWARD, M. R., HEDLEY, D., AND HENNEL, M. A. 1980. Experience with path analysis and testing of programs. IEEE Trans. Softw. Eng. SE-6, 5 (May), 278-286.
- WOODWARD, M. R., HENNEL, M. A., AND HEDLEY, D. 1980. A limited mutation approach to program testing. Tech. Rep., Dept. of Computational Science, University of Liverpool.
- YOUNG, M. AND TAYLOR, R. N. 1988. Combining static concurrency analysis with symbolic execution. IEEE Trans. Softw. Eng. SE-14, 10 (Oct.), 1499-1511.
- ZEIL, S. J. 1983. Testing for perturbations of program statements. IEEE Trans. Softw. Eng. SE-9, 3 (May), 335-346.
- ZEIL, S. J. 1984. Perturbation testing for computation errors. In Proceedings of the Seventh International Conference on Software Engineering (Orlando, FL), 257-265.
- ZEIL, S. J. 1989. Perturbation techniques for detecting domain errors. IEEE Trans. Softw. Eng. 15, 6 (June), 737-746.
- ZEIL, S. J., AFIFI, F. H., AND WHITE, L. J. 1992. Detection of linear errors via domain testing. ACM Trans. Softw. Eng. Methodol. 1, 4 (Oct.), 422-451.
- ZHU, H. 1995a. Axiomatic assessment of control flow based software test adequacy criteria. Softw. Eng. J. (Sept.), 194-204.
- ZHU, H. 1995b. An induction theory of software testing. Sci. China 38 (Supp.) (Sept.), 58-72.
- ZHU, H. 1996a. A formal analysis of the subsume relation between software test adequacy criteria. IEEE Trans. Softw. Eng. 22, 4 (April), 248-255.
- ZHU, H. 1996b. A formal interpretation of software testing as inductive inference. J. Softw. Test. Ver. Rel. 6 (July), 3-31.
- ZHU, H. AND HALL, P. A. V. 1992a. Test data adequacy with respect to specifications and related properties. Tech. Rep. 92/06, Department of Computing, The Open University, UK, Jan.
- ZHU, H. AND HALL, P. A. V. 1992b. Testability of programs: Properties of programs related to test data adequacy criteria. Tech. Rep. 92/05, Department of Computing, The Open University, UK, Jan.
- ZHU, H. AND HALL, P. A. V. 1993. Test data adequacy measurements. Softw. Eng. J. 8, 1 (Jan.), 21-30.
- ZHU, H., HALL, P. A. V., AND MAY, J. 1992. Inductive inference and software testing. J. Softw. Test. Ver. Rel. 2, 2 (July), 69-82.
- ZHU, H., HALL, P. A. V., AND MAY, J. 1994. Understanding software test adequacy--an axiomatic and measurement approach. In Mathematics of Dependable Systems, Proceedings of the IMA First Conference (Sept., London), Oxford University Press, Oxford.
- ZWEBEN, S. H. AND GOURLAY, J. S. 1989. On the adequacy of Weyuker's test data adequacy axioms. IEEE Trans. Softw. Eng. SE-15, 4 (April), 496-501.