
Pursuing failure: the distribution of program failures in a profile space

Published: 01 September 2001
DOI: 10.1145/503209.503243

ABSTRACT

Observation-based testing calls for analyzing profiles of executions induced by potential test cases, in order to select a subset of executions to be checked for conformance to requirements. A family of techniques for selecting such a subset is evaluated experimentally. These techniques employ automatic cluster analysis to partition executions, and they use various sampling techniques to select executions from clusters. The experimental results support the hypothesis that with appropriate profiling, failures often have unusual profiles that are revealed by cluster analysis. The results also suggest that failures often form small clusters or chains in sparsely-populated areas of the profile space. A form of adaptive sampling called failure-pursuit sampling is proposed for revealing failures in such regions, and this sampling method is evaluated experimentally. The results suggest that failure-pursuit sampling is effective.
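As a rough illustration of the approach the abstract describes, the sketch below shows how cluster-based selection of executions followed by failure-pursuit sampling might be implemented. It is only a sketch based on the abstract, not the authors' procedure: the choice of k-means clustering and k-nearest-neighbor lookup, and the names failure_pursuit_sample and check_for_failure, are assumptions introduced here for illustration.

# Illustrative sketch only (not the authors' implementation): one execution is drawn
# from each cluster of profiles, and the nearest neighbors of any failing execution
# are then checked as well ("failure pursuit").
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def failure_pursuit_sample(profiles, check_for_failure, n_clusters=20, n_neighbors=5):
    """profiles: (n_executions, n_features) array of execution profiles.
    check_for_failure(i): hypothetical oracle reporting whether execution i failed."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(profiles)
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(profiles)
    rng = np.random.default_rng(0)

    # Initial sample: one execution chosen at random from each cluster.
    to_check = [int(rng.choice(np.flatnonzero(labels == c))) for c in range(n_clusters)]

    checked, failures = set(), set()
    while to_check:
        i = to_check.pop()
        if i in checked:
            continue
        checked.add(i)
        if check_for_failure(i):
            failures.add(i)
            # Pursue the failure: queue its nearest neighbors in the profile space,
            # since nearby executions may form a small cluster or chain of failures.
            _, idx = nn.kneighbors(profiles[i].reshape(1, -1))
            to_check.extend(int(j) for j in idx[0] if int(j) not in checked)
    return failures, checked

The adaptive step mirrors the abstract's observation that failures often form small clusters or chains in sparsely-populated regions of the profile space: once one failure is found, its neighbors are promising places to look next.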


Published in

ESEC/FSE-9: Proceedings of the 8th European Software Engineering Conference held jointly with the 9th ACM SIGSOFT International Symposium on Foundations of Software Engineering, September 2001, 329 pages.
ISBN: 1581133901
DOI: 10.1145/503209
Conference Chairs: A. Min Tjoa, Volker Gruhn

Also appeared in ACM SIGSOFT Software Engineering Notes, Volume 26, Issue 5 (September 2001).
ISSN: 0163-5948
DOI: 10.1145/503271

      Copyright © 2001 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
