DOI: 10.1145/2384616.2384651

Maple: a coverage-driven testing tool for multithreaded programs

Published: 19 October 2012

ABSTRACT

Testing multithreaded programs is hard because it is challenging to expose the rare interleavings that can trigger a concurrency bug. We propose Maple, a coverage-driven testing tool that seeks to expose as many untested thread interleavings as possible. It memoizes tested interleavings and, for a given test input, actively tries to expose untested ones so as to increase interleaving coverage. We discuss several solutions to realize this goal. First, we present a coverage metric based on a set of interleaving idioms. Second, we describe an online technique to predict untested interleavings that can potentially be exposed for a given test input. Finally, we expose the predicted untested interleavings by actively controlling the thread schedule while executing the program on that test input. We discuss our experiences in using the tool to expose several known and unknown bugs in real-world applications such as Apache and MySQL.
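The loop the abstract outlines (memoize covered interleavings, predict untested ones, then steer the schedule to expose them) can be illustrated with a small sketch. The model below is hypothetical and greatly simplified, not Maple's implementation: an "idiom instance" is reduced to an ordered pair of accesses to the same variable from two different threads, the program under test is a pair of fixed per-thread access sequences, and active scheduling is replaced by exhaustive search over interleavings.

import random

# Hypothetical two-thread program model: each thread is a fixed sequence of
# (variable, operation) accesses. This stands in for the instrumented
# program a real tool would observe.
THREAD_ACCESSES = {
    "T1": [("x", "write"), ("y", "read")],
    "T2": [("x", "read"), ("y", "write")],
}

def interleavings(a, b):
    """Enumerate every merge of the two threads' access sequences,
    tagging each access with its thread id."""
    if not a:
        yield [("T2", acc) for acc in b]
        return
    if not b:
        yield [("T1", acc) for acc in a]
        return
    for rest in interleavings(a[1:], b):
        yield [("T1", a[0])] + rest
    for rest in interleavings(a, b[1:]):
        yield [("T2", b[0])] + rest

def idiom_instances(schedule):
    """Simplified idiom instances: ordered pairs of accesses to the same
    variable by different threads, in the order the schedule runs them."""
    found = set()
    for i, (ti, acci) in enumerate(schedule):
        for tj, accj in schedule[i + 1:]:
            if ti != tj and acci[0] == accj[0]:
                found.add(((ti, acci), (tj, accj)))
    return found

def predict_untested(tested):
    """Prediction step (simplified): any instance reachable in some
    interleaving that has not yet been memoized as covered."""
    reachable = set()
    for sched in interleavings(THREAD_ACCESSES["T1"], THREAD_ACCESSES["T2"]):
        reachable |= idiom_instances(sched)
    return reachable - tested

def expose(target):
    """Active-testing step (simplified): search for a schedule realizing
    the target instance; a real tool steers the live scheduler instead."""
    for sched in interleavings(THREAD_ACCESSES["T1"], THREAD_ACCESSES["T2"]):
        if target in idiom_instances(sched):
            return sched
    return None

# Coverage-driven loop: profile one random run, memoize its coverage,
# then actively chase each predicted-but-untested instance.
tested = idiom_instances(
    random.choice(list(interleavings(THREAD_ACCESSES["T1"],
                                     THREAD_ACCESSES["T2"]))))
for target in sorted(predict_untested(tested), key=repr):
    schedule = expose(target)
    if schedule is not None:
        tested |= idiom_instances(schedule)  # memoize new coverage
print(f"covered {len(tested)} simplified idiom instances")

Across runs the memoized set persists, so later test executions only chase interleavings that no earlier run has covered; that persistence is what distinguishes the coverage-driven approach from repeated random stress testing.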

Published in

OOPSLA '12: Proceedings of the ACM International Conference on Object-Oriented Programming, Systems, Languages, and Applications
October 2012, 1052 pages
ISBN: 9781450315616
DOI: 10.1145/2384616

Also published in ACM SIGPLAN Notices, Volume 47, Issue 10 (OOPSLA '12)
October 2012, 1011 pages
ISSN: 0362-1340
EISSN: 1558-1160
DOI: 10.1145/2398857

Copyright © 2012 ACM

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 19 October 2012


      Qualifiers

      • research-article

      Acceptance Rates

Overall Acceptance Rate: 268 of 1,244 submissions, 22%
