Research article · ISSTA Conference Proceedings · DOI: 10.1145/2483760.2483764

Variable and thread bounding for systematic testing of multithreaded programs

Published: 15 July 2013

ABSTRACT

Previous approaches to systematic state-space exploration of multithreaded programs have proposed context bounding and depth bounding as effective heuristics for ranking thread schedules. This paper proposes two new metrics for ranking thread schedules during systematic state-space exploration. Our metrics characterize a concurrency bug by v (the minimum number of distinct variables that must be involved for the bug to manifest) and t (the minimum number of distinct threads among which scheduling constraints are required to manifest the bug). Our algorithm rests on the hypothesis that, in practice, most concurrency bugs have low v (typically 1-2) and low t (typically 2-4). We iteratively explore the search space of schedules in increasing order of v and t. We show qualitatively and empirically that our algorithm finds common bugs in fewer execution runs than previous approaches. We also show that using v and t improves the lower bounds on the probability of finding bugs through randomized algorithms. Systematic exploration of schedules requires instrumenting every variable access made by a program, which can be very expensive and severely limits the applicability of this approach. Previous work has avoided this problem by interposing only on synchronization operations (and ignoring other variable accesses). We demonstrate that by using variable bounding (v) and a static imprecise alias analysis, we can interpose on all variable accesses (and not just synchronization operations) with 10-100x less overhead than previous approaches.
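The iterative exploration described above can be illustrated with a minimal sketch. This is not the authors' implementation: it models a toy program (two threads each performing a non-atomic read-then-write increment of a shared variable `x`, the classic lost-update bug, for which v = 1 and t = 2), enumerates all interleavings, and explores them in increasing order of the (v, t) bounds computed per schedule. The `THREADS`, `bounds`, and `find_bug` names and the particular definition of a schedule's v (variables accessed at preemption points) are illustrative assumptions.

```python
from itertools import permutations

# Toy program: two threads, each a non-atomic increment of shared 'x'
# (read then write). The lost-update bug needs v = 1, t = 2.
THREADS = {
    1: [("read", "x"), ("write", "x")],
    2: [("read", "x"), ("write", "x")],
}

def schedules():
    """Yield every distinct interleaving as a tuple of thread ids."""
    slots = [tid for tid, ops in THREADS.items() for _ in ops]
    seen = set()
    for perm in permutations(slots):
        if perm not in seen:
            seen.add(perm)
            yield perm

def run(schedule):
    """Simulate one schedule; return the final value of x (2 is correct)."""
    shared = {"x": 0}
    local = {tid: None for tid in THREADS}
    pc = {tid: 0 for tid in THREADS}
    for tid in schedule:
        op, var = THREADS[tid][pc[tid]]
        if op == "read":
            local[tid] = shared[var]
        else:  # write back the incremented local copy (non-atomic increment)
            shared[var] = local[tid] + 1
        pc[tid] += 1
    return shared["x"]

def bounds(schedule):
    """Illustrative (v, t) of a schedule: distinct variables accessed at
    preemption points, and distinct threads scheduled."""
    t = len(set(schedule))
    preempted_vars = set()
    pc = {tid: 0 for tid in THREADS}
    for i, tid in enumerate(schedule):
        _, var = THREADS[tid][pc[tid]]
        pc[tid] += 1
        # preemption: this thread still has steps left, but the next
        # scheduled step belongs to a different thread
        if pc[tid] < len(THREADS[tid]) and i + 1 < len(schedule) \
                and schedule[i + 1] != tid:
            preempted_vars.add(var)
    return len(preempted_vars), t

def find_bug(max_v, max_t):
    """Iterative (v, t) bounding: explore low-(v, t) schedules first."""
    tried = 0
    for v in range(0, max_v + 1):
        for t in range(1, max_t + 1):
            for s in schedules():
                if bounds(s) != (v, t):
                    continue
                tried += 1
                if run(s) != 2:  # lost update detected
                    return s, tried
    return None, tried
```

On this toy program, `find_bug(2, 2)` reports the buggy interleaving `(1, 2, 1, 2)` after exploring only the two preemption-free (v = 0) schedules and one v = 1 schedule, matching the intuition that bugs with low v and t surface early under this ordering.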

