DOI: 10.1145/3238147.3238195 · ASE Conference Proceedings · Research Article · Artifacts Available

Domain-independent multi-threaded software model checking

Published: 03 September 2018

ABSTRACT

Recent software development targets massively parallel execution, driven by the trend toward more processing units per CPU socket. However, many approaches for program analysis are not designed to benefit from multi-threaded execution and lack support for utilizing multi-core computers. Rewriting existing algorithms is difficult and error-prone, and the design of new parallel algorithms also has limitations. An orthogonal problem is granularity: computing each successor state in parallel seems too fine-grained, so the open question is to find the right structural level for parallel execution. We propose an elegant solution to these problems: block summaries should be computed in parallel. Many successful approaches to software verification are based on summaries of control-flow blocks, large blocks, or function bodies. Block-abstraction memoization is a successful domain-independent approach for summary-based program analysis. We redesigned the verification approach of block-abstraction memoization, starting from its original recursive definition, such that it can run in parallel and utilize the available computation resources without losing its advantage of being independent of a particular abstract domain. We present an implementation of our new approach for multi-core shared-memory machines. The experimental evaluation shows that our summary-based approach has no significant overhead compared to the existing sequential approach, and that it achieves a significant speedup when using multi-threading.
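The core idea of the abstract can be illustrated with a minimal sketch: block summaries are memoized in a shared, thread-safe cache, and independent blocks are analyzed concurrently by worker threads. This is a hypothetical illustration, not the authors' implementation; the block names, the `SummaryCache` class, and the trivial `analyze_block` stand-in are all assumptions introduced for the example.

```python
# Sketch of parallel block-summary memoization: each block's summary is
# computed at most once per cache entry, stored in a thread-safe cache,
# and independent blocks are processed by a pool of worker threads.
from concurrent.futures import ThreadPoolExecutor
import threading

class SummaryCache:
    """Thread-safe memoization of block summaries (block id -> summary)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._cache = {}

    def get_or_compute(self, block_id, compute):
        # Fast path: reuse an existing summary.
        with self._lock:
            if block_id in self._cache:
                return self._cache[block_id]
        # Compute outside the lock so other blocks proceed in parallel;
        # setdefault keeps exactly one summary if two threads race.
        summary = compute(block_id)
        with self._lock:
            return self._cache.setdefault(block_id, summary)

def analyze_block(block_id):
    """Stand-in for analyzing one control-flow block in some abstract domain."""
    return f"summary({block_id})"

cache = SummaryCache()
blocks = ["main", "f", "g", "f"]  # "f" occurs twice: the repeat hits the cache

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda b: cache.get_or_compute(b, analyze_block), blocks))
```

Because the cache is keyed by block (not by analysis path), repeated occurrences of a block reuse the memoized summary, which is the property that makes the approach independent of the particular abstract domain used for `analyze_block`.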

