
2008 | Book

Model Checking Software

15th International SPIN Workshop, Los Angeles, CA, USA, August 10-12, 2008 Proceedings

Edited by: Klaus Havelund, Rupak Majumdar, Jens Palsberg

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 15th International SPIN workshop on Model Checking Software, SPIN 2008, held in Los Angeles, CA, USA, in August 2008. The 17 revised full papers presented together with 1 tool paper and 4 invited talks were carefully reviewed and selected from 41 submissions. The main focus of the workshop series is software systems, including models and programs. The papers cover theoretical and algorithmic foundations as well as tools for software model checking, and foster interactions and exchanges of ideas with related areas in software engineering, such as static analysis, dynamic analysis, and testing.

Table of Contents

Frontmatter

Invited Contributions

Residual Checking of Safety Properties
Abstract
Program analysis and verification techniques have made great strides, yet, as every researcher in the field will admit, it is easy to find a program and property for which a given technique is not cost-effective. Investigating the conventional wisdom that programs are mostly correct, we have observed that even failed program analyses usually produce a wealth of information about the parts of the program that operate correctly. Leveraging this information can help focus subsequent analysis and verification activities to make them more cost-effective.
Matthew B. Dwyer, Rahul Purandare
The Case for Context-Bounded Verification of Concurrent Programs
Abstract
Concurrent programs are difficult to get right. Subtle interactions among communicating threads in the program can result in behaviors unexpected to the programmer. These behaviors typically result in bugs that occur late in the software development cycle or even after the software is released. Such bugs are difficult to reproduce and difficult to debug. As a result, they have a huge adverse impact on the productivity of programmers and the cost of software development. Therefore, tools that can help detect and debug concurrency errors will likely provide a significant boost to software productivity.
Shaz Qadeer
Combining Static and Dynamic Reasoning for the Discovery of Program Properties
Abstract
Combinations of static and dynamic analyses can be profitably employed in tasks such as program understanding, bug detection, behavior discovery, etc. In the past several years, we have explored a particular scheme for improving the quality of bug reports in a sequence of tools: JCrasher, Check ’n’ Crash, and DSD-Crasher. We have additionally explored the combination of dynamic and symbolic execution for the purpose of inferring program invariants in the DySy tool. In this talk, we discuss such approaches, while distinguishing the conceptual benefits and drawbacks of each approach from the abilities and shortcomings of the current representative tools.
Yannis Smaragdakis
Using Dynamic Symbolic Execution to Improve Deductive Verification
Abstract
One of the most challenging problems in deductive program verification is to find inductive program invariants typically expressed using quantifiers. With strong-enough invariants, existing provers can often prove that a program satisfies its specification. However, provers by themselves do not find such invariants. We propose to automatically generate executable test cases from failed proof attempts using dynamic symbolic execution by exploring program code as well as contracts with quantifiers. A developer can analyze the test cases with a traditional debugger to determine the cause of the error; the developer may then correct the program or the contracts and repeat the process.
Dries Vanoverberghe, Nikolaj Bjørner, Jonathan de Halleux, Wolfram Schulte, Nikolai Tillmann

Regular Papers

Automated Evaluation of Secure Route Discovery in MANET Protocols
Abstract
Evaluation techniques to analyze security properties in ad hoc routing protocols generally rely on manual, non-exhaustive approaches. Non-exhaustive analysis techniques may conclude a protocol is secure, while in reality the protocol may contain an unapparent or subtle flaw. Using formalized exhaustive evaluation techniques to analyze security properties increases protocol confidence. In this paper, we offer an automated evaluation process to analyze security properties in the route discovery phase for on-demand source routing protocols. Using our automated security evaluation process, we are able to produce and analyze all topologies for a given network size. The individual network topologies are fed into the SPIN model checker to exhaustively evaluate protocol abstractions against an attacker attempting to corrupt the route discovery process.
Todd R. Andel, Alec Yasinsac
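To make the exhaustive topology-generation step concrete, the following minimal Python sketch (not the authors' tool; the function name and the SPIN hand-off in the final comment are illustrative assumptions) enumerates every undirected topology over a fixed number of nodes:

    from itertools import combinations, product

    def all_topologies(n):
        """Yield every undirected topology over n nodes as a set of edges.

        Each of the C(n, 2) node pairs is either linked or not, giving
        2**C(n, 2) topologies in total -- the space an exhaustive
        evaluation must cover for a fixed network size.
        """
        pairs = list(combinations(range(n), 2))
        for mask in product([False, True], repeat=len(pairs)):
            yield {pair for pair, present in zip(pairs, mask) if present}

    if __name__ == "__main__":
        print(len(list(all_topologies(3))))   # 2**3 = 8 topologies for 3 nodes
        # Each topology would then be rendered as a Promela model and fed to
        # SPIN together with an attacker process (hypothetical pipeline).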
Model Checking Abstract Components within Concrete Software Environments
Abstract
In order to model check a software component that is not a standalone program, we need a model of the software that completes the program. This is typically done by abstracting the surrounding software and the environment in which the entire system will be executed. However, abstracting the surrounding software is difficult when it is a large, complex artifact. In this paper, we take a new approach to the problem by abstracting the software component under test and leaving the surrounding software concrete. We compare three abstraction schemes, bitstate hashing and two schemes based on predicate abstraction, which can be used to abstract the components. We show how to generate the mixed abstract-concrete model automatically from a C program and verify the model using the SPIN model checker. We give verification results for three C programs, each consisting of hundreds or thousands of lines of code with pointers, data structures, and calls to library functions. Compared to the predicate abstraction schemes, bitstate hashing was uniformly more efficient in both error discovery and exhaustive state enumeration. The component abstraction results in faster error discovery than normal code execution when pruning during state enumeration avoids repeated execution of instructions on the same data.
Tonglaga Bao, Mike Jones
Generating Compact MTBDD-Representations from Probmela Specifications
Abstract
The purpose of the paper is to provide an automatic transformation of parallel programs of an imperative probabilistic guarded command language (called Probmela) into probabilistic reactive module specifications. The latter serve as the basis for the input language of the symbolic MTBDD-based probabilistic model checker PRISM, while Probmela is the modeling language of the model checker LiQuor, which relies on an enumerative approach and supports partial order reduction and other reduction techniques. By providing the link between the model checkers PRISM and LiQuor, our translation supports comparative studies of different verification paradigms and can serve to use the (more comfortable) guarded command language for an MTBDD-based quantitative analysis. The challenges were (1) ensuring that the translation preserves the Markov decision process semantics, (2) the efficiency of the translation, and (3) the compactness of the symbolic BDD-representation of the generated PRISM-language specifications.
Frank Ciesinski, Christel Baier, Marcus Größer, David Parker
Dynamic Delayed Duplicate Detection for External Memory Model Checking
Abstract
Duplicate detection is an expensive operation in disk-based model checkers. It consists of comparing some potentially new states, the candidate states, to previously visited states. We propose a new approach to this technique, called dynamic delayed duplicate detection, which exploits typical properties of state spaces and adapts itself to the structure of the state space to decide dynamically when duplicate detection must be conducted. We implemented this method in a new algorithm and found that it greatly cuts down the cost of duplicate detection. On some classes of models, it performs significantly better than some previously published algorithms.
Sami Evangelista
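As a point of reference for the delayed-detection idea, here is a simplified Python sketch (all names are illustrative assumptions; the paper's contribution is precisely to replace the fixed, size-based trigger below with a dynamic one that adapts to the state-space structure):

    def delayed_search(initial, successors, buffer_limit=10_000):
        """Graph search with delayed duplicate detection (simplified sketch).

        Candidate states are accumulated in a buffer and only compared
        against the visited set (in a real disk-based checker: a file of
        visited states) once the buffer is full.
        """
        visited = {initial}          # stands in for the disk-resident visited set
        frontier = [initial]         # states waiting to be expanded
        candidates = []              # potentially new states, not yet checked

        def flush():
            # duplicate detection: compare the whole buffer to the visited set
            for s in candidates:
                if s not in visited:
                    visited.add(s)
                    frontier.append(s)
            candidates.clear()

        while frontier or candidates:
            if not frontier:
                flush()
                continue
            candidates.extend(successors(frontier.pop()))
            if len(candidates) >= buffer_limit:
                flush()
        return visited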
State Focusing: Lazy Abstraction for the Mu-Calculus
Abstract
A key technique for the verification of programs is counterexample-guided abstraction refinement (CEGAR). In a previous approach, we developed a CEGAR-based algorithm for the modal μ-calculus, where refinement applies only locally, i.e., lazy abstraction techniques are used. Unfortunately, our previous algorithm was not completely lazy and had some further drawbacks, such as a possible local state explosion. In this paper, we present an improved algorithm that retains all the advantages of our previous algorithm but eliminates all its drawbacks. The improvements were made possible by changing the philosophy of refinement from state splitting to state focusing, where the states that are about to be split are not removed.
Harald Fecher, Sharon Shoham
Efficient Modeling of Concurrent Systems in BMC
Abstract
We present an efficient method for modeling multi-threaded concurrent systems with shared variables and locks in Bounded Model Checking (BMC), and use it to improve the detection of safety properties such as data races. Previous approaches based on synchronous modeling of interleaving semantics do not scale up well due to the inherent asynchronism in those models. Instead, in our approach, we first create independent (uncoupled) models for each individual thread in the system, then explicitly add additional synchronization variables and constraints, incrementally, and only where such synchronization is needed to guarantee the (chosen) concurrency semantics (based on sequential consistency). We describe our modeling in detail and report verification results to demonstrate the efficacy of our approach on a complex case study.
Malay K. Ganai, Aarti Gupta
Tackling Large Verification Problems with the Swarm Tool
Abstract
The range of verification problems that can be solved with logic model checking tools has increased significantly in the last few decades. This increase in capability is based on algorithmic advances, but in no small measure it is also made possible by increases in processing speed and main memory sizes on standard desktop systems. For the time being, though, the increase in CPU speeds has mostly ended as chip-makers are redirecting their efforts to the development of multi-core systems. In the coming years we can expect systems with very large memory sizes, and increasing numbers of CPU cores, but with each core running at a relatively low speed. We will discuss the implications of this important trend, and describe how we can leverage these developments with new tools.
Gerard J. Holzmann, Rajeev Joshi, Alex Groce
Formal Verification of a Flash Memory Device Driver – An Experience Report
Abstract
Flash memory has become virtually indispensable in most mobile devices. In order for mobile devices to operate successfully, it is essential that flash memory be controlled correctly through the device driver software. However, as is typical for embedded software, conventional testing methods often fail to detect hidden flaws in the complex device driver software. This deficiency imposes significant development and operation overhead on the manufacturers.
In order to compensate for the weaknesses of conventional testing, we have applied NuSMV, Spin, and CBMC to verify the correctness of a multi-sector read operation of the Samsung OneNAND™ flash device driver and studied their relative strengths and weaknesses empirically. Through this project, we verified the correctness of the multi-sector read operation on a small scale. The results demonstrate the feasibility of using model checking techniques to verify the control algorithm of a device driver in an industrial setting.
Moonzoo Kim, Yunja Choi, Yunho Kim, Hotae Kim
Layered Duplicate Detection in External-Memory Model Checking
Abstract
This paper presents a disk-based explicit-state model checking algorithm that uses an approach called layered duplicate detection. In this approach, states encountered during a breadth-first traversal of the graph of the transition system are stored in memory according to the layer of the graph in which they are first encountered. With this layered organization of memory, transition locality is exploited by checking only the most recent layers for duplicates. In RAM, exploiting transition locality in this way saves time. In external memory, it saves space. In addition, a layered structure allows an easy method of counterexample reconstruction in disk-based model checking. We prove a worst-case linear bound on the redundant work performed by our approach. Experimental results indicate that the average-case redundant work is far below this worst-case bound. The implemented model checker has been used to verify a transition system that required more than 275 GB of disk storage.
Peter Lamborn, Eric A. Hansen
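The following Python sketch illustrates the layering idea for the undirected case, where a neighbour of a depth-d state can only lie at depth d-1, d, or d+1, so checking the previous layer, the current layer, and the layer under construction is exhaustive; the paper's algorithm handles the general case with the bounded redundant work mentioned above (all names here are illustrative assumptions):

    def layered_bfs_undirected(initial, neighbours):
        """Layered duplicate detection, sketched for the undirected case.

        A neighbour of a depth-d state lies at depth d-1, d, or d+1, so
        each candidate only needs to be checked against the previous
        layer, the current layer, and the layer under construction.
        Older layers can be flushed to disk and kept solely for
        counterexample reconstruction, which is where the space saving
        comes from.
        """
        previous, current = set(), {initial}
        layers = 0
        while current:
            nxt = set()
            for state in current:
                for succ in neighbours(state):
                    if succ not in previous and succ not in current and succ not in nxt:
                        nxt.add(succ)
            # here `previous` would be written out to disk, tagged with its
            # depth, so a counterexample can later be rebuilt layer by layer
            previous, current = current, nxt
            layers += 1
        return layers            # number of BFS layers explored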
Dependency Analysis for Control Flow Cycles in Reactive Communicating Processes
Abstract
The execution of a reactive system amounts to the repetitions of executions of control flow cycles in the component processes of the system. The way in which cycle executions are combined is not arbitrary since cycles may depend on or exclude one another. We believe that the information of such dependencies is important to the design, understanding, and verification of reactive systems. In this paper, we formally define the concept of a cycle dependency, and propose several static analysis methods to discover such dependencies. We have implemented several strategies for computing cycle dependencies and compared their performance with realistic models of considerable size. It is also shown how the detection of accurate dependencies is used to improve a livelock freedom analysis that we developed previously.
Stefan Leue, Alin Ştefănescu, Wei Wei
Improved On-the-Fly Equivalence Checking Using Boolean Equation Systems
Abstract
Equivalence checking is a classical verification method for ensuring the compatibility of a finite-state concurrent system (protocol) with its desired external behaviour (service) by comparing their underlying labeled transition systems (LTSs) modulo an appropriate equivalence relation. The local (or on-the-fly) approach for equivalence checking combats state explosion by exploring the synchronous product of the LTSs incrementally, thus allowing an efficient detection of errors in complex systems. However, when the two LTSs being compared are equivalent, the on-the-fly approach is outperformed by the global one, which completely builds the LTSs and computes the equivalence classes between states using partition refinement. In this paper, we consider the approach based on translating the on-the-fly equivalence checking problem in terms of the local resolution of a Boolean equation system (BES). We propose two enhancements of the approach in the case of equivalent LTSs: a new, faster encoding of equivalence relations in terms of BESs, and a new local BES resolution algorithm with a better average complexity. These enhancements were incorporated into the Bisimulator 2.0 equivalence checker of the CADP toolbox, and they led to significant performance improvements w.r.t. existing on-the-fly equivalence checking algorithms.
Radu Mateescu, Emilie Oudot
Resource-Aware Verification Using Randomized Exploration of Large State Spaces
Abstract
Exhaustive verification often suffers from the state-explosion problem, where the reachable state space is too large to fit in main memory. For this reason, and because of disk swapping, once the main memory is full, very little progress is made, and the process is not scalable. To alleviate this, partial verification methods have been proposed, some based on randomized exploration, mostly in the form of random walks. In this paper, we enhance partial, randomized state-space exploration methods with the concept of resource-awareness: the exploration algorithm is made aware of the limits on resources, in particular memory and time. We present a memory-aware algorithm that by design never stores more states than those that fit in main memory. We also propose criteria to compare this algorithm with other, similar algorithms. We study properties of such algorithms both theoretically on simple classes of state spaces and experimentally on some preliminary case studies.
Nazha Abed, Stavros Tripakis, Jean-Marc Vincent
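One simple way to picture such resource-awareness is sketched below in Python (illustrative only, not the paper's algorithm; `successors` and `is_error` are assumed callbacks): repeated random walks need only the current state, and the optional cache of seen states is capped so the memory budget is respected by construction.

    import random

    def random_walk_exploration(initial, successors, is_error,
                                walk_length=1_000, num_walks=10_000,
                                cache_limit=100_000, seed=0):
        """Memory-aware randomized exploration (illustrative sketch).

        Memory use is dominated by the optional `seen` cache, which is
        capped at `cache_limit`; once full, nothing more is recorded and
        the walk simply keeps going, trading time for memory.  Returns an
        error state if one is found within the time budget
        (num_walks * walk_length steps), otherwise None.
        """
        rng = random.Random(seed)
        seen = set()
        for _ in range(num_walks):
            state = initial
            for _ in range(walk_length):
                if state not in seen:
                    if is_error(state):
                        return state
                    if len(seen) < cache_limit:
                        seen.add(state)      # record only while memory remains
                succs = successors(state)    # assumed to return a list of states
                if not succs:
                    break                    # deadlock: restart from the initial state
                state = rng.choice(succs)
        return None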
Incremental Hashing for Spin
Abstract
This paper discusses a generalised incremental hashing scheme for explicit state model checkers. The hashing scheme has been implemented in the model checker Spin. The incremental hashing scheme works for Spin’s exhaustive and both approximate verification modes: bitstate hashing and hash compaction. An implementation is provided for 32-bit and 64-bit architectures.
We performed extensive experiments on the BEEM benchmarks to compare the incremental hash functions against Spin’s traditional hash functions. In almost all cases, incremental hashing is faster than traditional hashing. The amount of performance gain depends on several factors, though.
We conclude that incremental hashing performs best in Spin’s 64-bit bitstate hashing mode, on models with large state vectors, and with a verifier that is optimised by the C compiler.
Viet Yen Nguyen, Theo C. Ruys
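The idea behind incremental hashing can be illustrated with a textbook positional hash (a sketch with assumed constants M and B; Spin's actual hash functions differ): when one slot of the state vector changes, the hash is patched in constant time instead of being recomputed over the whole vector.

    M = (1 << 61) - 1            # large prime modulus (sketch choice)
    B = 1_000_003                # base for the positional coefficients

    class IncrementalHash:
        """Positional hash of a fixed-length state vector with O(1) updates.

        h = sum(vector[i] * B**i) mod M.  When one slot changes, the hash
        is adjusted by the weighted difference -- the idea behind
        incremental hashing in explicit state model checkers.
        """

        def __init__(self, vector):
            self.vector = list(vector)
            self.weights = [pow(B, i, M) for i in range(len(vector))]
            self.value = sum(v * w for v, w in zip(self.vector, self.weights)) % M

        def update(self, index, new_value):
            """Change slot `index` and fix up the hash in constant time."""
            old_value = self.vector[index]
            self.value = (self.value + (new_value - old_value) * self.weights[index]) % M
            self.vector[index] = new_value
            return self.value

    # Mutating one slot gives the same hash as rehashing from scratch.
    h = IncrementalHash([0, 7, 42, 3])
    h.update(2, 43)
    assert h.value == IncrementalHash([0, 7, 43, 3]).value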
Verifying Compiler Based Refinement of Bluespec™ Specifications Using the SPIN Model Checker
Abstract
The underlying model of computation for PROMELA is based on interacting processes with asynchronous communication, and hence SPIN has been mainly used as a verification engine for concurrent software systems. On the other hand, hardware verification has mostly focused on clock-synchronous register-transfer level (RTL) models. As a result, verification tools such as SMV, which are based on synchronous state machine models, have been used more frequently for hardware verification. However, as levels of abstraction are being raised in hardware design and as high-level synthesis is being promoted for synthesizing RTL, hardware design verification problems are changing in nature. In this paper, we consider a specific high-level hardware description language, namely, Bluespec System Verilog (BSV). The programming model of BSV is based on concurrent guarded actions, which we also call Concurrent Action Oriented Specification (CAOS). High-level synthesis from BSV models has been shown to produce efficient RTL designs. Given the industry traction of BSV-based high-level synthesis and the associated design flow, we consider the following formal verification problems: (i) Given a BSV specification S of a hardware design, does it satisfy certain temporal properties? (ii) Given a BSV specification S, and an implementation R synthesized from S using a BSV-based synthesis tool, does R conform to the behaviors specified by S; that is, is R a refinement of S? (iii) Given a different implementation R′ synthesized from S using some other BSV-based synthesis tool, is R′ a refinement of R as well? In this paper, we show how the SPIN Model Checker can be used to solve these three problems related to the verification of BSV-based designs. Using a sample design, we illustrate how our approach can be used for verifying whether the designer intent in the BSV specification is accurately matched by its synthesized hardware implementation.
Gaurav Singh, Sandeep K. Shukla
Symbolic Context-Bounded Analysis of Multithreaded Java Programs
Abstract
The reachability problem is undecidable for programs with both recursive procedures and multiple threads with shared memory. Approaches to this problem have been the focus of much recent research. One of these is to use context-bounded reachability, i.e. to consider only those runs in which the active thread changes at most k times, where k is fixed. However, to the best of our knowledge, context-bounded reachability has not been implemented in any tool so far, primarily because its worst-case runtime is prohibitively high, i.e. O(n^k), where n is the size of the shared memory. Moreover, existing algorithms for context-bounded reachability do not admit a meaningful symbolic implementation (e.g., using BDDs) to reduce the run-time in practice. In this paper, we propose an improvement that overcomes this problem. We have implemented our approach in the tool jMoped and report on experiments.
Dejvuth Suwimonteerabuth, Javier Esparza, Stefan Schwoon
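The notion of context bounding is easy to make concrete: the sketch below (illustrative Python, not the paper's symbolic algorithm) enumerates only those thread schedules in which the active thread changes at most k times, which is what keeps the explored search space small for a fixed k.

    def bounded_schedules(steps_left, k, current=None, schedule=()):
        """Enumerate thread schedules with at most k context switches (sketch).

        `steps_left[t]` is how many steps thread t still has to execute; a
        schedule is a tuple of thread ids, and a context switch occurs
        whenever the id changes from one step to the next.
        """
        if all(n == 0 for n in steps_left):
            yield schedule
            return
        for t, n in enumerate(steps_left):
            if n == 0:
                continue
            cost = 0 if current is None or current == t else 1  # switching costs 1
            if cost > k:
                continue
            remaining = list(steps_left)
            remaining[t] -= 1
            yield from bounded_schedules(remaining, k - cost, t, schedule + (t,))

    # Two threads with 3 steps each: 20 interleavings in total, but only 2
    # schedules use at most one context switch.
    assert len(list(bounded_schedules([3, 3], k=6))) == 20
    assert len(list(bounded_schedules([3, 3], k=1))) == 2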
Efficient Stateful Dynamic Partial Order Reduction
Abstract
In applying stateless model checking methods to realistic multithreaded programs, we find that stateless search methods are ineffective in practice, even with dynamic partial order reduction (DPOR) enabled. To solve the inefficiency of stateless runtime model checking, this paper makes two related contributions. The first contribution is a novel and conservative light-weight method for storing abstract states at runtime to help avoid redundant searches. The second contribution is a stateful dynamic partial order reduction algorithm (SDPOR) that avoids a potential unsoundness when DPOR is naively applied in the context of stateful search. Our stateful runtime model checking approach combines light-weight state recording with SDPOR, and strikes a good balance between state recording overheads, on one hand, and the elimination of redundant searches, on the other hand. Our experiments confirm the effectiveness of our approach on several multithreaded benchmarks in C, including some practical programs.
Yu Yang, Xiaofang Chen, Ganesh Gopalakrishnan, Robert M. Kirby
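The state-recording half of this approach can be pictured as a search pruned by abstract fingerprints, as in the Python sketch below (illustrative names only; the soundness interaction with DPOR that SDPOR resolves is not modeled here):

    def stateful_search(initial, successors, abstract, check=lambda s: None):
        """Depth-first search pruned by abstract-state fingerprints (sketch).

        `abstract` maps a concrete state to a cheaper (possibly lossy)
        fingerprint; a state whose fingerprint has been seen before is not
        re-expanded, which avoids redundant searches.
        """
        seen = {abstract(initial)}
        stack = [initial]
        expanded = 0
        while stack:
            state = stack.pop()
            check(state)                     # e.g. assert a safety property
            expanded += 1
            for succ in successors(state):
                fingerprint = abstract(succ)
                if fingerprint not in seen:  # prune previously recorded states
                    seen.add(fingerprint)
                    stack.append(succ)
        return expanded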
Symbolic String Verification: An Automata-Based Approach
Abstract
We present an automata-based approach for the verification of string operations in PHP programs based on symbolic string analysis. String analysis is a static analysis technique that determines the values that a string expression can take during program execution at a given program point. This information can be used to verify that string values are sanitized properly and to detect programming errors and security vulnerabilities. In our string analysis approach, we encode the set of string values that string variables can take as automata. We implement all string functions using a symbolic automata representation (MBDD representation from the MONA automata package) and leverage efficient manipulations on MBDDs, e.g., determinization and minimization. Particularly, we propose a novel algorithm for language-based replacement. Our replacement function takes three DFAs as arguments and outputs a DFA. Finally, we apply a widening operator defined on automata to approximate fixpoint computations. If this conservative approximation does not include any bad patterns (specified as regular expressions), we conclude that the program does not contain any errors or vulnerabilities. Our experimental results demonstrate that our approach works quite well in checking the correctness of sanitization operations in real-world PHP applications.
Fang Yu, Tevfik Bultan, Marco Cova, Oscar H. Ibarra
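At the core of such an analysis is a language check between automata: the reachable string values (over-approximated by a DFA) must not intersect the bad patterns. The Python sketch below (a toy DFA representation, not MONA's MBDDs; all names are illustrative) shows that check as an on-the-fly product construction.

    from collections import deque

    class DFA:
        """A tiny DFA: states are hashable, `delta[(state, symbol)] -> state`."""

        def __init__(self, start, accepting, delta, alphabet):
            self.start, self.accepting = start, set(accepting)
            self.delta, self.alphabet = delta, set(alphabet)

        def step(self, state, symbol):
            return self.delta.get((state, symbol))   # None = dead state

    def intersection_nonempty(a, b):
        """Check whether the languages of `a` and `b` intersect.

        In the string-analysis setting, `a` over-approximates the values a
        string variable may take and `b` accepts the bad patterns; an
        empty intersection proves the bad patterns are unreachable.
        """
        start = (a.start, b.start)
        seen, queue = {start}, deque([start])
        while queue:
            pa, pb = queue.popleft()
            if pa in a.accepting and pb in b.accepting:
                return True                           # a concrete bad string exists
            for symbol in a.alphabet & b.alphabet:
                na, nb = a.step(pa, symbol), b.step(pb, symbol)
                if na is not None and nb is not None and (na, nb) not in seen:
                    seen.add((na, nb))
                    queue.append((na, nb))
        return False

    # Values after sanitisation: strings over {'a', '<'} with '<' removed,
    # i.e. a*; bad pattern: anything containing '<'.
    values = DFA(0, {0}, {(0, 'a'): 0}, {'a', '<'})
    bad = DFA(0, {1}, {(0, 'a'): 0, (0, '<'): 1, (1, 'a'): 1, (1, '<'): 1}, {'a', '<'})
    assert not intersection_nonempty(values, bad)     # sanitised values are safe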
Verifying Multi-threaded C Programs with SPIN
Abstract
A key challenge in model checking software is the difficulty of verifying properties of implementation code, as opposed to checking an abstract algorithmic description. We describe a tool for verifying multi-threaded C programs that uses the SPIN model checker. Our tool works by compiling a multi-threaded C program into a typed bytecode format, and then using a virtual machine that interprets the bytecode and computes new program states under the direction of SPIN. Our virtual machine is compatible with most of SPIN’s search options and optimization flags, such as bitstate hashing and multi-core checking. It provides support for dynamic memory allocation (the malloc and free family of functions), and for the pthread library, which provides primitives often used by multi-threaded C programs. A feature of our approach is that it can check code after compiler optimizations, which can sometimes introduce race conditions. We describe how our tool addresses the state space explosion problem by allowing users to define data abstraction functions and to constrain the number of allowed context switches. We also describe a reduction method that reduces context switches using dynamic knowledge computed on-the-fly, while being sound for both safety and liveness properties. Finally, we present initial experimental results with our tool on some small examples.
Anna Zaks, Rajeev Joshi
Backmatter
Metadata
Title
Model Checking Software
Edited by
Klaus Havelund
Rupak Majumdar
Jens Palsberg
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-85114-1
Print ISBN
978-3-540-85113-4
DOI
https://doi.org/10.1007/978-3-540-85114-1