2008 | Book

Computer Aided Verification

20th International Conference, CAV 2008, Princeton, NJ, USA, July 7-14, 2008, Proceedings

Edited by: Aarti Gupta, Sharad Malik

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this Book

This book constitutes the refereed proceedings of the 20th International Conference on Computer Aided Verification, CAV 2008, held in Princeton, NJ, USA, in July 2008. The 33 revised full papers, presented together with 14 tool papers, 2 invited papers, and 4 invited tutorials, were carefully reviewed and selected from 104 regular paper and 27 tool paper submissions. The papers are organized in topical sections on concurrency, memory consistency, abstraction/refinement, hybrid systems, dynamic verification, modeling and specification formalisms, decision procedures, program verification, program and shape analysis, security and program analysis, hardware verification, model checking, space-efficient algorithms, and model checking tools.

Table of Contents

Frontmatter

Invited Talks

Singularity: Designing Better Software (Invited Talk)

Five years ago, frustrated by the never-ending process of finding bugs that developers had cleverly hidden throughout our software, I started a new project with Galen Hunt to rethink what software might look like if it were written, from scratch, with the explicit intent of producing more robust and reliable software artifacts. The Singularity project [1] in Microsoft Research pursued several novel strategies to this end. It has successfully encouraged researchers and product groups to think beyond the straitjacket of time-tested software architectures, to consider new solutions that cross the bounds of academic disciplines such as programming languages, operating systems, and tools.

James R. Larus
Coping with Outside-the-Box Attacks

There is a long history of security attacks that succeed by violating the system designer’s assumptions about how things work. Even if a designer does everything right—within the “obvious” model—such attacks can still succeed. How can we, as designers and verifiers of systems, cope with these “outside-the-box” attacks?

The classic examples of assumption-violating attacks are the timing attacks on cryptosystems first introduced by Kocher [1]. Cryptosystems are designed so that an attacker who has black-box access to an implementation (and does not know the secret key) cannot deduce the key. Extensive mathematical analysis of the input-output behavior of cryptographic functions led to the belief (though unfortunately not proof) that an attacker who can observe the input-output behavior of cryptosystems cannot feasibly find the secret key. Kocher showed that even if this is true, the running time of common cryptographic algorithms does depend on the secret key. Though the dependence of running time on the key is complex, Kocher showed how to use randomized experiments to extract enough signal to deduce the key, at least in principle. Brumley and Boneh later showed that such attacks are practical, even across a network [2].
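As a toy illustration of such a timing channel (our example, not taken from the talk): a byte-wise comparison that exits at the first mismatch runs longer the more prefix bytes match, so an attacker who can time many guesses learns the secret incrementally.

```python
import hmac

def insecure_check(secret: bytes, guess: bytes) -> bool:
    # Early-exit comparison: running time grows with the length of
    # the matching prefix, leaking information about the secret.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def secure_check(secret: bytes, guess: bytes) -> bool:
    # Constant-time comparison from the standard library removes
    # the data-dependent branch that the attacker times.
    return hmac.compare_digest(secret, guess)
```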

Edward W. Felten

Invited Tutorials

Assertion-Based Verification: Industry Myths to Realities (Invited Tutorial)

Debugging, on average, has grown to consume more than 60% of today’s ASIC and SoC verification effort. Clearly, this is a topic the industry must address, and some organizations have done just that. Those that have adopted an assertion-based verification (ABV) methodology have seen a significant reduction in simulation debugging time (as much as 50% [1]) due to improved observability. Furthermore, organizations that have embraced an ABV methodology are able to take advantage of more advanced verification techniques, such as formal verification, thus improving their overall verification quality and results. Nonetheless, even with multiple published industry case studies from various early adopters—each touting the benefits of applying ABV—the industry as a whole has resisted adopting assertion-based techniques. This tutorial provides an industry survey of today’s ABV landscape, ranging from myths to realities. Emerging challenges and possible research opportunities are discussed. The following extended abstract provides a reference on which the tutorial builds.

Harry Foster
Theorem Proving for Verification (Invited Tutorial)

There are numerous verification techniques in active use. Traditional testing and simulation usually only provide a limited guarantee, since they can seldom exercise all possible situations. Methods based on abstraction consciously simplify the problem to make its complete analysis tractable, but still do not normally completely verify the ultimate target. We will confine ourselves here to full formal verification techniques that can be used to prove complete correctness of a (model of a) system with respect to a formal specification. Roughly speaking, these methods model the system and specification in a logical formalism and then apply general methods to determine whether the formal expressions are valid, indicating correctness of the model with respect to the specification. Typical formalisms include:

Propositional logic, a.k.a. Boolean algebra

Temporal logic (CTL, LTL etc.)

Quantifier-free combinations of first-order theories

Full first-order logic

Higher-order logic or first-order logic with arithmetic or set theory

This list is organized approximately in order of increasing logical generality, with formalisms later in the list often subsuming earlier ones. But there is a price to be paid for this generality: deciding validity in the formalisms becomes successively more difficult.
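Even for the simplest entry on the list, a decision procedure is easy to write but exponential; the sketch below (our illustration, not from the tutorial) decides propositional validity by brute-force enumeration of assignments.

```python
from itertools import product

def is_valid(formula, variables):
    """Decide propositional validity by enumerating all 2^n assignments."""
    return all(formula(dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

# Peirce's law ((p -> q) -> p) -> p is a classical tautology:
imp = lambda a, b: (not a) or b
peirce = lambda v: imp(imp(imp(v['p'], v['q']), v['p']), v['p'])
print(is_valid(peirce, ['p', 'q']))  # True
```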

John Harrison
Tutorial on Separation Logic (Invited Tutorial)

Separation logic is an extension of Hoare’s logic for reasoning about programs that manipulate pointers. Its assertion language extends classical logic with a separating conjunction operator A * B, which asserts that A and B hold for separate portions of memory.
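As a reminder of the standard semantics (our summary, in textbook notation rather than anything quoted from the tutorial): a state with store $s$ and heap $h$ satisfies $A \ast B$ exactly when the heap splits into two disjoint parts, one satisfying each conjunct: $s, h \models A \ast B \iff \exists h_1, h_2.\ h = h_1 \uplus h_2 \wedge s, h_1 \models A \wedge s, h_2 \models B$.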

In this tutorial I will first cover the basics of the logic, concentrating on highlights from the early work [1,2,3,4].

(i) The separating conjunction fits together with inductive definitions in a way that supports natural descriptions of mutable data structures [1].

(ii) Axiomatizations of pointer operations support in-place reasoning, where a portion of a formula is updated in place when passing from precondition to postcondition, mirroring the operational locality of heap update [1,2].

(iii) Notorious “dirty” features of low-level programming (pointer arithmetic, explicit deallocation) are dealt with smoothly, even embraced [2,3].

(iv) Frame axioms, which state what does not change, can be avoided when writing specifications [2,3].

These points together enable specifications and proofs of pointer programs that are dramatically simpler than was possible previously, in many cases approaching the simplicity associated with proofs of pure functional programs. I will describe how this works, and where the rough edges lie (programs whose proofs are still more complex than we would like).

Peter O’Hearn
Abstract Interpretation with Applications to Timing Validation (Invited Tutorial)

Abstract interpretation is one of the main verification technologies besides model checking and deductive verification.

Abstract interpretation has a rich theory of abstraction and strong support for the construction of abstract domains. It allows one to express a precise relation to the (concrete) semantics of the programming language, inducing a clear relation between the results of an abstract interpretation and the properties of the analyzed program. It permits trading efficiency against precision, and offers means to enforce termination where this is not otherwise guaranteed.

We explain abstract interpretation using examples from a particular application domain: the determination of bounds on the execution times of programs. These bounds are used to show reliably that hard real-time systems satisfy their timing constraints.

The application domain requires a number of static analyses and domains with different characteristics. Most domains exhibit Galois connections; a few do not. Some analyses require widening to leap over infinite ascending chains and ensure termination.
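A minimal sketch of these ideas (ours, not from the tutorial): the interval domain abstracts sets of integers by lower and upper bounds, and widening forces termination on the infinite ascending chain produced by a counting loop.

```python
NEG_INF, POS_INF = float('-inf'), float('inf')

def join(a, b):
    # Least upper bound of two intervals.
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    # Keep only stable bounds; jump unstable ones to infinity.
    return (a[0] if a[0] <= b[0] else NEG_INF,
            a[1] if a[1] >= b[1] else POS_INF)

# Abstractly executing "x = 0; while ...: x = x + 1":
x = (0, 0)
while True:
    body = (x[0] + 1, x[1] + 1)    # abstract effect of x = x + 1
    nxt = widen(x, join(x, body))  # widen at the loop head
    if nxt == x:
        break
    x = nxt
print(x)  # (0, inf): a sound bound, reached in finitely many steps
```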

Reinhard Wilhelm, Björn Wachter

Session 1: Concurrency

Reducing Concurrent Analysis Under a Context Bound to Sequential Analysis

This paper addresses the analysis of concurrent programs with shared memory. Such an analysis is undecidable in the presence of multiple procedures. One approach used in recent work obtains decidability by providing only a partial guarantee of correctness: the approach bounds the number of context switches allowed in the concurrent program, and aims to prove safety, or find bugs, under the given bound. In this paper, we show how to obtain simple and efficient algorithms for the analysis of concurrent programs with a context bound. We give a general reduction from a concurrent program $P$ and a given context bound $K$ to a sequential program $P_s^K$ such that the analysis of $P_s^K$ can be used to prove properties of $P$. We give instances of the reduction for common program models used in model checking, such as Boolean programs and pushdown systems.

Akash Lal, Thomas Reps
Monitoring Atomicity in Concurrent Programs

We study the problem of monitoring concurrent program runs for atomicity violations. Unearthing fundamental results behind scheduling algorithms in database concurrency control, we build space-efficient monitoring algorithms for checking atomicity that use space polynomial in the number of active threads and entities, and independent of the length of the run monitored. Second, by interpreting the monitoring algorithm as a finite automaton, we solve the model checking problem for atomicity of finite-state concurrent models. This establishes (for the first time) that model checking finite-state concurrent models for atomicity is decidable, and remedies incorrect proofs published in the literature. Finally, we exhibit experimental evidence that our atomicity monitoring algorithm gives substantial time and space benefits on benchmark applications.

Azadeh Farzan, P. Madhusudan
Dynamic Verification of MPI Programs with Reductions in Presence of Split Operations and Relaxed Orderings

Dynamic verification methods are the natural choice for debugging real world programs when model extraction and maintenance are expensive. Message passing programs written using the MPI library fall under this category. Partial order reduction can be very effective for MPI programs because for each process, all its local computational steps, as well as many of its MPI calls, commute with the corresponding steps of all other processes. However, when dependencies arise among MPI calls, they are often a function of the runtime state. While this suggests the use of dynamic partial order reduction (DPOR), three aspects of MPI make previous DPOR algorithms inapplicable: (i) many MPI calls are allowed to complete out of program order; (ii) MPI has global synchronization operations (e.g., barrier) that have a special weak semantics; and (iii) the runtime of MPI cannot, without intrusive modifications, be forced to pursue a specific interleaving because of MPI’s liberal message matching rules, especially pertaining to ‘wildcard receives’. We describe our new dynamic verification algorithm ‘POE’ that exploits the out-of-order completion semantics of MPI by delaying the issuance of MPI calls, issuing them only according to the formation of match-sets, which are ample ‘big-step’ moves. POE is guaranteed to manifest any feasible interleaving by dynamically rewriting wildcard receives into specific-source receives. This is the first dynamic model-checking algorithm with reductions for (a large subset of) MPI that is guaranteed to catch all deadlocks and local assertion violations, and it is found to work well in practice.

Sarvani Vakkalanka, Ganesh Gopalakrishnan, Robert M. Kirby
A Hybrid Type System for Lock-Freedom of Mobile Processes

We propose a type system for lock-freedom in the π-calculus, which guarantees that certain communications will eventually succeed. Distinguishing features of our type system are: it can verify lock-freedom of concurrent programs that have sophisticated recursive communication structures; it can be fully automated; it is hybrid, in that it combines a type system for lock-freedom with local reasoning about deadlock-freedom, termination, and confluence analyses. Moreover, the type system is parameterized by deadlock-freedom/termination/confluence analyses, so that any methods (e.g. type systems and model checking) can be used for those analyses. A lock-freedom analysis tool has been implemented based on the proposed type system, and tested on non-trivial programs.

Naoki Kobayashi, Davide Sangiorgi

Session 2: Memory Consistency

Implied Set Closure and Its Application to Memory Consistency Verification

Hangal et al. [3] have developed a procedure to check whether an instance of the execution of a shared-memory multiprocessor program is consistent with the Total Store Order (TSO) memory consistency model. They also devised an algorithm based on this procedure with time complexity $O(n^5)$, where $n$ is the total number of instructions in the program. Roy et al. [6] have improved the implementation of the procedure and achieved $O(n^4)$ time complexity.

We have identified the bottleneck in these algorithms as a graph problem of independent interest, called the implied-set closure (ISC) problem. In this paper we propose an algorithm for the ISC problem and show that, using this algorithm, Hangal’s consistency checking procedure can be implemented with $O(n^3)$ time complexity. We also show experimentally that the new algorithm is significantly faster than Roy’s algorithm.

Surender Baswana, Shashank K. Mehta, Vishal Powar
Effective Program Verification for Relaxed Memory Models

Program verification for relaxed memory models is hard. The high degree of nondeterminism in such models challenges standard verification techniques. This paper proposes a new verification technique for the most common relaxation, store buffers. Crucial to this technique is the observation that all programmers, including those who use low-lock techniques for performance, expect their programs to be sequentially consistent. We first present a monitor algorithm that can detect the presence of program executions that are not sequentially consistent due to store buffers while only exploring sequentially consistent executions. Then, we combine this monitor with a stateless model checker that verifies that every sequentially consistent execution is correct. We have implemented this algorithm in a prototype tool called Sober and present experiments that demonstrate the precision and scalability of our method. We find relaxed memory model bugs in several programs, including two previously unknown bugs in a production-level concurrency library that would have been difficult to find by other means.
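To make the store-buffer relaxation concrete (our illustration, not an excerpt from the paper), consider the classic litmus test below, written in Python purely as notation for two threads. Under sequential consistency at least one load must observe the other thread's store; TSO store buffers additionally allow both loads to return 0.

```python
x = y = 0

def thread_1():
    global x
    x = 1          # store, possibly parked in thread 1's store buffer
    return y       # load may execute before the store drains to memory

def thread_2():
    global y
    y = 1
    return x

# Sequentially consistent outcomes for (r1, r2): (0,1), (1,0), (1,1).
# With store buffers, (0,0) also becomes possible -- precisely the
# kind of non-SC execution the monitor is designed to detect.
```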

Sebastian Burckhardt, Madanlal Musuvathi
Mechanical Verification of Transactional Memories with Non-transactional Memory Accesses

Transactional memory is a programming abstraction intended to simplify the synchronization of conflicting memory accesses (by concurrent threads) without the difficulties associated with locks. In previous work we presented a formal framework for proving that a transactional memory implementation satisfies its specification, and verified some implementations by model checking small instantiations. This paper extends the previous work to capture non-transactional accesses to memory, which occur, for example, when using legacy code. We provide a mechanical proof of the soundness of the verification method, as well as a mechanical verification of a version of the popular TCC implementation that includes non-transactional memory accesses. The verification is performed by the deductive temporal checker TLPVS.

Ariel Cohen, Amir Pnueli, Lenore D. Zuck

Session 3: Abstraction/Refinement

Automated Assume-Guarantee Reasoning by Abstraction Refinement

Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counterexamples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves performance similar to or better than that of a previous learning-based implementation.

Mihaela Gheorghiu Bobaru, Corina S. Păsăreanu, Dimitra Giannakopoulou
Local Proofs for Linear-Time Properties of Concurrent Programs

This paper develops a local reasoning method to check linear-time temporal properties of concurrent programs. In practice, it is often infeasible to model check over the product state space of a concurrent program. The method developed in this paper replaces such global reasoning with checks of (abstracted) individual processes. An automatic refinement step gradually exposes local state if necessary, ensuring that the method is complete. Experiments show that local reasoning can hold a significant advantage over global reasoning.

Ariel Cohen, Kedar S. Namjoshi
Probabilistic CEGAR

Counterexample-guided abstraction refinement (CEGAR) has been en vogue for the automatic verification of very large systems in the past years. When trying to apply CEGAR to the verification of probabilistic systems, various foundational questions arise. This paper explores them in the context of predicate abstraction.

Holger Hermanns, Björn Wachter, Lijun Zhang

Session 4: Hybrid Systems

Computing Differential Invariants of Hybrid Systems as Fixedpoints

We introduce a fixedpoint algorithm for verifying safety properties of hybrid systems with differential equations whose right-hand sides are polynomials in the state variables. In order to verify nontrivial systems without solving their differential equations and without numerical errors, we use a continuous generalization of induction, for which our algorithm computes the required differential invariants. As a means for combining local differential invariants into global system invariants in a sound way, our fixedpoint algorithm works with a compositional verification logic for hybrid systems. To improve the verification power, we further introduce a saturation procedure that refines the system dynamics successively with differential invariants until safety becomes provable. By complementing our symbolic verification algorithm with a robust version of numerical falsification, we obtain a fast and sound verification procedure. We verify roundabout maneuvers in air traffic management and collision avoidance in train control.
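A simplified version of the underlying induction principle (our sketch, not the paper's exact rule): for dynamics $\dot{x} = p(x)$ and a polynomial $g$, the region $g(x) \ge 0$ is preserved along all trajectories if it holds initially and the derivative of $g$ along the flow is nonnegative, i.e. $\nabla g(x) \cdot p(x) \ge 0$, since $g$ then never decreases along any solution. The paper's differential invariants generalize this idea and are computed as fixedpoints.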

André Platzer, Edmund M. Clarke
Constraint-Based Approach for Analysis of Hybrid Systems

This paper presents a constraint-based technique for discovering a rich class of inductive invariants (Boolean combinations of polynomial inequalities of bounded degree) for verification of hybrid systems. The key idea is to introduce a template for the unknown invariants and then translate the verification condition into an ∃∀ constraint, where the template unknowns are existentially quantified and the state variables are universally quantified. The verification condition for continuous dynamics encodes that the system does not exit the invariant set from any point on the boundary of the invariant set. The ∃∀ constraint is transformed into an ∃ constraint using Farkas’ lemma. The ∃ constraint is solved using a bit-vector decision procedure. We present preliminary experimental results that demonstrate the feasibility of our approach of solving the ∃∀ constraints generated from models of real-world hybrid systems.
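For the linear case, the key step can be stated in one line (our rendering of the standard recipe, not the paper's notation): assuming $Ax \le b$ is satisfiable, Farkas' lemma gives $\forall x.\,(Ax \le b \Rightarrow c^T x \le d)$ if and only if $\exists \lambda \ge 0.\ \lambda^T A = c^T \wedge \lambda^T b \le d$, so the universally quantified state variables are replaced by existentially quantified multipliers $\lambda$ that live alongside the template unknowns.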

Sumit Gulwani, Ashish Tiwari

Session 5: Tools – Dynamic Verification

AutoMOTGen: Automatic Model Oriented Test Generator for Embedded Control Systems
Tool Paper

We present AutoMOTGen, a tool for automatic test case generation (ATG) from MATLAB Simulink/Stateflow (SL/SF) models [6] for testing automotive controllers. Our methodology is based on model checking [2]. The main highlights of the tool are:

- Enhanced coverage of the model elements as well as of high-level requirements.

- A modular design allowing plug-and-play of different model checkers, test data generators and coverage analysis tools for enhancing the test suite quality.

- Sampling-time abstraction, which generates tests with a smaller number of (discrete) steps in the intermediate model.

- Coverage-dependent instrumentation of the model for the structural coverage criteria.

- The capability to handle SL/SF blocks commonly used in automotive controllers (including blocks such as integrator, delay, multiplication/division, look-up tables, triggered subsystems, and hierarchical and parallel charts).

The current implementation of AutoMOTGen uses SAL [8] as an intermediate representation and uses associated tools such as sal-atg, sal-bmc and sal-smc for generating test data and proving the unreachability of some of the coverage goals. AutoMOTGen is implemented in Java and C++ (.NET framework) and uses the MATLAB scripting language to extract from SL/SF models the information required for test generation.

Ambar A. Gadkari, Anand Yeolekar, J. Suresh, S. Ramesh, Swarup Mohalik, K. C. Shashidhar
FShell: Systematic Test Case Generation for Dynamic Analysis and Measurement
Tool Paper

Although the principal analogy between counterexample generation and white-box testing has been repeatedly addressed, the usage patterns and performance requirements for software testing are quite different from those of formal verification. Our tool FShell provides a versatile testing environment for C programs which supports both interactive explorative use and a rich scripting language. More than a frontend for software model checkers, FShell is designed as a database engine which dispatches queries about the program to program analysis tools. We report on the integration of CBMC into FShell and describe architectural modifications which support efficient test case generation.

Andreas Holzer, Christian Schallhart, Michael Tautschnig, Helmut Veith

Session 6: Modeling and Specification Formalisms

Applying the Graph Minor Theorem to the Verification of Graph Transformation Systems

We show how to view certain subclasses of (single-pushout) graph transformation systems as well-structured transition systems, which leads to decidability of the covering problem via a backward analysis. As the well-quasi order required for a well-structured transition system we use the graph minor ordering. We give an explicit construction of the backward step and apply our theory in order to show the correctness of a leader election protocol.

Salil Joshi, Barbara König
Conflict-Tolerant Features

We consider systems composed of a base system with multiple “features” or “controllers”, each of which independently advise the system on how to react to input events so as to conform to their individual specifications. We propose a methodology for developing such systems in a way that guarantees the “maximal” use of each feature. The methodology is based on the notion of “conflict-tolerant” features that are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based composition scheme for such features, which ensures that each feature is maximally utilized. We also provide a formal framework for specifying, verifying, and synthesizing such features. In particular we obtain a compositional technique for verifying systems developed in this framework.

Deepak D’Souza, Madhu Gopinathan
Ranking Automata and Games for Prioritized Requirements

Requirements of reactive systems are usually specified by classifying system executions as desirable and undesirable. To specify prioritized requirements, we propose to associate a rank with each execution. This leads to optimization analogs of verification and synthesis problems in which we compute the “best” requirement that can be satisfied or enforced from a given state. The classical definitions of acceptance criteria for automata can be generalized to ranking conditions. In particular, given a mapping of states to colors, the Büchi ranking condition maps an execution to the highest color visited infinitely often by the execution, and the cyclic ranking condition with cycle k maps an execution to the modulo-k value of the highest color repeating infinitely often. The well-studied parity acceptance condition is a special case of cyclic ranking with cycle 2, and we show that the cyclic ranking condition can specify all ω-regular ranking functions. We show that the classical characterizations of acceptance conditions by fixpoints over sets generalize to characterizations of ranking conditions by fixpoints over an appropriately chosen lattice of coloring functions. This immediately leads to symbolic algorithms for solving verification and synthesis problems. Furthermore, the precise complexity of a decision problem for ranking conditions is no more than that of the corresponding acceptance version; in particular, we show how to solve Büchi ranking games in quadratic time.
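On ultimately periodic ("lasso") runs, these ranking conditions each reduce to a one-liner; the sketch below (our illustration of the definitions in the abstract) shows Büchi and cyclic ranking, with parity as the cycle-2 special case.

```python
def buchi_rank(cycle_colors):
    # Büchi ranking: the highest color visited infinitely often,
    # i.e. the highest color on the lasso's cycle.
    return max(cycle_colors)

def cyclic_rank(cycle_colors, k):
    # Cyclic ranking with cycle k: the Büchi rank, modulo k.
    return buchi_rank(cycle_colors) % k

# Parity acceptance is cyclic ranking with cycle 2:
print(cyclic_rank([1, 3, 2], 2))  # 1: highest recurring color 3 is odd
```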

Rajeev Alur, Aditya Kanade, Gera Weiss

Session 7: Decision Procedures

Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations

The use of Craig interpolants has enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial-time algorithms for obtaining interpolants for conjunctions of linear diophantine equations, linear modular equations (linear congruences), and linear diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates in a counterexample-guided abstraction refinement (CEGAR) framework. This has enabled verification of simple programs that cannot be checked using existing CEGAR-based model checkers.
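A standard example of the flavor of interpolant involved (our example, not drawn from the paper): $A: x = 2y$ and $B: x = 2z + 1$ are jointly unsatisfiable over the integers, and $I: x \equiv 0 \pmod{2}$ is a Craig interpolant, since $A$ implies $I$, $I \wedge B$ is unsatisfiable, and $I$ mentions only the shared variable $x$. Divisibility predicates such as $2 \mid x$ are exactly what these algorithms contribute to CEGAR.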

Himanshu Jain, Edmund Clarke, Orna Grumberg
Linear Arithmetic with Stars

We consider an extension of integer linear arithmetic with a “star” operator that takes the closure under vector addition of the solution set of a linear arithmetic subformula. We show that the satisfiability problem for this extended language remains in NP (and is therefore NP-complete). Our proof uses the semilinear-set characterization of solutions of integer linear arithmetic formulas, as well as a generalization of a recent result on sparse solutions of integer linear programming problems. As a consequence of our result, we present worst-case optimal decision procedures for two NP-hard problems that were previously not known to be in NP. The first is the satisfiability problem for a logic of sets, multisets (bags), and cardinality constraints, which has applications in verification, interactive theorem proving, and description logics. The second is the reachability problem for a class of transition systems whose transitions increment the state vector by solutions of integer linear arithmetic formulas.
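Concretely (our notation): if $S \subseteq \mathbb{Z}^k$ is the solution set of the subformula, the star operator denotes $S^{*} = \{\, v_1 + \cdots + v_n \mid n \ge 0,\ v_1, \ldots, v_n \in S \,\}$, the closure of $S$ under vector addition, with the empty sum contributing the zero vector.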

Ruzica Piskac, Viktor Kuncak
Inferring Congruence Equations Using SAT

This paper proposes a new approach for deriving invariants that are systems of congruence equations where the modulus is a power of 2. The technique is an amalgam of SAT-solving, where a propositional formula is used to encode the semantics of a basic block, and abstraction, where the solutions to the formula are systematically combined and summarised as a system of congruence equations. The resulting technique is more precise than existing congruence analyses since a single optimal transfer function is derived for a basic block as a whole.
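As a sketch of the "combine and summarise" step (our illustration, not the paper's algorithm): solutions extracted from the SAT solver can be folded into a single congruence $x \equiv r \pmod{2^k}$ by shrinking $k$ until all observed values agree.

```python
def join_congruence(values, width=8):
    # Shrink k until every value agrees with the first one mod 2^k.
    k = width
    base = values[0]
    for v in values[1:]:
        while k > 0 and (v - base) % (1 << k) != 0:
            k -= 1
    return base % (1 << k), 1 << k   # residue r and modulus 2^k

print(join_congruence([3, 11, 19]))  # (3, 8): summarised as x ≡ 3 (mod 8)
```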

Andy King, Harald Søndergaard

Session 8: Tools – Decision Procedures

The Barcelogic SMT Solver
Tool Paper

This is the first system description of the Barcelogic SMT solver, which implements all techniques that our group has been developing over the last four years as well as state-of-the-art features developed by other research groups. We pay special attention to the theory solvers and to functionalities that are not common in SMT solvers.

Miquel Bofill, Robert Nieuwenhuis, Albert Oliveras, Enric Rodríguez-Carbonell, Albert Rubio
The MathSAT 4 SMT Solver
Tool Paper

We present MathSAT 4, a state-of-the-art SMT solver. MathSAT 4 handles several useful theories: (combinations of) equality and uninterpreted functions, difference logic, linear arithmetic, and the theory of bit-vectors. It was explicitly designed for use in formal verification, and thus provides functionalities which extend the applicability of SMT in this setting. In particular: model generation (for counterexample reconstruction), model enumeration (for predicate abstraction), an incremental interface (for BMC), and computation of unsatisfiable cores and Craig interpolants (for abstraction refinement).

Roberto Bruttomesso, Alessandro Cimatti, Anders Franzén, Alberto Griggio, Roberto Sebastiani
CSIsat: Interpolation for LA+EUF
Tool Paper

We present CSIsat, an interpolating decision procedure for the quantifier-free theory of rational linear arithmetic and equality with uninterpreted function symbols. Our implementation combines the efficiency of linear programming for solving the arithmetic part with the efficiency of a SAT solver to reason about the Boolean structure. We evaluate the efficiency of our tool on benchmarks from software verification. Binaries and the source code of CSIsat are publicly available as free software.

Dirk Beyer, Damien Zufferey, Rupak Majumdar
Prover’s Palette: A User-Centric Approach to Verification with Isabelle and QEPCAD-B
Tool Paper

We present Prover’s Palette, a general framework for formal verification using multiple tools, centred around the user. We illustrate the framework by describing a concrete integration of the theorem prover Isabelle with the computer algebra system QEPCAD-B.

Laura I. Meikle, Jacques D. Fleuriot

Session 9: Program Verification

Heap Assumptions on Demand

Termination of a heap-manipulating program generally depends on preconditions that express heap assumptions (i.e., assertions describing reachability, aliasing, separation and sharing in the heap). We present an algorithm for the inference of such preconditions. The algorithm exploits a unique interplay between a counterexample-producing abstract termination checker and shape analysis. The shape analysis produces heap assumptions on demand to eliminate counterexamples, i.e., non-terminating abstract computations. The experiments with our prototype implementation indicate its practical potential.

Andreas Podelski, Andrey Rybalchenko, Thomas Wies
Proving Conditional Termination

We describe a method for synthesizing reasonable underapproximations to weakest preconditions for termination—a long-standing open problem. The paper provides experimental evidence to demonstrate the usefulness of the new procedure.

Byron Cook, Sumit Gulwani, Tal Lev-Ami, Andrey Rybalchenko, Mooly Sagiv
Monotonic Abstraction for Programs with Dynamic Memory Heaps

We propose a new approach for automatic verification of programs with dynamic heap manipulation. The method is based on symbolic (backward) reachability analysis using upward-closed sets of heaps w.r.t. an appropriate preorder on graphs. These sets are represented by a finite set of minimal graph patterns corresponding to a set of bad configurations. We define an abstract semantics for the programs which is monotonic w.r.t. the preorder. Moreover, we prove that our analysis always terminates by showing that the preorder is a well-quasi ordering. Our results are presented for the case of programs with 1-next selector. We provide experimental results showing the effectiveness of our approach.

Parosh Aziz Abdulla, Ahmed Bouajjani, Jonathan Cederberg, Frédéric Haziza, Ahmed Rezine
Enhancing Program Verification with Lemmas

One promising approach to verifying heap-manipulating programs is based on user-defined inductive predicates in separation logic. This approach can describe data structures with complex invariants and supports sound reasoning based on unfold/fold. However, an important component of more expressive program verification is the use of lemmas that can soundly relate predicates beyond their original definitions. This paper outlines a new automatic mechanism for proving and applying user-specified lemmas under separation logic.

Huu Hai Nguyen, Wei-Ngan Chin

Session 10: Program and Shape Analysis

A Numerical Abstract Domain Based on Expression Abstraction and Max Operator with Application in Timing Analysis

This paper describes a precise numerical abstract domain for use in timing analysis. The numerical abstract domain is parameterized by a linear abstract domain and is constructed by means of two domain-lifting operations. One domain-lifting operation is based on the principle of expression abstraction (which involves defining a set of expressions and specifying their semantics using a collection of directed inference rules) and has more general applicability. It lifts any given abstract domain to include reasoning about a given set of expressions whose semantics is abstracted using a set of axioms. The other domain-lifting operation incorporates disjunctive reasoning into a given linear relational abstract domain via the introduction of max expressions. We present experimental results demonstrating the potential of the new numerical abstract domain to discover a wide variety of timing bounds (including polynomial, disjunctive, logarithmic, and exponential) for small C programs.

Bhargav S. Gulavani, Sumit Gulwani
Scalable Shape Analysis for Systems Code

Pointer safety faults in device drivers are one of the leading causes of crashes in operating systems code. In principle, shape analysis tools can be used to prove the absence of this type of error. In practice, however, shape analysis is not used due to the unacceptable mixture of scalability and precision provided by existing tools. In this paper we report on a new join operation ${\sqcup\dagger}$ for the separation domain which aggressively abstracts information for scalability yet does not lead to false error reports. ${\sqcup\dagger}$ is a critical piece of a new shape analysis tool that provides an acceptable mixture of scalability and precision for industrial application. Experiments on whole Windows and Linux device drivers (firewire, pci-driver, cdrom, md, etc.) represent the first working application of shape analysis to verification of whole industrial programs.

Hongseok Yang, Oukseh Lee, Josh Berdine, Cristiano Calcagno, Byron Cook, Dino Distefano, Peter O’Hearn
Thread Quantification for Concurrent Shape Analysis

In this paper we address the problem of shape analysis for concurrent programs. We present new algorithms, based on abstract interpretation, for automatically verifying properties of programs with an unbounded number of threads manipulating an unbounded shared heap.

Our algorithms are based on a new abstract domain whose elements represent thread-quantified invariants: i.e., invariants satisfied by all threads. We exploit existing abstractions to represent the invariants. Thus, our technique lifts existing abstractions by wrapping universal quantification around elements of the base abstract domain. Such abstractions are effective because they are thread modular: e.g., they can capture correlations between the local variables of the same thread, as well as correlations between the local variables of a thread and global variables, but forget correlations between the states of distinct threads. (The exact nature of the abstraction, of course, depends on the base abstraction lifted in this style.)

We present techniques for computing sound transformers for the new abstraction by using transformers of the base abstract domain. We illustrate our technique in this paper by instantiating it to the Boolean Heap abstraction, producing a Quantified Boolean Heap abstraction. We have implemented an instantiation of our technique with Canonical Abstraction as the base abstraction and used it to successfully verify linearizability of data-structures in the presence of an unbounded number of threads.

J. Berdine, T. Lev-Ami, R. Manevich, G. Ramalingam, M. Sagiv

Session 11: Tools – Security and Program Analysis

The Scyther Tool: Verification, Falsification, and Analysis of Security Protocols
Tool Paper

With the rise of the Internet and other open networks, a large number of security protocols have been developed and deployed in order to provide secure communication. The analysis of such security protocols has turned out to be extremely difficult for humans, as witnessed by the fact that many protocols were found to be flawed after deployment. This has driven the research in formal analysis of security protocols. Unfortunately, there are no effective approaches yet for constructing correct and efficient protocols, and work on concise formal logics that might allow one to easily prove that a protocol is correct in a formal model is still ongoing. The most effective approach so far has been automated falsification or verification of such protocols with state-of-the-art tools such as ProVerif [1] or the Avispa tools [2]. These tools have been shown to be effective at finding attacks on protocols (Avispa) or establishing correctness of protocols (ProVerif).

Cas J. F. Cremers
The CASPA Tool: Causality-Based Abstraction for Security Protocol Analysis
Tool Paper

CASPA constitutes a push-button tool for automatically proving secrecy and authenticity properties of cryptographic protocols. The tool is grounded on a novel technique for causality-based abstraction of protocol executions that allows establishing proofs of security for an unbounded number of concurrent protocol executions in an automated manner. We demonstrate the expressiveness and efficiency of the tool by drawing a comparison with T4ASP, the static analyzer for secrecy properties offered by the AVISPA tool. CASPA is capable of coping with a substantially larger set of protocols, and excels in performance.

Michael Backes, Stefan Lorenz, Matteo Maffei, Kim Pecina
Jakstab: A Static Analysis Platform for Binaries
Tool Paper

For processing compiled code, model checkers require accurate model extraction from binaries. We present our fully configurable binary analysis platform Jakstab, which resolves indirect branches by multiple rounds of disassembly interleaved with dataflow analysis. We demonstrate that this iterative disassembling strategy achieves better results than the state-of-the-art tool IDA Pro.

Johannes Kinder, Helmut Veith
THOR: A Tool for Reasoning about Shape and Arithmetic
Tool Paper

We describe Thor (Tool for Heap-Oriented Reasoning), a tool based on separation logic that is capable of reasoning automatically about heap-manipulating programs. There are several such systems in development now. However, Thor is unique in that it provides not only shape analysis, but also arithmetic reasoning via a novel combination procedure. Also, considerable effort has been put into making the output clear and easy to understand. Thor uses JavaScript and HTML to produce an interactive representation of the analysis results.

Stephen Magill, Ming-Hsien Tsai, Peter Lee, Yih-Kuen Tsay

Session 12: Hardware Verification I

Functional Verification of Power Gated Designs by Compositional Reasoning

Power gating is a technique for low power design in which whole sections of the chip are powered off when they are not needed, and powered back on when they are. Functional correctness of power gating is usually checked as part of system-level verification, where the most widely used technique is simulation using pseudo-random stimuli. We propose instead to perform a sequential equivalence check between the power gated design and a version of itself in which power gating is disabled. We take a compositional approach that looks for partial equivalence of each unit under a suitable set of assumptions, guaranteed by the neighboring units. We make use of so-called circular reasoning rules to compose the partial equivalences proved on the individual units back into total equivalence on the whole chip.

Cindy Eisner, Amir Nahir, Karen Yorav
A Practical Approach to Word Level Model Checking of Industrial Netlists

In this paper we present a word-level model checking method that attempts to speed up safety property checking of industrial netlists. Our aim is to construct an algorithm that allows us to check both bounded and unbounded properties using standard bit-level model checking methods as back-end decision procedures, while incurring minimum runtime penalties for designs that are unsuited to our analysis. We do this by combining modifications of several previously known techniques into a static abstraction algorithm which is guaranteed to produce bit-level netlists that are as small or smaller than the original bitblasted designs. We evaluate our algorithm on several challenging hardware components.

Per Bjesse

Session 13: Hardware Verification II

Validating High-Level Synthesis

The growing design-productivity gap has made designers shift toward using high-level languages like C, C++ and Java to do system-level design. High-Level Synthesis (HLS) is the process of generating a Register Transfer Level (RTL) design from these initial high-level programs. Unfortunately, this translation process itself can be buggy, which can create a mismatch between what a designer intends and what is actually implemented in the circuit. In this paper, we present an approach to validate the result of HLS against the initial high-level program using insights from translation validation, automated theorem proving and relational approaches to reasoning about programs. We have implemented our validation technique and have applied it to a highly parallelizing HLS framework called SPARK. We present the details of our algorithm and experimental results.

Sudipta Kundu, Sorin Lerner, Rajesh Gupta
An Algebraic Approach for Proving Data Correctness in Arithmetic Data Paths

This paper proposes a new approach for proving arithmetic correctness of data paths in System-on-Chip modules. It complements existing techniques which are, for reasons of complexity, restricted to verifying only the control behavior. The circuit is modeled at the arithmetic bit level (ABL), so that our approach is well adapted to current industrial design styles for high-performance data paths. Normalization at the ABL is combined with the techniques of computer algebra. We compute normal forms with respect to Gröbner bases over the rings $\mathbb{Z}/\left\langle{2^n}\right\rangle$. Our approach proves tractable for industrial data path designs where standard property checking techniques fail.

Oliver Wienand, Markus Wedler, Dominik Stoffel, Wolfgang Kunz, Gert-Martin Greuel
Application of Formal Word-Level Analysis to Constrained Random Simulation
Tool Paper

Constrained random simulation is supported by constraint solvers integrated within simulators. These constraint solvers need to be fast and memory efficient to maintain simulation performance. Binary Decision Diagrams (BDDs) have been successfully applied to represent constraints in this context. However, BDDs are vulnerable to size explosions depending on the constraints they represent and the number of Boolean variables appearing in them. In this paper, we present a word-level analysis tool, DomRed, that reduces the number of Boolean variables required to represent constraints by reducing the domains of constraint variables. DomRed employs static analysis techniques to obtain these reductions. We present experimental results to illustrate the impact of this tool.

Hyondeuk Kim, Hoonsang Jin, Kavita Ravi, Petr Spacek, John Pierce, Bob Kurshan, Fabio Somenzi

Session 14: Model Checking

Producing Short Counterexamples Using “Crucial Events”

Ideally, a model checking tool should successfully tackle state space explosion for complete system validation, while providing short counterexamples when an error exists. Techniques such as partial order (p.o.) reduction [1,2] are very effective at tackling state space explosion, but do not produce short counterexamples. On the other hand, directed model checking [3,4] techniques find short counterexamples, but are prone to state space explosion in the absence of errors. To the best of our knowledge, there is currently no single technique that meets both requirements. We present such a technique in this paper.

For a subset of CTL, which we call CETL (Crucial Event Temporal Logic), we show that there exists a unique minimum set of events in each program trace whose execution is both necessary and sufficient to lead to an error state. These events are called “crucial events”. We show how crucial events can be used to produce short counterexamples, while also providing state space reduction.

We have implemented the techniques presented here as an extension to the model checker SPIN, called SPICED (Simple PROMELA Interpreter with Crucial Event Detection). Experimental results are presented.

Sujatha Kashyap, Vijay K. Garg
Discriminative Model Checking

Model checking typically compares a system description with a formal specification, and returns either a counterexample or an affirmation of compatibility between the two descriptions. Counterexamples provide evidence of the existence of an error, but it can still be very difficult to understand the cause of that error. We propose a model checking methodology which uses two levels of specification. Under this methodology, we group executions as good and bad with respect to satisfying a base LTL specification. We use an analysis specification, in CTL* style, quantifying over the good and bad executions. This specification allows checking not only whether the base specification holds or fails to hold in a system, but also how it does so. We propose a model checking algorithm in the style of the standard CTL* decision procedure. This framework can be used for comparing good and bad executions, both within a system and outside it, providing assistance in locating design or programming errors.

Peter Niebert, Doron Peled, Amir Pnueli

Session 15: Space Efficient Algorithms

Correcting a Space-Efficient Simulation Algorithm

Although there are many efficient algorithms for calculating the simulation preorder on finite Kripke structures, only two have been proposed whose space complexity is of the same order as the size of the output of the algorithm. Of these, the one with the best time complexity exploits the representation of the simulation problem as a generalised coarsest partition problem. It is based on a fixed-point operator for obtaining a generalised coarsest partition as the limit of a sequence of partition pairs. We show that this fixed-point theory is flawed, and that the algorithm is incorrect. Although we do not see how the fixed-point operator can be repaired, we correct the algorithm without affecting its space and time complexity.

Rob van Glabbeek, Bas Ploeger
Semi-external LTL Model Checking

In this paper we establish c-bit semi-external graph algorithms, i.e., algorithms which need only a constant number c of bits per vertex in internal memory. In this setting, we obtain new trade-offs between time and space for I/O-efficient LTL model checking. First, we design a c-bit semi-external algorithm for depth-first search. To achieve low internal memory consumption, we construct a RAM-efficient perfect hash function from the vertex set stored on disk. We give a similar algorithm for double depth-first search, which checks for the presence of accepting cycles and thus solves the LTL model checking problem. The I/O complexity of the search itself is proportional to the time for scanning the search space. For on-the-fly model checking we apply the iterative-deepening strategy known from bounded model checking.

Stefan Edelkamp, Peter Sanders, Pavel Šimeček

Session 16: Tools – Model Checking

QMC: A Model Checker for Quantum Systems
Tool Paper

The novel field of quantum computation and quantum information has been growing at a rapid rate; the study of quantum information in particular has led to the emergence of communication and cryptographic protocols with no classical analogues. Quantum information protocols have interesting properties which are not exhibited by their classical counterparts, but they are most distinguished for their applications in cryptography. Notable results include the unconditional security proof [1] of quantum key distribution. This result, in particular, is one of the reasons for the widespread interest in this field. Furthermore, the implementation of quantum cryptography has been demonstrated in non-laboratory settings and is already an important practical technology. Implementations of quantum cryptography have already been commercially launched and tested by a number of companies including MagiQ, Id Quantique, Toshiba, and NEC. The unconditional security of quantum key distribution protocols does not automatically imply the same degree of security for actual systems, of course; this justifies the need for systems modelling and verification in this setting.

Simon J. Gay, Rajagopal Nagarajan, Nikolaos Papanikolaou
T(O)RMC: A Tool for (ω)-Regular Model Checking
Tool Paper

Within the context of the verification of infinite-state systems, “(ω)-regular model checking” is the name of a family of techniques in which states are represented by words, sets of states by finite automata on these objects, and transitions by finite automata operating on pairs of state encodings, i.e., finite-state transducers. If the states are encoded by finite words, then sets of (pairs of) states can be represented by finite-word automata. This setting can be used to represent various classes of infinite-state systems [17], including parametric systems, FIFO-queue systems, and systems manipulating integer variables (those defined in Presburger arithmetic). When the states are encoded by infinite words, sets of (pairs of) states are represented by deterministic weak Büchi automata. This setting is mainly used to represent systems involving both integer and real variables [4,6], such as linear hybrid systems with a constant derivative.

Axel Legay
Faster Than Uppaal?
Tool Paper

It is probably very hard to develop a new model checker that is faster than Uppaal for verifying (correct) timed automata. In fact, our tool Mcta does not even try to compete with Uppaal in this (i.e., Uppaal’s) arena. Instead, Mcta is geared towards analyzing incorrect specifications of timed automata. It returns (shorter) error traces faster.

Sebastian Kupferschmid, Martin Wehrle, Bernhard Nebel, Andreas Podelski
Backmatter
Metadata
Title
Computer Aided Verification
Edited by
Aarti Gupta
Sharad Malik
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-70545-1
Print ISBN
978-3-540-70543-7
DOI
https://doi.org/10.1007/978-3-540-70545-1