
2009 | Book

Computer Aided Verification

21st International Conference, CAV 2009, Grenoble, France, June 26 - July 2, 2009. Proceedings

Edited by: Ahmed Bouajjani, Oded Maler

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 21st International Conference on Computer Aided Verification, CAV 2009, held in Grenoble, France, in June/July 2009. The 36 revised full papers presented together with 16 tool papers and 4 invited talks and 4 invited tutorials were carefully reviewed and selected from 135 regular paper and 34 tool paper submissions. The papers are dedicated to the advancement of the theory and practice of computer-aided formal analysis methods for hardware and software systems; their scope ranges from theoretical results to concrete applications, with an emphasis on practical verification tools and the underlying algorithms and techniques.

Table of Contents

Frontmatter

Invited Tutorials

Transactional Memory: Glimmer of a Theory
(Invited Paper)

Transactional memory (TM) is a promising paradigm for concurrent programming. This paper is an overview of our recent theoretical work on defining a theory of TM. We first recall some TM correctness properties and then overview results on the inherent power and limitations of TMs.

Rachid Guerraoui, Michał Kapałka
Mixed-Signal System Verification: A High-Speed Link Example

This tutorial begins by visiting various mixed-signal circuit examples in high-speed interfaces, whose verification cannot be done via the established methods for digital systems and has instead relied on exhaustive time-domain circuit simulations. While recognizing the vast literature on extending the prevailing digital verification methods to analog and mixed-signal systems, this tutorial suggests that a promising approach might be to extend the way that digital methods leverage the design intent and choose the proper abstraction for it. After examining the properties that designers most want to verify for high-speed interface circuits, it is claimed that these are mostly the properties of a linear or weakly-nonlinear system, e.g. gain, bandwidth, local stability, etc. This is because most of the mixed-signal circuits in high-speed links are motivated by the need to replace analog circuits with digital ones for ease of process migration. Although implemented in digital form, the resulting circuits are still supposed to have analog functionalities, which are best described by linear or weakly-nonlinear system models. Therefore, a possible formal verification for those circuits might be to extend the traditional small-signal linear analysis. For example, this tutorial demonstrates ways to extend the linear AC analysis in SPICE to measure the phase-domain transfer functions of PLLs or the noise transfer functions of delta-sigma ADCs. It also examines the limitations of linear analysis, for instance the inability to detect start-up failures, a problem that can be more properly addressed by extending digital verification methods and coverage concepts.

Jaeha Kim
Modelling Epigenetic Information Maintenance: A Kappa Tutorial

The purpose of this tutorial is to explain and illustrate an approach to the quantitative modelling of molecular interaction networks which departs from the usual notion of (bio-) chemical reaction. This tutorial is self-contained and supposes no familiarity with molecular biology.

Jean Krivine, Vincent Danos, Arndt Benecke
Component-Based Construction of Real-Time Systems in BIP

BIP is a framework for the component-based construction of real-time systems. It considers that systems can be obtained as the composition of 3-layer components. For a component,

- The lower layer describes its behavior: a set of transitions with triggers and actions (atomic state transformations). A trigger consists of an enabling condition on data and a port through which synchronization is sought.
- The intermediate level is the set of interactions between transitions of the behavior level. An interaction is a set of synchronizing ports and associated actions. Interactions are specified by using connectors expressing synchronization constraints on ports.
- The upper level is a set of priority rules implementing scheduling policies for interactions.

The framework supports a system construction methodology based on a parameterized binary composition operator on components. The product of two components consists in composing their corresponding layers separately. Parameters are used to define new interactions as well as new priorities between the composed components.

Joseph Sifakis

Invited Talks

Models and Proofs of Protocol Security: A Progress Report

This paper discusses progress in the verification of security protocols. Focusing on a small, classic example, it stresses the use of program-like representations of protocols, and their automatic analysis in symbolic and computational models.

Martín Abadi, Bruno Blanchet, Hubert Comon-Lundh
Predictability vs. Efficiency in the Multicore Era: Fight of Titans or Happy Ever after?

In this talk I will give an overview of recent trends in multi-core platforms for embedded computing. The shift toward multicore architectures has been imposed by technology reasons (power consumption and design closure issues in nanometer technology) and not by the "coming of age" of parallel programming models, compilation, analysis and verification environments. Thus, we may be building terascale many-core architectures that we cannot program efficiently. Even worse, we may not be able to give any guarantees on execution timing, constraints and real-time properties of applications. This is a challenge AND an opportunity for the software design and verification community: I will give some views on what is being done, what could be done and what I hope will be done to build efficient and predictable multi-core platforms.

Luca Benini
SPEED: Symbolic Complexity Bound Analysis
(Invited Talk)

The SPEED project addresses the problem of computing symbolic computational complexity bounds of procedures in terms of their inputs. We discuss some of the challenges that arise and present various orthogonal/complementary techniques recently developed in the SPEED project for addressing these challenges.

Sumit Gulwani
Regression Verification: Proving the Equivalence of Similar Programs
(Invited Talk)

The ability to prove equivalence of successive, closely-related versions of a program can be useful for maintaining backward compatibility. This problem has the potential of being easier in practice than functional verification for at least two reasons: First, it circumvents the problem of specifying what the program should do; Second, in many cases it is computationally easier, because it offers various opportunities for abstraction and decomposition that are only relevant in this context.

Ofer Strichman

Regular Papers

Symbolic Counter Abstraction for Concurrent Software

The trend towards multi-core computing has made concurrent software an important target of computer-aided verification. Unfortunately, model checkers for such software suffer tremendously from combinatorial state-space explosion. We show how to apply counter abstraction to real-world concurrent programs to factor out redundancy due to thread replication. The traditional global state representation as a vector of local states is replaced by a vector of thread counters, one per local state. In practice, straightforward implementations of this idea are unfavorably sensitive to the number of local states. We present a novel symbolic exploration algorithm that avoids this problem by carefully scheduling which counters to track at any moment during the search. Our experiments are carried out on Boolean programs, an abstraction promoted by the Slam project. To our knowledge, this marks the first application of counter abstraction to programs with non-trivial local state spaces, and results in the first scalable model checker for concurrent Boolean programs.

Gérard Basler, Michele Mazzucchi, Thomas Wahl, Daniel Kroening
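As a rough illustration of the savings the abstract describes (a toy sketch, not the paper's symbolic algorithm; the local-state names and thread count are invented for the example), counting threads per local state collapses all permutations of identical threads into one abstract state:

```python
from collections import Counter
from itertools import product

# Hypothetical setting: n identical threads, each in one of three local
# states. The concrete global state is a tuple of local states; counter
# abstraction replaces it by a tally of threads per local state, so all
# permutations of the same multiset become a single abstract state.

LOCAL_STATES = ["idle", "trying", "critical"]

def abstract(global_state):
    """Map a vector of local states to a vector of thread counters."""
    tally = Counter(global_state)
    return tuple(tally[s] for s in LOCAL_STATES)

n_threads = 4
concrete = set(product(LOCAL_STATES, repeat=n_threads))   # 3^4 = 81 states
abstracted = {abstract(g) for g in concrete}              # C(6, 2) = 15 states

print(len(concrete), len(abstracted))  # 81 15
```

The gap widens quickly: with n threads and k local states, the concrete space grows as k^n while the abstract one grows only polynomially in n, which is exactly the redundancy due to thread replication that the paper factors out.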
Priority Scheduling of Distributed Systems Based on Model Checking

Priorities are used to control the execution of systems to meet given requirements for optimal use of resources, e.g., by using scheduling policies. For distributed systems, it is hard to find efficient implementations for priorities; because they express constraints on global states, their implementation may incur considerable overhead.

Our method is based on performing model checking for knowledge properties. It allows us to identify when the local information of a process is sufficient to schedule the execution of a high-priority transition. As a result of the model checking, the program is transformed to react upon the knowledge it has at each point. The transformed version has no priorities, and uses the gathered information and its knowledge to limit the enabledness of transitions so that it matches or approximates the original specification of priorities.

Ananda Basu, Saddek Bensalem, Doron Peled, Joseph Sifakis
Explaining Counterexamples Using Causality

When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace in order to visually identify the failure that it demonstrates. If the trace is long, or the specification is complex, finding the failure in the trace becomes a non-trivial task. In this paper, we address the problem of analyzing a counterexample trace and highlighting the failure that it demonstrates. Using the notion of causality introduced by Halpern and Pearl, we formally define a set of causes for the failure of the specification on the given counterexample trace. These causes are marked as red dots and presented to the user as a visual explanation of the failure. We study the complexity of computing the exact set of causes, and provide a polynomial-time algorithm that approximates it. This algorithm is implemented as a feature in the IBM formal verification platform RuleBase PE, where these visual explanations are an integral part of every counterexample trace. Our approach is independent of the tool that produced the counterexample, and can be applied as a light-weight external layer to any model checking tool, or used to explain simulation traces.

Ilan Beer, Shoham Ben-David, Hana Chockler, Avigail Orni, Richard Trefler
Size-Change Termination, Monotonicity Constraints and Ranking Functions

Size-change termination involves deducing program termination based on the impossibility of infinite descent. To this end we may use a program abstraction in which transitions are described by monotonicity constraints over (abstract) variables. When only constraints of the form x > y′ and x ≥ y′ are allowed, we have size-change graphs, for which both theory and practice are now more evolved than for general monotonicity constraints. This work shows that it is possible to transfer some theory from the domain of size-change graphs to the general case, complementing and extending previous work on monotonicity constraints. Significantly, we provide a procedure to construct explicit global ranking functions from monotonicity constraints in singly-exponential time, which is better than what has been published so far even for size-change graphs. We also consider the integer domain, where general monotonicity constraints are essential because the domain is not well-founded.

Amir M. Ben-Amram
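The size-change-graph special case mentioned above can be made concrete. The following is a minimal sketch of the classic closure-based size-change termination test (not the ranking-function construction of this paper); the variable names and example graphs are invented, and a single graph per loop is assumed:

```python
from itertools import product

# A size-change graph is a set of edges (src, dst, strict) meaning
# src > dst' (strict=True) or src >= dst' (strict=False).

def compose(g1, g2):
    """Compose two size-change graphs: a path exists if it chains through
    a common middle variable; it is strict if either half is strict."""
    out = {}
    for (a, b, s1) in g1:
        for (b2, c, s2) in g2:
            if b == b2:
                out[(a, c)] = out.get((a, c), False) or s1 or s2
    return frozenset((a, c, s) for (a, c), s in out.items())

def closure(graphs):
    """Close the given graphs under composition."""
    cl = set(map(frozenset, graphs))
    changed = True
    while changed:
        changed = False
        for g1, g2 in list(product(cl, repeat=2)):
            g = compose(g1, g2)
            if g not in cl:
                cl.add(g)
                changed = True
    return cl

def size_change_terminates(graphs):
    """Terminates iff every idempotent graph in the closure has a
    strict self-edge (infinite descent is then unavoidable)."""
    for g in closure(graphs):
        if compose(g, g) == g and not any(a == b and s for (a, b, s) in g):
            return False
    return True

dec = [{("x", "x", True)}]                           # x strictly decreases
swap = [{("x", "y", False), ("y", "x", False)}]      # x and y merely swap
print(size_change_terminates(dec), size_change_terminates(swap))  # True False
```

The swapping loop is the standard non-example: its closure contains an idempotent graph with only non-strict self-edges, so no infinite descent can be deduced.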
Linear Functional Fixed-points

We introduce a logic of functional fixed-points. It is suitable for analyzing heap-manipulating programs and can encode several logics used for program verification with different ways of expressing reachability. While full fixed-point logic remains undecidable, several subsets admit decision procedures. In particular, for the logic of linear functional fixed-points, we develop an abstraction refinement integration of the SMT solver Z3 and a satisfiability checker for propositional linear-time temporal logic. The integration refines the temporal abstraction by generating safety formulas until the temporal abstraction is unsatisfiable or a model for it is also a model for the functional fixed-point formula.

Nikolaj Bjørner, Joe Hendrix
Better Quality in Synthesis through Quantitative Objectives

Most specification languages express only qualitative constraints. However, among two implementations that satisfy a given specification, one may be preferred to another. For example, if a specification asks that every request is followed by a response, one may prefer an implementation that generates responses quickly but does not generate unnecessary responses. We use quantitative properties to measure the “goodness” of an implementation. Using games with corresponding quantitative objectives, we can synthesize “optimal” implementations, which are preferred among the set of possible implementations that satisfy a given specification.

In particular, we show how automata with lexicographic mean-payoff conditions can be used to express many interesting quantitative properties for reactive systems. In this framework, the synthesis of optimal implementations requires the solution of lexicographic mean-payoff games (for safety requirements), and the solution of games with both lexicographic mean-payoff and parity objectives (for liveness requirements). We present algorithms for solving both kinds of novel graph games.

Roderick Bloem, Krishnendu Chatterjee, Thomas A. Henzinger, Barbara Jobstmann
Automatic Verification of Integer Array Programs

We provide a verification technique for a class of programs working on integer arrays of finite, but not a priori bounded, length. We use the logic of integer arrays SIL [13] to specify pre- and post-conditions of programs and their parts. Effects of non-looping parts of code are computed syntactically on the level of SIL. Loop pre-conditions derived during the computation in SIL are converted into counter automata (CA). Loops are automatically translated, purely on the syntactical level, to transducers. Pre-condition CA and transducers are composed, and the composition over-approximated by flat automata with difference bound constraints, which are next converted back into SIL formulae, thus inferring post-conditions of the loops. Finally, validity of post-conditions specified by the user in SIL may be checked, as entailment is decidable for SIL.

Marius Bozga, Peter Habermehl, Radu Iosif, Filip Konečný, Tomáš Vojnar
Automated Analysis of Java Methods for Confidentiality

We address the problem of analyzing programs such as J2ME midlets for mobile devices, where a central correctness requirement concerns confidentiality of data that the user wants to keep secret. Existing software model checking tools analyze individual program executions, and are not applicable to checking confidentiality properties that require reasoning about equivalence among executions. We develop an automated analysis technique for such properties. We show that both over- and under- approximation is needed for sound analysis. Given a program and a confidentiality requirement, our technique produces a formula that is satisfiable if the requirement holds. We evaluate the approach by analyzing bytecode of a set of Java (J2ME) methods.

Pavol Černý, Rajeev Alur
Requirements Validation for Hybrid Systems

The importance of requirements for the whole development flow calls for strong validation techniques based on formal methods. In the case of discrete systems, some approaches based on temporal logic satisfiability are gaining increasing momentum. However, in many real-world domains (e.g. railway signaling), the requirements constrain the temporal evolution of both discrete and continuous variables. These hybrid domains pose substantial problems: on one side, a continuous domain requires very expressive formal languages; on the other, this expressiveness leads to highly intractable problems.

In this paper, we address the problem of requirements validation for real-world hybrid domains, and present two main contributions. First, we propose the HRELTL logic, which extends the Linear-time Temporal Logic with Regular Expressions (RELTL) with hybrid aspects. Second, we show that the satisfiability problem for the linear fragment can be reduced to an equi-satisfiable problem for RELTL. This makes it possible to use automatic (albeit incomplete) techniques based on Bounded Model Checking and on Satisfiability Modulo Theory.

The choice of the language is inspired by and validated within a project funded by the European Railway Agency, on the formalization and validation of the European Train Control System specifications. The activity showed that most of the requirements can be formalized in HRELTL, and an experimental evaluation confirmed the practicality of the analyses.

Alessandro Cimatti, Marco Roveri, Stefano Tonetta
Towards Performance Prediction of Compositional Models in Industrial GALS Designs

Systems and Networks on Chips (NoCs) are a prime design focus of many hardware manufacturers. In addition to functional verification, which is a difficult necessity, the chip designers are facing extremely demanding performance prediction challenges, such as the need to estimate the latency of memory accesses over the NoC. This paper attacks this problem in the setting of designing globally asynchronous, locally synchronous systems (GALS). We describe foundations and applications of a combination of compositional modeling, model checking, and Markov process theory, to arrive at a viable approach to compute performance quantities directly on industrial, functionally verified GALS models.

Nicolas Coste, Holger Hermanns, Etienne Lantreibecq, Wendelin Serwe
Image Computation for Polynomial Dynamical Systems Using the Bernstein Expansion

This paper is concerned with the problem of computing the image of a set by a polynomial function. Such image computations constitute a crucial component in typical tools for set-based analysis of hybrid systems and embedded software with polynomial dynamics, which have found applications in various engineering domains. One typical example is the computation of all states reachable from a given set in one step by a continuous dynamics described by a differential or difference equation. We propose a new algorithm for over-approximating such images based on the Bernstein expansion of polynomial functions. The images are stored using template polyhedra. Using a prototype implementation, the performance of the algorithm was demonstrated on two practical systems as well as a number of randomly generated examples.

Thao Dang, David Salinas
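The range-enclosure property that makes the Bernstein expansion useful for over-approximation can be shown in a few lines. This sketch covers only the univariate basis conversion on [0, 1] (the paper's algorithm handles multivariate dynamics and template polyhedra); the example polynomial is invented:

```python
from math import comb

# Key fact: the Bernstein coefficients of a polynomial over [0, 1]
# enclose its range, so min/max of the coefficients give a sound
# over-approximating interval for the image of [0, 1].

def bernstein_coeffs(a):
    """Convert power-basis coefficients a[0..n] (p(x) = sum a[i] x^i)
    to Bernstein coefficients of degree n over [0, 1]."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

# p(x) = x^2 - x, whose exact range on [0, 1] is [-1/4, 0]
b = bernstein_coeffs([0.0, -1.0, 1.0])
print(b)                  # [0.0, -0.5, 0.0]
lo, hi = min(b), max(b)   # enclosing interval [-0.5, 0.0] contains [-1/4, 0]
```

The enclosure is conservative (here [-0.5, 0] rather than the exact [-0.25, 0]) but cheap to compute, and it tightens under degree elevation or subdivision of the domain.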
Cuts from Proofs: A Complete and Practical Technique for Solving Linear Inequalities over Integers

We propose a novel, sound, and complete Simplex-based algorithm for solving linear inequalities over integers. Our algorithm, which can be viewed as a semantic generalization of the branch-and-bound technique, systematically discovers and excludes entire subspaces of the solution space containing no integer points. Our main insight is that by focusing on the defining constraints of a vertex, we can compute a proof of unsatisfiability for the intersection of the defining constraints and use this proof to systematically exclude subspaces of the feasible region with no integer points. We show experimentally that our technique significantly outperforms the top four competitors in the QF-LIA category of SMT-COMP ’08 when solving linear inequalities over integers.

Isil Dillig, Thomas Dillig, Alex Aiken
Meta-analysis for Atomicity Violations under Nested Locking

We study the problem of determining, given a run of a concurrent program, whether there is any alternate execution of it that violates atomicity, where atomicity is defined using marked blocks of local runs. We show that if a concurrent program adopts nested locking, the problem of predicting atomicity violations is efficiently solvable, without exploring all interleavings. In particular, for the case of atomicity violations involving only two threads and a single variable, which covers many of the atomicity errors reported in bug databases, we exhibit efficient algorithms that work in time that is linear in the length of the runs, and quadratic in the number of threads. Moreover, we report on an implementation of this algorithm, and show experimentally that it scales well for benchmark concurrent programs and is effective in predicting a large number of atomicity violations even from a single run.

Azadeh Farzan, P. Madhusudan, Francesco Sorrentino
An Antichain Algorithm for LTL Realizability

In this paper, we study the structure of the automata-based constructions underlying solutions to the LTL realizability and synthesis problem. We show how to reduce the LTL realizability problem to a game with an observer that checks that the game visits a bounded number of times accepting states of a universal co-Büchi word automaton. We show that such an observer can be made deterministic and that this deterministic observer has a nice structure which can be exploited by an incremental algorithm that manipulates antichains of game positions. We have implemented this new algorithm and our first results are very encouraging.

Emmanuel Filiot, Naiyong Jin, Jean-François Raskin
On Extending Bounded Proofs to Inductive Proofs

We propose a method for extending a bounded resolution proof to an unbounded inductive proof. More specifically, given a resolution proof that a state machine beginning at an initial state satisfies some property at cycle k, we show that the existence of a Δ-invariant cut implies that the property holds for cycles k + Δ, k + 2Δ, etc. We suggest a linear algorithm for identifying such Δ-extendible proofs and develop the required theory for covering all cycles by Δ-extendible proofs. To expose Δ-invariant cuts, we develop an efficient proof manipulation algorithm that rearranges the proof by the natural temporal order. We demonstrate the applicability of our techniques on a few real-life examples.

Oded Fuhrmann, Shlomo Hoory
Games through Nested Fixpoints

In this paper we consider two-player zero-sum payoff games on finite graphs, both in the deterministic as well as in the stochastic setting. In the deterministic setting, we consider total-payoff games, which have been introduced as a refinement of mean-payoff games [10, 18]. In the stochastic setting, our class is a turn-based variant of liminf-payoff games [4, 15, 16]. In both settings, we provide a non-trivial characterization of the values through nested fixpoint equations. The characterization of the values of liminf-payoff games moreover shows that solving liminf-payoff games is polynomial-time reducible to solving stochastic parity games. We construct practical algorithms for solving the occurring nested fixpoint equations based on strategy iteration. As a corollary we obtain that solving deterministic total-payoff games and solving stochastic liminf-payoff games is in UP ∩ co-UP.

Thomas Martin Gawlitza, Helmut Seidl
Complete Instantiation for Quantified Formulas in Satisfiability Modulo Theories

Quantifier reasoning in Satisfiability Modulo Theories (SMT) is a long-standing challenge. The practical method employed in modern SMT solvers is to instantiate quantified formulas based on heuristics, which is not refutationally complete even for pure first-order logic. We present several decidable fragments of first-order logic modulo theories. We show how to construct models for satisfiable formulas in these fragments. For richer undecidable fragments, we discuss conditions under which our procedure is refutationally complete. We also describe useful heuristics based on model checking for prioritizing or avoiding instantiations.

Yeting Ge, Leonardo de Moura
Software Transactional Memory on Relaxed Memory Models

Pseudo-code descriptions of STMs assume sequentially consistent program execution and atomicity of high-level STM operations like read, write, and commit. These assumptions are often violated in realistic settings, as STM implementations run on relaxed memory models, with the atomicity of operations as provided by the hardware. This paper presents the first approach to verify STMs under relaxed memory models with atomicity of 32-bit loads and stores, and read-modify-write operations. We present RML, a new high-level language for expressing concurrent algorithms with a hardware-level atomicity of instructions, and whose semantics is parametrized by various relaxed memory models. We then present our tool, FOIL, which takes as input the RML description of an STM algorithm and the description of a memory model, and automatically determines the locations of fences which, if inserted, ensure the correctness of the STM algorithm under the given memory model. We use FOIL to verify DSTM, TL2, and McRT STM under the memory models of sequential consistency, total store order, partial store order, and relaxed memory order.

Rachid Guerraoui, Thomas A. Henzinger, Vasu Singh
Sliding Window Abstraction for Infinite Markov Chains

We present an on-the-fly abstraction technique for infinite-state continuous-time Markov chains. We consider Markov chains that are specified by a finite set of transition classes. Such models naturally represent biochemical reactions and therefore play an important role in the stochastic modeling of biological systems. We approximate the transient probability distributions at various time instances by solving a sequence of dynamically constructed abstract models, each depending on the previous one. Each abstract model is a finite Markov chain that represents the behavior of the original, infinite chain during a specific time interval. Our approach provides complete information about probability distributions, not just about individual parameters like the mean. The error of each abstraction can be computed, and the precision of the abstraction refined when desired. We implemented the algorithm and demonstrate its usefulness and efficiency on several case studies from systems biology.

Thomas A. Henzinger, Maria Mateescu, Verena Wolf
Centaur Technology Media Unit Verification
Case Study: Floating-Point Addition

We have verified floating-point addition/subtraction instructions for the media unit from Centaur’s 64-bit, X86-compatible microprocessor. This unit implements over one hundred instructions, with the most complex being floating-point addition/subtraction. This media unit can add/subtract four pairs of floating-point numbers every clock cycle with an industry-leading two-cycle latency.

Using the ACL2 theorem proving system, we model the media unit by translating its Verilog design into an HDL that we have deeply embedded in the ACL2 logic. We specify the addition/subtraction instructions as integer-based ACL2 functions. Using a combination of AIG- and BDD-based symbolic simulation, case splitting, and theorem proving, we produce a mechanically checked theorem in ACL2 for each instruction examined stating that the hardware model yields the same result as the instruction specification.

In pursuit of these verifications, we implemented a formal HDL and symbolic simulation framework, including formally verified BDD and AIG operators, within the ACL2 theorem proving system. The HDL includes an extensible interpreter capable of performing concrete and symbolic simulations as well as non-functional analyses. We know of no other symbolic simulation-based floating-point verification that is performed within a single formal system and produces a theorem in that system without relying on unverified external tools.

Warren A. Hunt Jr., Sol Swords
Incremental Instance Generation in Local Reasoning

Many verification approaches use SMT solvers in some form, and are limited by their incomplete handling of quantified formulas. Local reasoning makes it possible to handle SMT problems involving a certain class of universally quantified formulas in a complete way, by instantiation to a finite set of ground formulas. We present a method to generate these instances incrementally, in order to provide a more efficient way of solving these satisfiability problems. The incremental instantiation is guided semantically, inspired by the instance generation approach to first-order theorem proving. Our method is sound and complete, and terminates on both satisfiable and unsatisfiable input after generating a subset of the instances needed in standard local reasoning. Experimental results show that for a large class of examples the incremental approach is substantially more efficient than eager generation of all instances.

Swen Jacobs
Quantifier Elimination via Functional Composition

This paper poses the following basic question: given a quantified Boolean formula ∃x.φ, what should a function/formula f be such that substituting f for x in φ yields a logically equivalent quantifier-free formula? Its answer leads to a solution to quantifier elimination in the Boolean domain, alternative to the conventional approach based on formula expansion. Such a composite function can be effectively derived using symbolic techniques and further simplified for practical applications. In particular, we explore Craig interpolation for scalable computation. This compositional approach to quantifier elimination is analyzably superior to the conventional one under certain practical assumptions. Experiments demonstrate the scalability of the approach. Several large problem instances unsolvable before can now be resolved effectively. A generalization to first-order logic characterizes a composite function’s complete flexibility, which awaits further exploitation to simplify quantifier elimination beyond the propositional case.

Jie-Hong R. Jiang
Monotonic Partial Order Reduction: An Optimal Symbolic Partial Order Reduction Technique

We present a new technique called Monotonic Partial Order Reduction (MPOR) that effectively combines dynamic partial order reduction with symbolic state-space exploration for model checking concurrent software. Our technique hinges on a new characterization of the partial orders defined by computations of a concurrent program in terms of quasi-monotonic sequences of thread-ids. This characterization, which is of independent interest, can be used for both explicit and symbolic model checking. For symbolic model checking, MPOR works by adding constraints to allow automatic pruning of redundant interleavings in a SAT/SMT-solver-based search by restricting the interleavings explored to the set of quasi-monotonic sequences. Quasi-monotonicity guarantees both soundness (all necessary interleavings are explored) and optimality (no redundant interleaving is explored) and is, to the best of our knowledge, the only known optimal symbolic POR technique.

Vineet Kahlon, Chao Wang, Aarti Gupta
Replacing Testing with Formal Verification in Intel® Core™ i7 Processor Execution Engine Validation

Formal verification of arithmetic datapaths has been part of the established methodology for most Intel processor designs over the last years, usually in the role of supplementing more traditional coverage-oriented testing activities. For the recent Intel® Core™ i7 design we took a step further and used formal verification as the primary validation vehicle for the core execution cluster, the component responsible for the functional behaviour of all microinstructions. We applied symbolic simulation based formal verification techniques for full datapath, control and state validation for the cluster, and dropped coverage driven testing entirely. The project, involving some twenty person-years of verification work, is one of the most ambitious formal verification efforts in the hardware industry to date. Our experiences show that under the right circumstances, full formal verification of a design component is a feasible, industrially viable and competitive validation approach.

Roope Kaivola, Rajnish Ghughal, Naren Narasimhan, Amber Telfer, Jesse Whittemore, Sudhindra Pandav, Anna Slobodová, Christopher Taylor, Vladimir Frolov, Erik Reeber, Armaghan Naik
Generating and Analyzing Symbolic Traces of Simulink/Stateflow Models

We present a methodology and a toolkit for improving simulation coverage of Simulink/Stateflow models of hybrid systems using symbolic analysis of simulation traces. We propose a novel instrumentation scheme that allows the simulation engine of Simulink/Stateflow to output, along with the concrete simulation trace, the symbolic transformers needed for our analysis. Given a simulation trace, along with the symbolic transformers, our analysis computes a set of initial states that would lead to traces with the same sequence of discrete components at each step of the simulation. Such an analysis relies critically on the use of convex polyhedra to represent sets of states. However, the exponential complexity of the polyhedral operations implies that the performance of the analysis would degrade rapidly with the increasing size of the model and the simulation traces. We propose a new representation, called the bounded vertex representation, which allows us to perform under-approximate computations while fixing the complexity of the representation a priori. Using this representation we achieve a trade-off between the complexity of the symbolic computation and the quality of the under-approximation. We demonstrate the benefits of our approach over existing simulation and verification methods with case studies.

Aditya Kanade, Rajeev Alur, Franjo Ivančić, S. Ramesh, Sriram Sankaranarayanan, K. C. Shashidhar
A Markov Chain Monte Carlo Sampler for Mixed Boolean/Integer Constraints

We describe a Markov chain Monte Carlo (MCMC)-based algorithm for sampling solutions to mixed Boolean/integer constraint problems. The focus of this work differs in two respects from traditional satisfiability modulo theories (SMT) solvers, which are aimed at deciding whether a given set of constraints is satisfiable: first, our approach targets constraint problems that have a large solution space and thus are relatively easy to satisfy, and second, it aims at efficiently producing a large number of samples with a given (e.g. uniform) distribution over the solution space. Our work is motivated by the need for such samplers in constrained random simulation for hardware verification, where the set of valid input stimuli is specified by a “testbench” using declarative constraints. MCMC sampling is commonly applied in statistics and numerical computation. We discuss how an MCMC sampler can be adapted to the given application, specifically, how to deal with non-connected solution spaces, efficiently process equality and disequality constraints, handle state-dependent constraints, and avoid correlation of consecutive samples. We present a set of experiments to analyze the performance of the proposed approach.

Nathan Kitchen, Andreas Kuehlmann
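As a rough, hypothetical illustration of the sampling setting (this is not the authors' algorithm; the constraint, proposal moves, and names are all invented for the sketch), a Metropolis-style random walk with a uniform target accepts a proposed move exactly when it stays inside the solution space:

```python
import random

def satisfies(x, y):
    # Invented toy constraint standing in for a mixed Boolean/integer
    # problem: bounded integers whose sum is even, with x distinct from y.
    return 0 <= x < 16 and 0 <= y < 16 and (x + y) % 2 == 0 and x != y

def mcmc_samples(n, seed=0):
    # Metropolis sampling with a uniform target over the solution space:
    # the acceptance ratio is 1 for solutions and 0 otherwise, so a
    # rejected proposal repeats the current sample.
    rng = random.Random(seed)
    x, y = 0, 2                      # a known solution to start from
    samples = []
    for _ in range(n):
        nx, ny = x, y
        if rng.random() < 0.5:       # local proposal: step one variable
            nx += rng.choice([-2, 2])
        else:
            ny += rng.choice([-2, 2])
        if satisfies(nx, ny):        # accept only inside the solution space
            x, y = nx, ny
        samples.append((x, y))
    return samples

samples = mcmc_samples(1000)
```

Note that the step size of 2 keeps this walk inside one parity class; the non-connected solution spaces mentioned in the abstract are exactly what makes naive local proposals like this one insufficient in general.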
Generalizing DPLL to Richer Logics

The DPLL approach to the Boolean satisfiability problem (SAT) is a combination of search for a satisfying assignment and logical deduction, in which each process guides the other. We show that this approach can be generalized to a richer class of theories. In particular, we present an alternative to lazy SMT solvers, in which DPLL is used only to find propositionally satisfying assignments, whose feasibility is checked by a separate theory solver. Here, DPLL is applied directly to the theory. We search in the space of theory structures (for example, numerical assignments) rather than propositional assignments. This makes it possible to use conflict in model search to guide deduction in the theory, much in the way that it guides propositional resolution in DPLL. Some experiments using linear rational arithmetic demonstrate the potential advantages of the approach.

Kenneth L. McMillan, Andreas Kuehlmann, Mooly Sagiv
Reducing Context-Bounded Concurrent Reachability to Sequential Reachability

We give a translation from concurrent programs to sequential programs that reduces the context-bounded reachability problem in the concurrent program to a reachability problem in the sequential one. The translation has two salient features: (a) the sequential program tracks, at any time, the local state of only one thread (though it does track multiple copies of shared variables), and (b) all reachable states of the sequential program correspond to reachable states of the concurrent program.

We also implement our transformation in the setting of concurrent recursive programs with finite data domains, and show that the resulting sequential program can be model-checked efficiently using existing recursive sequential program reachability tools.

Salvatore La Torre, P. Madhusudan, Gennaro Parlato
Intra-module Inference

Contract-based property checkers hold the potential for precise, scalable, and incremental reasoning. However, it is difficult to apply such checkers to large program modules because they require programmers to provide detailed contracts, including an interface specification, module invariants, and internal specifications. We argue that given a suitably rich assertion language, modest effort suffices to document the interface specification and the module invariants. However, the burden of providing internal specifications is still significant and remains a deterrent to the use of contract-based checkers. Therefore, we consider the problem of intra-module inference, which aims to infer annotations for internal procedures and loops, given the interface specification and the module invariants. We provide simple and scalable techniques to search for a broad class of desired internal annotations, comprising quantifiers and Boolean connectives, guided by the module specification. We have validated our ideas by building a prototype verifier and using it to verify several properties on Windows device drivers with zero false alarms and small annotation overhead. These drivers are complex; they contain thousands of lines and use dynamic data structures such as linked lists and arrays. Our technique significantly improves the soundness, precision, and coverage of verification of these programs compared to earlier techniques.

Shuvendu K. Lahiri, Shaz Qadeer, Juan P. Galeotti, Jan W. Voung, Thomas Wies
Static and Precise Detection of Concurrency Errors in Systems Code Using SMT Solvers

Context-bounded analysis is an attractive approach to the verification of concurrent programs. Bounding the number of contexts executed per thread not only reduces the asymptotic complexity of the analysis, but also means that the complexity grows gradually from that of checking a purely sequential program.

Lal and Reps [14] provided a method for reducing the context-bounded verification of a concurrent boolean program to the verification of a sequential boolean program, thereby allowing sequential reasoning to be employed for verifying concurrent programs. In this work, we adapt the encoding to work for systems programs written in C with the heap and accompanying low-level operations such as pointer arithmetic and casts. Our approach is completely automatic: we use a verification condition generator and SMT solvers, instead of a boolean model checker, in order to avoid manual extraction of boolean programs and false alarms introduced by the abstraction. We demonstrate the use of field slicing for improving the scalability and (in some cases) coverage of our checking. We evaluate our tool Storm on a set of real-world Windows device drivers, and have discovered a bug that could not be detected by extensive application of previous tools.

Shuvendu K. Lahiri, Shaz Qadeer, Zvonimir Rakamarić
Predecessor Sets of Dynamic Pushdown Networks with Tree-Regular Constraints

Dynamic Pushdown Networks (DPNs) are a model for parallel programs with (recursive) procedures and process creation. The goal of this paper is to develop generic techniques for more expressive reachability analysis of DPNs.

In the first part of the paper we introduce a new tree-based view on executions. Traditional interleaving semantics model executions by totally ordered sequences. Instead, we model an execution by a partially ordered set of rule applications that specifies only the per-process ordering and the causality due to process creation, but no ordering between rule applications of processes that run in parallel. Tree-based executions allow us to compute predecessor sets of regular sets of DPN configurations relative to (tree-) regular constraints on executions. The corresponding problem for interleaved executions is not effectively solvable.

In the second part of the paper, we extend DPNs with (well-nested) locks. We generalize Kahlon and Gupta’s technique of acquisition histories to DPNs, and apply the results of the first part of this paper to compute lock-sensitive predecessor sets.

Peter Lammich, Markus Müller-Olm, Alexander Wenner
Reachability Analysis of Hybrid Systems Using Support Functions

This paper deals with conservative reachability analysis of a class of hybrid systems with continuous dynamics described by linear differential inclusions, convex invariants and guards, and linear reset maps. We present an approach for computing over-approximations of the set of reachable states. It is based on the notion of support function and thus it allows us to consider invariants, guards and constraints on continuous inputs and initial states defined by arbitrary compact convex sets. We show how the properties of support functions make it possible to derive an effective algorithm for approximate reachability analysis of hybrid systems. We use our approach on some examples including the navigation benchmark for hybrid systems verification.

Colas Le Guernic, Antoine Girard
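The central notion can be made concrete with a small sketch (an illustration of support functions in general, not of the tool itself; the ball example and function names are ours). The support function of a convex set S maps a direction d to max over x in S of ⟨d, x⟩, and sampling it in finitely many directions yields an over-approximating polyhedron:

```python
import math

def support_ball(center, radius, d):
    # Support function of a Euclidean ball: <d, c> + r * ||d||.
    norm = math.hypot(d[0], d[1])
    return center[0] * d[0] + center[1] * d[1] + radius * norm

def support_linear_image(support, A, d):
    # Key closure property: h_{A.S}(d) = h_S(A^T d), so the image of S
    # under a linear map is handled without computing the set itself.
    ATd = (A[0][0] * d[0] + A[1][0] * d[1],
           A[0][1] * d[0] + A[1][1] * d[1])
    return support(ATd)

# Over-approximate the unit ball rotated by 45 degrees, in 8 directions:
# each sampled value is one face of the bounding polyhedron.
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
A = [[c, -s], [s, c]]
dirs = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
        for k in range(8)]
ball = lambda d: support_ball((0.0, 0.0), 1.0, d)
bounds = [support_linear_image(ball, A, d) for d in dirs]
# A rotation maps the unit ball to itself, so each bound equals 1.
```

The closure property shown in `support_linear_image` is what makes the representation attractive for linear dynamics: sets are never constructed explicitly, only queried in chosen directions.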
Reducing Test Inputs Using Information Partitions

Automatic symbolic techniques to generate test inputs, for example, through concolic execution, suffer from path explosion: the number of paths to be symbolically solved for grows exponentially with the number of inputs. In many applications, though, the inputs can be partitioned into “non-interfering” blocks such that symbolically solving for each input block while keeping all other blocks fixed to concrete values can find the same set of assertion violations as symbolically solving for the entire input. This can greatly reduce the number of paths to be solved (in the best case, from exponentially many to linearly many in the number of inputs). We present an algorithm that combines test input generation by concolic execution with dynamic computation and maintenance of information flow between inputs. Our algorithm iteratively constructs a partition of the inputs, starting with the finest (all inputs separate) and merging blocks if a dependency is detected between variables in distinct input blocks during test generation. Instead of exploring all paths of the program, our algorithm separately explores paths for each block (while fixing variables in other blocks to random values). In the end, the algorithm outputs an input partition and a set of test inputs such that (a) inputs in different blocks do not have any dependencies between them, and (b) the set of tests provides equivalent coverage with respect to finding assertion violations as full concolic execution. We have implemented our algorithm in the Splat test generation tool. We demonstrate that our reduction is effective by generating tests for four examples in packet processing and operating system code.
Rupak Majumdar, Ru-Gang Xu
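The iterative coarsening of the input partition can be pictured with a union-find structure (a generic sketch under our own naming, not the Splat implementation):

```python
class InputPartition:
    """Maintain a partition of input names, starting from the finest."""

    def __init__(self, inputs):
        self.parent = {x: x for x in inputs}   # each input in its own block

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def merge(self, x, y):
        # Called when test generation detects a dependency between
        # variables in two distinct blocks (e.g. one branch condition
        # mentions both inputs).
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[rx] = ry

    def blocks(self):
        out = {}
        for x in self.parent:
            out.setdefault(self.find(x), set()).add(x)
        return sorted(sorted(b) for b in out.values())

p = InputPartition(["a", "b", "c", "d"])
p.merge("a", "b")      # a dependency between a and b was observed
p.merge("c", "d")
print(p.blocks())      # prints [['a', 'b'], ['c', 'd']]
```

Concolic exploration then proceeds per block, so the cost is governed by the largest block rather than the full input vector.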
On Using Floating-Point Computations to Help an Exact Linear Arithmetic Decision Procedure

We consider the decision problem for quantifier-free formulas whose atoms are linear inequalities interpreted over the reals or rationals. This problem may be decided using satisfiability modulo theory (SMT), using a mixture of a SAT solver and a simplex-based decision procedure for conjunctions. State-of-the-art SMT solvers use simplex implementations over rational numbers, which perform well for typical problems arising from model-checking and program analysis (sparse inequalities, small coefficients) but are slow for other applications (denser problems, larger coefficients).

We propose a simple preprocessing phase that can be adapted to existing SMT solvers and that may be optionally triggered. Despite using floating-point computations, our method is sound and complete — it merely affects efficiency. We implemented the method and provide benchmarks showing that this change brings a naive and slow decision procedure (“textbook simplex” with rational numbers) up to the efficiency of recent SMT solvers, over test cases arising from model-checking, and makes it significantly faster than state-of-the-art SMT solvers on dense examples.

David Monniaux
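The division of labor described here — floating point for speed, rationals for soundness — can be sketched as a guess-and-check loop (a deliberately simplified stand-in: `float_guess` below is a toy heuristic, not a simplex implementation, and the constraint encoding is ours):

```python
from fractions import Fraction

def check_exact(constraints, point):
    # Each constraint is (coeffs, bound), meaning sum(c*x) <= bound.
    # All arithmetic here is exact rational arithmetic.
    return all(sum(Fraction(c) * x for c, x in zip(cs, point)) <= Fraction(b)
               for cs, b in constraints)

def float_guess(constraints):
    # Stand-in for a fast floating-point solver: try a few candidate
    # points in float arithmetic, with a tolerance for rounding error.
    for cand in [(0.0, 0.0), (1.0, 1.0), (0.5, 0.25)]:
        if all(sum(c * x for c, x in zip(cs, cand)) <= b + 1e-9
               for cs, b in constraints):
            return cand
    return None

constraints = [((1, 1), 1), ((-1, 0), 0), ((0, -1), 0)]  # x+y<=1, x>=0, y>=0
guess = float_guess(constraints)
# Round the float guess to exact rationals and verify soundly; a real
# procedure would fall back to exact simplex if the check failed.
exact_point = tuple(Fraction(x).limit_denominator(1000) for x in guess)
```

The float phase only proposes; since every accepted answer is re-verified exactly, rounding errors can cost time but never soundness.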
Cardinality Abstraction for Declarative Networking Applications

Declarative networking is a recent, viable approach to making distributed programming easier, which is becoming increasingly popular in the systems and networking community. It offers the programmer a declarative, rule-based language, called P2, for writing distributed applications in an abstract, yet expressive way. This approach, however, imposes new challenges on analysis and verification methods when they are applied to P2 programs. Reasoning about P2 computations is beyond the scope of existing tools since it requires handling of program states defined in terms of collections of relations, which store the application data, together with multisets of tuples, which represent communication events in-flight. In this paper, we propose a cardinality abstraction technique that can be used to analyze and verify P2 programs. It keeps track of the size of relations (together with projections thereof) and multisets defining P2 states, and provides an appropriate treatment of declarative operations, e.g., indexing, unification, variable binding, and negation. Our cardinality abstraction-based verifier successfully proves critical safety properties of a P2 implementation of the Byzantine fault tolerance protocol Zyzzyva, which is a representative and complex declarative networking application.

Juan Antonio Navarro Pérez, Andrey Rybalchenko, Atul Singh
Equivalence Checking of Static Affine Programs Using Widening to Handle Recurrences

Designers often apply manual or semi-automatic loop and data transformations on array- and loop-intensive programs to improve performance. Such transformations must preserve functionality, however, and this paper presents an automatic method for constructing equivalence proofs for the class of static affine programs. The equivalence checking is performed on a dependence graph abstraction and uses a new approach based on widening to handle recurrences. Unlike transitive closure based approaches, this widening approach can also handle non-uniform recurrences. The implementation is publicly available and is the first of its kind to fully support commutative operations.

Sven Verdoolaege, Gerda Janssens, Maurice Bruynooghe

Tool Papers

D-Finder: A Tool for Compositional Deadlock Detection and Verification

The D-Finder tool implements a compositional method for the verification of component-based systems described in the BIP language, encompassing multi-party interaction. For deadlock detection, D-Finder applies proof strategies to eliminate potential deadlocks by computing increasingly stronger invariants.

Saddek Bensalem, Marius Bozga, Thanh-Hung Nguyen, Joseph Sifakis
HybridFluctuat: A Static Analyzer of Numerical Programs within a Continuous Environment

A new static analyzer is described, based on the analyzer Fluctuat. Its goal is to synthesize invariants for hybrid systems, encompassing a continuous environment described by a system of possibly switched ODEs and an ANSI C program in interaction with it. The evolution of the continuous environment is over-approximated using a guaranteed integrator that we developed, and special assertions are added to the program that simulate the action of sensors and actuators, making the continuous environment and the program communicate. We demonstrate our approach on an industrial case study, a part of the flight control software of ASTRIUM’s Automated Transfer Vehicle (ATV).

Olivier Bouissou, Eric Goubault, Sylvie Putot, Karim Tekkal, Franck Vedrine
The Zonotope Abstract Domain Taylor1+

Static analysis by abstract interpretation [1] aims at automatically inferring properties on the behaviour of programs. We focus here on a specific kind of numerical invariants: the set of values taken by numerical variables, with a real numbers semantics, at each control point of a program.

Khalil Ghorbal, Eric Goubault, Sylvie Putot
InvGen: An Efficient Invariant Generator

In this paper we present InvGen, an automatic linear arithmetic invariant generator for imperative programs. InvGen’s unique feature is its use of dynamic analysis to make invariant generation an order of magnitude more efficient.

Ashutosh Gupta, Andrey Rybalchenko
INFAMY: An Infinite-State Markov Model Checker

The design of complex concurrent systems often involves intricate performance and dependability considerations. Continuous-time Markov chains (CTMCs) are a widely used modeling formalism, where performance and dependability properties are analyzable by model checking. We present INFAMY, a model checker for arbitrarily structured infinite-state CTMCs. It checks probabilistic timing properties expressible in continuous stochastic logic (CSL). Conventional model checkers explore the given model exhaustively, which is often costly, due to state explosion, and impossible if the model is infinite. INFAMY only explores the model up to a finite depth, with the depth bound being computed on-the-fly. The computation of depth bounds is configurable to adapt to the characteristics of different classes of models.

Ernst Moritz Hahn, Holger Hermanns, Björn Wachter, Lijun Zhang
Browser-Based Enforcement of Interface Contracts in Web Applications with BeepBeep

BeepBeep is a lightweight runtime monitor for Ajax web applications. Interface specifications are expressed internally in an extension of LTL with first-order quantification; they can be transparently enforced on the client side using a small and invisible Java applet. Violations of the specification are reported on-the-fly and prevent erroneous or out-of-sequence XML messages from reaching the server.

Sylvain Hallé, Roger Villemaire
Homer: A Higher-Order Observational Equivalence Model checkER

We present Homer, an observational-equivalence model checker for the 3rd-order fragment of Idealized Algol (IA) augmented with iteration. It works by first translating terms of the fragment into a precise representation of their game semantics as visibly pushdown automata (VPA). The VPA-translates are then passed to a VPA toolkit (which we have implemented) to test for equivalence. Thanks to the fully abstract game semantics, observational equivalence of these IA-terms reduces to the VPA Equivalence Problem. Our checker is thus sound and complete; because it model checks open terms, our approach is also compositional. Further, if the terms are inequivalent, Homer will produce both a game-semantic and an operational-semantic counter-example, in the form of a play and a separating context respectively. We showcase these features on a number of examples and (where appropriate) compare its performance with similar tools. To the best of our knowledge, Homer is the first implementation of a model checker of 3rd-order programs.

David Hopkins, C. -H. Luke Ong
Apron: A Library of Numerical Abstract Domains for Static Analysis

This article describes Apron, a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.

Bertrand Jeannet, Antoine Miné
Beaver: Engineering an Efficient SMT Solver for Bit-Vector Arithmetic

We present the key ideas in the design and implementation of Beaver, an SMT solver for quantifier-free finite-precision bit-vector logic (QF_BV). Beaver uses an eager approach, encoding the original SMT problem into a Boolean satisfiability (SAT) problem using a series of word-level and bit-level transformations. In this paper, we describe the most effective transformations, such as propagating constants and equalities at the word-level, and using and-inverter graph rewriting techniques at the bit-level. We highlight implementation details of these transformations that distinguish Beaver from other solvers. We present an experimental analysis of the effectiveness of Beaver’s techniques on both hardware and software benchmarks with a selection of back-end SAT solvers.

Beaver is an open-source tool implemented in OCaml; it is usable with any back-end SAT engine and has a well-documented, extensible code base that can be used to experiment with new algorithms and techniques.

Susmit Jha, Rhishikesh Limaye, Sanjit A. Seshia
CalFuzzer: An Extensible Active Testing Framework for Concurrent Programs

Active testing has recently been introduced to effectively test concurrent programs. Active testing works in two phases. It first uses predictive off-the-shelf static or dynamic program analyses to identify potential concurrency bugs, such as data races, deadlocks, and atomicity violations. In the second phase, active testing uses the reports from these predictive analyses to explicitly control the underlying scheduler of the concurrent program to accurately and quickly discover real concurrency bugs, if any, with very high probability and little overhead. In this paper, we present an extensible framework for active testing of Java programs. The framework currently implements three active testers based on data races, atomic blocks, and deadlocks.

Pallavi Joshi, Mayur Naik, Chang-Seo Park, Koushik Sen
MCMAS: A Model Checker for the Verification of Multi-Agent Systems

While temporal logic in its various forms has proven essential to reason about reactive systems, agent-based scenarios are typically specified by considering high-level agents’ attitudes. In particular, specification languages based on epistemic logic [7], or logics for knowledge, have proven useful in a variety of areas including robotics, security protocols, web-services, etc. For example, security specifications involving anonymity [4] are known to be naturally expressible in epistemic formalisms as they explicitly state the lack of different kinds of knowledge of the principals.

Alessio Lomuscio, Hongyang Qu, Franco Raimondi
TASS: Timing Analyzer of Scenario-Based Specifications

In this paper, we present TASS, a timing analyzer of scenario-based specifications. TASS accepts UML 2.0 interaction models with general and expressive timing constraints and can be used for three kinds of timing analysis problems: the reachability analysis, the constraint conformance analysis, and the bounded delay analysis. The underlying technique of TASS is to reduce the timing analysis problems to linear programming problems.

Minxue Pan, Lei Bu, Xuandong Li
Translation Validation: From Simulink to C

Translation validation is a technique for formally establishing the semantic equivalence of the source and the target of a code generator. In this work we present a translation validation tool for the Real-Time Workshop code generator, which receives as input Simulink models and generates optimized C code.

Michael Ryabtsev, Ofer Strichman
VS3: SMT Solvers for Program Verification

We present VS3, a tool that automatically verifies complex properties of programs and infers maximally weak preconditions and maximally strong postconditions by leveraging the power of SMT solvers. VS3 discovers program invariants with arbitrary, but prespecified, quantification and logical structure. The user supplies VS3 with a set of predicates and invariant templates. VS3 automatically finds instantiations of the unknowns in the templates as subsets of the predicate set. We have used VS3 to automatically verify ∀∃ properties of programs and to infer worst-case upper bounds and preconditions for functional correctness.

Saurabh Srivastava, Sumit Gulwani, Jeffrey S. Foster
PAT: Towards Flexible Verification under Fairness

Recent development on distributed systems has shown that a variety of fairness constraints (some of which are only recently defined) play vital roles in designing self-stabilizing population protocols. Current practice of system analysis is, however, deficient under fairness. In this work, we present PAT, a toolkit for flexible and efficient system analysis under fairness. A unified algorithm is proposed to effectively model check systems with a variety of fairness constraints in two different settings. Empirical evaluation shows that PAT complements existing model checkers in terms of fairness. We report that previously unknown bugs have been revealed using PAT against systems functioning under strong global fairness.

Jun Sun, Yang Liu, Jin Song Dong, Jun Pang
A Concurrent Portfolio Approach to SMT Solving

With the availability of multi-core processors and large-scale computing clusters, the study of parallel algorithms has been revived throughout the industry. We present a portfolio approach to deciding the satisfiability of SMT formulas, based on the recent success of related algorithms for the SAT problem. Our parallel version of Z3 outperforms the sequential solver, with speedups of well over an order of magnitude on many benchmarks.

Christoph M. Wintersteiger, Youssef Hamadi, Leonardo de Moura
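The portfolio idea itself fits in a few lines (a toy sketch using Python threads, not Z3’s parallel architecture; solver names, delays, and answers are invented):

```python
import concurrent.futures
import time

def solver(name, delay, answer):
    # Stands in for one solver configuration working on the same formula;
    # the sleep models differing solving times across configurations.
    time.sleep(delay)
    return name, answer

with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(solver, "config-A", 0.2, "sat"),
               pool.submit(solver, "config-B", 0.05, "sat")]
    # Take the first answer that arrives; the portfolio's speedup comes
    # from whichever configuration happens to be fastest on this input.
    done = next(concurrent.futures.as_completed(futures))
    winner, result = done.result()
    for f in futures:
        f.cancel()  # best effort: already-running workers finish anyway

print(winner, result)  # the faster configuration, config-B, wins here
```

Real portfolio solvers additionally share learned clauses between workers, which this sketch omits entirely.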
Backmatter
Metadata
Title
Computer Aided Verification
Edited by
Ahmed Bouajjani
Oded Maler
Copyright Year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-02658-4
Print ISBN
978-3-642-02657-7
DOI
https://doi.org/10.1007/978-3-642-02658-4