
2007 | Book

Computer Aided Verification

19th International Conference, CAV 2007, Berlin, Germany, July 3-7, 2007. Proceedings

Editors: Werner Damm, Holger Hermanns

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

About this book

This volume contains the proceedings of the International Conference on Computer Aided Verification (CAV), held in Berlin, Germany, July 3–7, 2007. CAV 2007 was the 19th in a series of conferences dedicated to the advancement of the theory and practice of computer-assisted formal analysis methods for software and hardware systems. The conference covers the spectrum from theoretical results to concrete applications, with an emphasis on practical verification tools and the algorithms and techniques that are needed for their implementation.

We received 134 regular paper submissions and 39 tool paper submissions. Of these, the Program Committee selected 33 regular papers and 14 tool papers. Each submission was reviewed by at least three members of the Program Committee. The reviewing process included a PC review meeting, and – for the first time in the history of CAV – an author feedback period. About 50 additional reviews were provided by experts external to the Program Committee to assure a high-quality selection.

The CAV 2007 program included three invited talks from industry:

– Byron Cook (Microsoft Research) on Automatically Proving Program Termination,
– David Russinoff (AMD) on A Mathematical Approach to RTL Verification, and
– Thomas Kropf (Bosch) on Software Bugs Seen from an Industrial Perspective.

Table of Contents

Frontmatter

Invited Talks

Automatically Proving Program Termination
(Invited Talk)

In this talk I will describe new tools that allow us to automatically prove termination and other liveness properties of software systems. In particular I will discuss the Terminator program termination prover and its application to the problem of showing that Windows device driver event-handling routines always eventually stop responding to events.

Byron Cook
A Mathematical Approach to RTL Verification
(Invited Talk)

The formal hardware verification effort at Advanced Micro Devices, Inc. has emphasized theorem proving using ACL2, and has focused on the elementary floating-point operations. Floating-point modules, along with the rest of our microprocessor designs, are specified at the register-transfer level in a small synthesizable subset of Verilog. This language is simple enough to admit a clear semantic definition, providing a basis for formal analysis and verification. Thus, we have developed a scheme for automatically translating RTL code into the ACL2 logic, thereby reducing the potential for error in the development of formal hardware models.

David M. Russinoff
Software Bugs Seen from an Industrial Perspective or Can Formal Methods Help on Automotive Software Development?
(Invited Talk)

Developing software for automotive applications is a challenging task. To stay competitive, conflicting goals must be met: complex and innovative algorithms with many versions for different car line variants have to be implemented within the tight resource boundaries of embedded systems; high reliability, especially for safety-critical applications like airbag or braking applications, has to be ensured under immense cost pressure. Despite these demanding constraints, automotive software development has in recent years made significant progress in terms of productivity and quality. All this has been achieved without direct usage of formal methods.

Thomas Kropf

Invited Tutorials

Algorithms for Interface Synthesis
(Invited Tutorial)

A temporal interface for a software component is a finite automaton that specifies the legal sequences of calls to functions that are provided by the component. We compare and evaluate three different algorithms for automatically extracting temporal interfaces from program code: (1) a game algorithm that computes the interface as a representation of the most general environment strategy to avoid a safety violation; (2) a learning algorithm that repeatedly queries the program to construct the minimal interface automaton; and (3) a CEGAR algorithm that iteratively refines an abstract interface hypothesis by adding relevant program variables. For comparison purposes, we present and implement the three algorithms in a unifying formal setting. While the three algorithms compute the same output and have similar worst-case complexities, their actual running times may differ considerably for a given input program. On the theoretical side, we provide for each of the three algorithms a family of input programs on which that algorithm outperforms the two alternatives. On the practical side, we evaluate the three algorithms experimentally on a variety of Java libraries.

Dirk Beyer, Thomas A. Henzinger, Vasu Singh
A Tutorial on Satisfiability Modulo Theories
(Invited Tutorial)

Solvers for satisfiability modulo theories (SMT) check the satisfiability of first-order formulas containing operations from various theories such as the Booleans, bit-vectors, arithmetic, arrays, and recursive datatypes. SMT solvers are extensions of Boolean satisfiability solvers (SAT solvers) that check the satisfiability of formulas built from Boolean variables and operations. SMT solvers have a wide range of applications in hardware and software verification, extended static checking, constraint solving, planning, scheduling, test case generation, and computer security. We briefly survey the theory of SAT and SMT solving, and present some of the key algorithms in the form of pseudocode. This tutorial presentation is primarily directed at those who wish to build satisfiability solvers or to use existing solvers more effectively.

Leonardo de Moura, Bruno Dutertre, Natarajan Shankar
A JML Tutorial: Modular Specification and Verification of Functional Behavior for Java
(Invited Tutorial)

JML, the Java Modeling Language, is the lingua franca of researchers working on specification and verification techniques and tools for Java. There are over 23 research groups worldwide working on various aspects of the JML project. These groups have built a large suite of tools for automated checking and verification (see http://jmlspecs.org).

Gary T. Leavens, Joseph R. Kiniry, Erik Poll
Verification of Hybrid Systems
(Invited Tutorial)

Embedded digital systems have become ubiquitous in everyday life. Many such systems, including many of the safety-critical ones, operate within or comprise tightly coupled networks of both discrete-state and continuous-state components. The behavior of such hybrid discrete-continuous systems cannot be fully understood without explicitly modeling and analyzing the tight interaction of their discrete switching behavior and their continuous dynamics, as mutual feedback confines fully separate analysis to limited cases. Tools for building such integrated models and for simulating their approximate dynamics are commercially available, e.g. Simulink with the Stateflow extension. Simulation is, however, inherently incomplete and has to be complemented by verification, which amounts to showing that the coupled dynamics of the embedded system and its environment is well-behaved, regardless of the actual disturbance and the influences of the application context, as entering through the open inputs of the system under investigation. Basic notions of being well-behaved demand that the system under investigation may never reach an undesirable state (safety), that it will converge to a certain set of states (stabilization), or that it can be guaranteed to eventually reach a desirable state (progress).

Martin Fränzle

Session I: Compositionality

SAT-Based Compositional Verification Using Lazy Learning

A recent approach to automated assume-guarantee reasoning (AGR) for concurrent systems relies on computing environment assumptions for components using the L* algorithm for learning regular languages. While this approach has been investigated extensively for message passing systems, it still remains a challenge to scale the technique to large shared memory systems, mainly because the assumptions have an exponential communication alphabet size. In this paper, we propose a SAT-based methodology that employs both induction and interpolation to implement automated AGR for shared memory systems. The method is based on a new lazy approach to assumption learning, which avoids an explicit enumeration of the exponential alphabet set during learning by using symbolic alphabet clustering and iterative counterexample-driven localized partitioning. Preliminary experimental results on benchmarks in Verilog and SMV are encouraging and show that the approach scales well in practice.

Nishant Sinha, Edmund Clarke
Local Proofs for Global Safety Properties

This paper explores the concept of locality in proofs of global safety properties of asynchronously composed, multi-process programs. Model checking on the full state space is often infeasible due to state explosion. A local proof, in contrast, is a collection of per-process invariants, which together imply the global safety property. Local proofs can be compact, but a central problem is that local reasoning is incomplete. In this paper, we present a “completion” algorithm, which gradually exposes facts about the internal state of components, until either a local proof or a real error is discovered. Experiments show that local reasoning can have significantly better performance than a reachability computation. Moreover, for some parameterized protocols, a local proof can be used to show correctness for all instances.

Ariel Cohen, Kedar S. Namjoshi

Session II: Verification Process

Low-Level Library Analysis and Summarization

Programs typically make extensive use of libraries, including dynamically linked libraries, which are often not available in source-code form, and hence not analyzable by tools that work at source level (i.e., that analyze intermediate representations created from source code). A common approach is to write library models by hand. A library model is a collection of function stubs and variable declarations that capture some aspect of the library code’s behavior. Because these are hand-crafted, they are likely to contain errors, which may cause an analysis to return incorrect results.

This paper presents a method to construct summary information for a library function automatically by analyzing its low-level implementation (i.e., the library’s binary).

Denis Gopan, Thomas Reps
Verification Across Intellectual Property Boundaries

In many industries, the share of software components provided by third-party suppliers is steadily increasing. As the suppliers seek to secure their intellectual property (IP) rights, the customer usually has no direct access to the suppliers’ source code, and is able to enforce the use of verification tools only by legal requirements. In turn, the supplier has no means to convince the customer about successful verification without revealing the source code. This paper presents a new approach to resolve the conflict between the IP interests of the supplier and the quality interests of the customer. We introduce a protocol in which a dedicated server (called the “amanat”) is controlled by both parties: the customer controls the verification task performed by the amanat, while the supplier controls the communication channels of the amanat to ensure that the amanat does not leak information about the source code. We argue that the protocol is both practically useful and mathematically sound. As the protocol is based on well-known (and relatively lightweight) cryptographic primitives, it allows a straightforward implementation on top of existing verification tool chains. To substantiate our security claims, we establish the correctness of the protocol by cryptographic reduction proofs.

Sagar Chaki, Christian Schallhart, Helmut Veith

Session III: Timed Synthesis and Games

On Synthesizing Controllers from Bounded-Response Properties

In this paper we propose a complete chain for synthesizing controllers from high-level specifications. From real-time properties expressed in the logic MTL we generate, under bounded-variability assumptions, deterministic timed automata to which we apply safety synthesis algorithms to derive a controller that satisfies the properties by construction. Some preliminary experimental results are reported.

Oded Maler, Dejan Nickovic, Amir Pnueli
An Accelerated Algorithm for 3-Color Parity Games with an Application to Timed Games

Three-color parity games capture the disjunction of a Büchi and a co-Büchi condition. The most efficient known algorithm for these games is the progress measures algorithm by Jurdziński. We present an acceleration technique that, while leaving the worst-case complexity unchanged, often leads to considerable speed-ups in games arising in practice. As an application, we consider games played in discrete real time, where players should be prevented from stopping time by always choosing moves with delay zero. The time progress condition can be encoded as a three-color parity game. Using the tool Ticc as a platform, we compare the performance of a BDD-based symbolic implementation of the progress measure algorithm with acceleration, and of the symbolic implementation of the classical μ-calculus algorithm of Emerson and Jutla.

Luca de Alfaro, Marco Faella
UPPAAL-Tiga: Time for Playing Games!
(Tool Paper)

In 2005 we proposed the first efficient on-the-fly algorithm for solving games based on timed game automata with respect to reachability and safety properties. The first prototype presented at that time has now matured to a fully integrated tool with dramatic improvements both in terms of performance and the availability of the extended input language of Uppaal-4.0. The new tool can output strategies or let the user play against them both from the command line and from the graphical simulator that was completely re-designed.

Gerd Behrmann, Agnès Cougnard, Alexandre David, Emmanuel Fleury, Kim G. Larsen, Didier Lime
The TASM Toolset: Specification, Simulation, and Formal Verification of Real-Time Systems
(Tool Paper)

In this paper, we describe the features of the Timed Abstract State Machine toolset. The toolset implements the features of the Timed Abstract State Machine (TASM) language, a specification language for reactive real-time systems. The TASM language enables the specification of functional and non-functional properties using a unified language. The toolset incorporates features to create specifications, simulate specifications, and verify formal properties of specifications. Properties that can be verified using the toolset include completeness, consistency, worst-case execution time, and best-case execution time. The toolset is being developed as part of an architecture-based framework for embedded real-time system engineering. We describe how the features of the toolset were used successfully to model and analyze case studies from the aerospace and automotive communities.

Martin Ouimet, Kristina Lundqvist

Session IV: Infinite State Verification

Systematic Acceleration in Regular Model Checking

Regular model checking is a form of symbolic model checking technique for systems whose states can be represented as finite words over a finite alphabet, where regular sets are used as symbolic representation. A major problem in symbolic model checking of parameterized and infinite-state systems is that fixpoint computations to generate the set of reachable states or the set of reachable loops do not terminate in general. Therefore, acceleration techniques have been developed, which calculate the effect of arbitrarily long sequences of transitions generated by some action. We present a systematic method for using acceleration in regular model checking, for the case where each transition changes at most one position in the word; this includes many parameterized algorithms and algorithms on data structures. The method extracts a maximal (in a certain sense) set of actions from a transition relation. These actions, and systematically obtained compositions of them, are accelerated to speed up a fixpoint computation. The extraction can be done on any representation of the transition relation, e.g., as a union of actions or as a single monolithic transducer. Using this approach, we are for the first time able to verify completely automatically both safety and absence of starvation properties for a collection of parameterized synchronization protocols from the literature; for some protocols, we obtain significant improvements in verification time. The results show that symbolic state-space exploration, without using abstractions, is a viable alternative for verification of parameterized systems with a linear topology.

Bengt Jonsson, Mayank Saksena
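The effect of arbitrarily long sequences of a single action, which acceleration captures, can be illustrated on a toy token-passing encoding where system states are words over {t, n}. This invented example computes the closure by naive explicit iteration over a finite set of words; the paper instead computes such closures symbolically on regular sets and transducers:

```python
def post_one(states):
    """One application of the action: the token moves one position right
    (rewrite 'tn' -> 'nt' at any position in the word)."""
    out = set()
    for w in states:
        for i in range(len(w) - 1):
            if w[i] == 't' and w[i + 1] == 'n':
                out.add(w[:i] + 'nt' + w[i + 2:])
    return out

def accelerate(states):
    """Closure of the single action: the effect of arbitrarily many
    applications, computed here by iterating post_one to a fixpoint."""
    reached = set(states)
    frontier = set(states)
    while frontier:
        new = post_one(frontier) - reached
        reached |= new
        frontier = new
    return reached

# Starting with the token leftmost, the closure puts it anywhere.
print(sorted(accelerate({'tnnn'})))
```

With symbolic acceleration the whole closure is obtained in one step per action, which is what lets the fixpoint computation over regular sets terminate where naive iteration would not (e.g. for unbounded word length).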
Parameterized Verification of Infinite-State Processes with Global Conditions

We present a simple and effective approximated backward reachability algorithm for parameterized systems with existentially and universally quantified global conditions. The individual processes operate on unbounded local variables ranging over the natural numbers. In addition, processes may communicate via broadcast, rendez-vous and shared variables. We apply the algorithm to verify mutual exclusion for complex protocols such as Lamport’s bakery algorithm both with and without atomicity conditions, a distributed version of the bakery algorithm, and Ricart-Agrawala’s distributed mutual exclusion algorithm.

Parosh Aziz Abdulla, Giorgio Delzanno, Ahmed Rezine

Session V: Tool Environment

CADP 2006: A Toolbox for the Construction and Analysis of Distributed Processes
(Tool Paper)

CADP (Construction and Analysis of Distributed Processes) [2,3] is a toolbox for specification, rapid prototyping, verification, testing, and performance evaluation of asynchronous systems (concurrent processes with message-passing communication). The developments of CADP during the last five years led to a new release named CADP 2006 “Edinburgh” (as a tribute to the achievements in concurrency theory of the Laboratory for Foundations of Computer Science) that supersedes the previous version CADP 2001.

Hubert Garavel, Radu Mateescu, Frédéric Lang, Wendelin Serwe
jMoped: A Test Environment for Java Programs
(Tool Paper)

We present jMoped [1], a test environment for Java programs. Given a Java method, jMoped can simulate its execution for all possible arguments within a finite range and generate coverage information for these executions. Moreover, it checks for some common Java errors, namely assertion violations, null pointer exceptions, and array bound violations. When an error is found, jMoped finds the arguments that lead to the error. A JUnit [2] test case can also be automatically generated for further testing.

Dejvuth Suwimonteerabuth, Felix Berger, Stefan Schwoon, Javier Esparza
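The core idea, running a method for every argument tuple within a finite range and reporting the inputs that trigger errors, can be sketched as follows. This is a hypothetical Python analogue for illustration only; jMoped itself works on Java bytecode via a pushdown-system encoding rather than by concrete enumeration:

```python
from itertools import product

def check_exhaustively(method, ranges):
    """Run `method` on every argument tuple drawn from the finite ranges,
    collecting the inputs that raise a common runtime error."""
    failures = []
    for args in product(*ranges):
        try:
            method(*args)
        except (AssertionError, IndexError, ZeroDivisionError) as exc:
            failures.append((args, type(exc).__name__))
    return failures

# A buggy accessor: valid only for indices 0..2 of a 3-element array.
failures = check_exhaustively(lambda n: [10, 20, 30][n], [range(6)])
print(failures)   # the three out-of-bounds indices 3, 4, 5
```

Each reported failure pinpoints the argument tuple leading to the error, mirroring how jMoped turns a found error into a concrete JUnit test case.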
Hector: Software Model Checking with Cooperating Analysis Plugins
(Tool Paper)

We present Hector, a software tool for combining different abstraction methods to extract sound models of heap-manipulating imperative programs with recursion. Extracted models may be explored visually and model checked with a wide range of “propositional” temporal logic safety properties, where “propositions” are formulae of a first-order logic with transitive closure and arithmetic (L). Hector uses techniques initiated in [4,5] to wrap up different abstraction methods as modular analysis plugins, and to exchange information about program state between plugins through formulae of L. This approach aims to achieve both (apparently conflicting) advantages of increased precision and modularity. When checking safety properties containing non-independent “propositions”, our model checking algorithm gives greater precision than a naïve three-valued one since it maintains some dependencies.

Nathaniel Charlton, Michael Huth
The Why/Krakatoa/Caduceus Platform for Deductive Program Verification
(Tool Paper)

We present the Why/Krakatoa/Caduceus set of tools for deductive verification of Java and C source code.

Jean-Christophe Filliâtre, Claude Marché

Session VI: Shapes

Shape Analysis for Composite Data Structures

We propose a shape analysis that adapts to some of the complex composite data structures found in industrial systems-level programs. Examples of such data structures include “cyclic doubly-linked lists of acyclic singly-linked lists”, “singly-linked lists of cyclic doubly-linked lists with back-pointers to head nodes”, etc. The analysis introduces the use of generic higher-order inductive predicates describing spatial relationships together with a method of synthesizing new parameterized spatial predicates which can be used in combination with the higher-order predicates. In order to evaluate the proposed approach for realistic programs we have performed experiments on examples drawn from device drivers: the analysis proved safety of the data structure manipulation of several routines belonging to an IEEE 1394 (firewire) driver, and also found several previously unknown memory safety bugs.

Josh Berdine, Cristiano Calcagno, Byron Cook, Dino Distefano, Peter W. O’Hearn, Thomas Wies, Hongseok Yang
Array Abstractions from Proofs

We present a technique for using infeasible program paths to automatically infer Range Predicates that describe properties of unbounded array segments. First, we build proofs showing the infeasibility of the paths, using axioms that precisely encode the high-level (but informal) rules with which programmers reason about arrays. Next, we mine the proofs for Craig Interpolants which correspond to predicates that refute the particular counterexample path. By embedding the predicate inference technique within a Counterexample-Guided Abstraction-Refinement (CEGAR) loop, we obtain a method for verifying data-sensitive safety properties whose precision is tailored in a program- and property-sensitive manner. Though the axioms used are simple, we show that the method suffices to prove a variety of array-manipulating programs that were previously beyond automatic model checkers.

Ranjit Jhala, Kenneth L. McMillan
Context-Bounded Analysis of Multithreaded Programs with Dynamic Linked Structures

Bounded context switch reachability analysis is a useful and efficient approach for detecting bugs in multithreaded programs. In this paper, we address the application of this approach to the analysis of multithreaded programs with procedure calls and dynamic linked structures. We define a program semantics based on concurrent pushdown systems with visible heaps as stack symbols. A visible heap is the part of the heap reachable from global and local variables. We use pushdown analysis techniques to define an algorithm that explores the entire configuration space reachable under given bounds on the number of context switches and the size of visible heaps.

Ahmed Bouajjani, Séverine Fratani, Shaz Qadeer
Revamping TVLA: Making Parametric Shape Analysis Competitive
(Tool Paper)

TVLA is a parametric framework for shape analysis that can be easily instantiated to create different kinds of analyzers for checking properties of programs that use linked data structures. We report on dramatic improvements in TVLA’s performance, which make the cost of parametric shape analysis comparable to that of the most efficient specialized shape-analysis tools (which restrict the class of data structures and programs analyzed) without sacrificing TVLA’s parametricity. The improvements were obtained by employing well-known techniques from the database community to reduce the cost of extracting information from shape descriptors and performing abstract interpretation of program statements and conditions. Compared to the prior version of TVLA, we obtained as much as 50-fold speedup.

Igor Bogudlov, Tal Lev-Ami, Thomas Reps, Mooly Sagiv

Session VII: Concurrent Program Verification

Fast and Accurate Static Data-Race Detection for Concurrent Programs

We present new techniques for fast, accurate and scalable static data race detection in concurrent programs. Focusing our analysis on Linux device drivers allowed us to identify the unique challenges posed by debugging large-scale real-life code and also pinpointed drawbacks in existing race warning generation methods. This motivated the development of new techniques that helped us in improving both the scalability as well as the accuracy of each of the three main steps in a race warning generation system. The first and most crucial step is the automatic discovery of shared variables. Towards that end, we present a new, efficient dataflow algorithm for shared variable detection which is more effective than existing correlation-based techniques that failed to detect the shared variables responsible for data races in the majority of the drivers in our benchmark suite. Secondly, accuracy of race warning generation strongly hinges on the precision of the pointer analysis used to compute aliases for lock pointers. We formulate a new scalable context sensitive alias analysis that effectively combines a divide and conquer strategy with function summarization and is demonstrably more efficient than existing BDD-based techniques. Finally, we provide a new warning reduction technique that leverages lock acquisition patterns to yield provably better warning reduction than existing lockset based methods.

Vineet Kahlon, Yu Yang, Sriram Sankaranarayanan, Aarti Gupta
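For background on the final step above: the classical lockset discipline that the paper's warning reduction improves on can be sketched as follows. This is an Eraser-style check, not the paper's algorithm, and the trace format is invented for illustration: a variable whose accesses share no common lock, and which is touched by more than one thread, draws a warning.

```python
def lockset_warnings(accesses):
    """Eraser-style lockset discipline check.

    accesses: list of (thread, variable, locks_held) tuples.
    A variable whose candidate lockset becomes empty, and which is
    accessed by more than one thread, gets a race warning.
    """
    candidate = {}   # variable -> set of locks held at every access so far
    threads = {}     # variable -> set of accessing threads
    for thread, var, held in accesses:
        held = set(held)
        candidate[var] = candidate.get(var, held) & held
        threads.setdefault(var, set()).add(thread)
    return sorted(v for v in candidate
                  if not candidate[v] and len(threads[v]) > 1)

trace = [
    ("T1", "x", {"L1"}), ("T2", "x", {"L1"}),   # x always under L1: ok
    ("T1", "y", {"L1"}), ("T2", "y", {"L2"}),   # y: no common lock: warning
]
print(lockset_warnings(trace))   # ['y']
```

Pure lockset intersection over-reports in practice (e.g. for initialization phases), which is precisely the kind of imprecision that pattern-based warning reduction targets.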
Parametric and Sliced Causality

Happen-before causal partial orders have been widely used in concurrent program verification and testing. This paper presents a parametric approach to happen-before causal partial orders. Existing variants of happen-before relations can be obtained as instances of the parametric framework. A novel causal partial order, called sliced causality, is then defined also as an instance of the parametric framework, which loosens the obvious but strict happen-before relation by considering static and dynamic dependence information about the program. Sliced causality has been implemented in a runtime predictive analysis tool for Java, named jPredictor, and the evaluation results show that sliced causality can significantly improve the capability of concurrent verification and testing.

Feng Chen, Grigore Roşu
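For background: the strict happen-before relation that sliced causality loosens is classically computed with vector clocks. The sketch below is the textbook vector-clock construction over a message-passing trace, not the paper's sliced variant, and the event format is invented for illustration:

```python
def vector_clocks(events):
    """Assign a vector clock to each event in a trace.

    events: list of (thread, op, target) where op is 'local', 'send' or
    'recv', and target names a message id for send/recv.
    Event i happens-before event j iff clock[i] <= clock[j] pointwise
    (and the clocks differ).
    """
    threads = sorted({t for t, _, _ in events})
    index = {t: k for k, t in enumerate(threads)}
    current = {t: [0] * len(threads) for t in threads}
    last_send = {}
    clocks = []
    for t, op, target in events:
        c = current[t]
        c[index[t]] += 1                      # local step
        if op == 'recv' and target in last_send:
            c[:] = [max(a, b) for a, b in zip(c, last_send[target])]
        if op == 'send':
            last_send[target] = list(c)
        clocks.append(list(c))
    return clocks

def happens_before(ci, cj):
    return ci != cj and all(a <= b for a, b in zip(ci, cj))

trace = [("T1", "local", None), ("T1", "send", "m"),
         ("T2", "recv", "m"), ("T2", "local", None)]
cs = vector_clocks(trace)
print(happens_before(cs[0], cs[3]))   # True: T1's step precedes T2's
```

Sliced causality additionally consults dependence information, so that events ordered by this relation but irrelevant to the property can be reordered, enlarging the set of predicted interleavings.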
Spade: Verification of Multithreaded Dynamic and Recursive Programs
(Tool Paper)

Recently, a number of tools have been developed for software verification. We can for example mention BLAST [HJMS02], SLAM [BR01], KISS [QW04, QR05], ZING [QRR04], and MAGIC [CCG+03, CCG+04, CCK+06]. However, none of these tools can deal with parallelism, communication between parallel processes, dynamic process creation, and recursion at the same time. The tool we propose, called SPADE, allows the automatic analysis of boolean programs presenting all these features. As far as we know, this is the first software model checking tool based on an expressive model that accurately models all these aspects of programs.

Gaël Patin, Mihaela Sighireanu, Tayssir Touili

Session VIII: Reactive Designs

Anzu: A Tool for Property Synthesis
(Tool Paper)

We present the tool Anzu. Anzu takes a formal specification of a design and generates a functionally correct system if one exists. The specification is given as a set of linear temporal logic (LTL) formulas belonging to the class of generalized reactivity of rank 1. Such formulas cover the majority of the formulas used in practice. Anzu is an implementation of the symbolic reactive(1) approach to synthesis by Piterman, Pnueli, and Sa’ar. If the specification is realizable, Anzu provides the user with a Verilog module that represents a correct finite-state system.

Barbara Jobstmann, Stefan Galler, Martin Weiglhofer, Roderick Bloem
RAT: A Tool for the Formal Analysis of Requirements
(Tool Paper)

Formal languages are increasingly used to describe the functional requirements of circuits. Although formal requirements can be hard to understand and subtle, they are seldom the object of verification. In this paper we present our requirement analysis tool, RAT. Our tool supports quality assurance of formal specifications. A designer can interactively explore the requirements’ semantics and automatically check the specification against assertions (which must be satisfied) and possibilities (which describe allowed corner-case behavior). Using RAT, a designer can also investigate the realizability of a specification. RAT was successfully evaluated in several industrial projects.

Roderick Bloem, Roberto Cavada, Ingo Pill, Marco Roveri, Andrei Tchaltsev

Session IX: Parallelisation

Parallelising Symbolic State-Space Generators

Symbolic state-space generators are notoriously hard to parallelise, largely due to the irregular nature of the task. Parallel languages such as Cilk, tailored to irregular problems, have been shown to offer efficient scheduling and load balancing. This paper explores whether Cilk can be used to efficiently parallelise a symbolic state-space generator on a shared-memory architecture. We parallelise the Saturation algorithm implemented in the SMART verification tool using Cilk, and compare it to a parallel implementation of the algorithm using a thread pool. Our experimental studies on a dual-processor, dual-core PC show that Cilk can improve the run-time efficiency of our parallel algorithm due to its load balancing and scheduling efficiency. We also demonstrate that this incurs a significant memory overhead due to Cilk’s inability to support pipelining, and conclude by pointing to a possible future direction for parallel irregular languages to include pipelining.

Jonathan Ezekiel, Gerald Lüttgen, Gianfranco Ciardo
I/O Efficient Accepting Cycle Detection

We show how to adapt an existing non-DFS-based accepting cycle detection algorithm OWCTY [10,15,29] to the I/O efficient setting and compare its I/O efficiency and practical performance to the existing I/O efficient LTL model checking approach of Edelkamp and Jabbar [14]. The new algorithm exhibits similar I/O complexity with respect to the size of the graph while it avoids quadratic increase in the size of the graph. Therefore, the number of I/O operations performed is significantly lower and the algorithm exhibits better practical performance.

Jiri Barnat, Lubos Brim, Pavel Šimeček
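OWCTY (One Way Catch Them Young) detects accepting cycles without depth-first search by alternating two prunings until a fixpoint: forward reachability from the accepting states, and elimination of states with no predecessor inside the candidate set. The sketch below is a minimal in-memory, explicit-state rendition of that loop; the paper's contribution is its I/O-efficient, external-memory realization, which this does not attempt to model:

```python
def has_accepting_cycle(graph, accepting):
    """OWCTY-style check for a cycle through an accepting state.

    graph: dict mapping each state to a list of successor states.
    accepting: set of accepting states.
    """
    S = set(graph)
    while True:
        # Reach: keep only states reachable from accepting states in S.
        frontier = accepting & S
        reached = set(frontier)
        while frontier:
            nxt = {v for u in frontier for v in graph[u] if v in S} - reached
            reached |= nxt
            frontier = nxt
        # Elim: repeatedly drop states with no predecessor inside the set.
        changed = True
        while changed:
            with_pred = {v for u in reached for v in graph[u] if v in reached}
            trimmed = reached & with_pred
            changed = trimmed != reached
            reached = trimmed
        if reached == S:
            return bool(S)   # nonempty fixpoint <=> accepting cycle exists
        S = reached

# a -> b -> a is a cycle through b; c is a dead end off the cycle.
g = {"a": ["b"], "b": ["a", "c"], "c": []}
print(has_accepting_cycle(g, {"b"}))   # True
print(has_accepting_cycle(g, {"c"}))  # False
```

Both phases are one-way graph traversals (no backtracking), which is what makes the algorithm amenable to breadth-first, I/O-efficient implementation, unlike nested-DFS.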

Session X: Constraints and Decisions

C32SAT: Checking C Expressions
(Tool Paper)

C32SAT is a tool for checking C expressions. It can check whether a given C expression can be satisfied, is tautological, or is always defined according to the ISO C99 standard. C32SAT can be used to detect nonportable expressions where program behavior depends on the compiler. Our contribution consists of C32SAT’s functional representation and the way it handles undefined values. Under-approximation is used as an optimization.

Robert Brummayer, Armin Biere
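What "satisfiable" versus "tautological" means for an expression over machine integers can be illustrated by brute force over a tiny bit-width. This toy stand-in is for illustration only; C32SAT itself uses a functional representation compiled to SAT rather than enumeration, and also tracks undefinedness, which this sketch ignores:

```python
from itertools import product

def classify(expr, nvars, bits=3):
    """Classify a C-like condition over `bits`-wide unsigned variables.

    expr: a Python function standing in for a C expression; its result
    is interpreted as a C condition (truthy/falsy).
    """
    values = list(product(range(2 ** bits), repeat=nvars))
    sat = any(expr(*v) for v in values)
    taut = all(expr(*v) for v in values)
    if taut:
        return "tautology"
    return "satisfiable" if sat else "unsatisfiable"

MASK = 2 ** 3 - 1  # wrap-around for 3-bit unsigned arithmetic
print(classify(lambda x: ((x + 1) & MASK) > x, 1))   # satisfiable, not a
                                                     # tautology: fails at x=7
print(classify(lambda x, y: (x & y) <= x, 2))        # tautology
```

The first example is exactly the kind of nonportable-looking fact C32SAT targets: x + 1 > x holds for every value except the wrap-around point.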
CVC3
(Tool Paper)

CVC3, a joint project of NYU and U Iowa, is the new and latest version of the Cooperating Validity Checker. CVC3 extends and builds on the functionality of its predecessors and includes many new features such as support for additional theories, an abstract architecture for Boolean reasoning, and SMT-LIB compliance. We describe the system and discuss some applications and continuing work.

Clark Barrett, Cesare Tinelli
BAT: The Bit-Level Analysis Tool
(Tool Paper)

While effective methods for bit-level verification of low-level properties exist, system-level properties that entail reasoning about a significant part of the design pose a major verification challenge. We present the Bit-level Analysis Tool (BAT), a state-of-the-art decision procedure for bit-level reasoning that implements a novel collection of techniques targeted towards enabling the verification of system-level properties. Key features of the BAT system are an expressive strongly-typed modeling and specification language, a fully automatic and efficient memory abstraction algorithm for extensional arrays, and a novel CNF generation algorithm. The BAT system can be used to automatically solve system-level RTL verification problems that were previously intractable, such as refinement-based verification of RTL-level pipelined machines.

Panagiotis Manolios, Sudarshan K. Srinivasan, Daron Vroon
LIRA: Handling Constraints of Linear Arithmetics over the Integers and the Reals
(Tool Paper)

The mechanization of many verification tasks relies on efficient implementations of decision procedures for fragments of first-order logic. Interactive theorem provers like PVS also make use of such decision procedures to increase the level of automation. Our tool LIRA implements decision procedures based on automata-theoretic techniques for first-order logics with linear arithmetic, namely, for FO(ℕ, +), FO(ℤ,+,<), and FO(ℝ, ℤ,+,<).

Bernd Becker, Christian Dax, Jochen Eisinger, Felix Klaedtke

Session XI: Probabilistic Verification

Three-Valued Abstraction for Continuous-Time Markov Chains

This paper proposes a novel abstraction technique for continuous-time Markov chains (CTMCs). Our technique fits within the realm of three-valued abstraction methods that have been used successfully for traditional model checking. The key idea is to apply abstraction on uniform CTMCs that are readily obtained from general CTMCs, and to abstract transition probabilities by intervals. It is shown that this provides a conservative abstraction for both true and false for a three-valued semantics of the branching-time logic CSL (Continuous Stochastic Logic). Experiments on an infinite-state CTMC indicate the feasibility of our abstraction technique.

Joost-Pieter Katoen, Daniel Klink, Martin Leucker, Verena Wolf
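The interval idea can be illustrated on a small discrete example. The sketch below is my own simplification (a plain one-step probability matrix standing in for a uniformized CTMC): states are grouped into blocks, and for each pair of blocks the jump probability is kept as a [min, max] interval over the concrete states of the source block.

```python
def interval_abstraction(P, partition):
    """P: dict state -> dict state -> probability (rows sum to 1).
    partition: list of lists of states (the abstract blocks).
    Returns dict (i, j) -> (lo, hi): interval of probabilities of
    jumping from block i to block j."""
    block_of = {s: i for i, blk in enumerate(partition) for s in blk}
    intervals = {}
    for i, blk in enumerate(partition):
        for j in range(len(partition)):
            probs = [sum(p for t, p in P[s].items() if block_of[t] == j)
                     for s in blk]
            intervals[(i, j)] = (min(probs), max(probs))
    return intervals

# A 4-state chain where states 1 and 2 behave similarly but not equally:
P = {0: {1: 0.5, 2: 0.5},
     1: {3: 0.4, 1: 0.6},
     2: {3: 0.6, 2: 0.4},
     3: {3: 1.0}}
abs_P = interval_abstraction(P, [[0], [1, 2], [3]])
print(abs_P[(1, 2)])   # interval for jumping from block {1,2} to {3}
```

Grouping the similar states 1 and 2 yields the interval (0.4, 0.6) rather than a single (and unsound) point probability, which is the conservatism the paper's three-valued semantics builds on.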
Magnifying-Lens Abstraction for Markov Decision Processes

We present a novel abstraction technique which allows the analysis of reachability and safety properties of Markov decision processes with very large state spaces. The technique, called magnifying-lens abstraction (MLA), copes with the state-explosion problem by partitioning the state space into regions, and by computing upper and lower bounds for reachability and safety properties on the regions, rather than on the states. To compute these bounds, MLA iterates over the regions, considering the concrete states of each region in turn, as if one were sliding a magnifying lens across the abstraction to view the concrete states. The algorithm adaptively refines the regions, using smaller regions where more detail is needed, until the difference between upper and lower bounds is smaller than a specified accuracy. We provide experimental results on three case studies illustrating that MLA can provide accurate answers, with savings in memory requirements.

Luca de Alfaro, Pritam Roy
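The region-bounds idea can be sketched in a heavily simplified form: plain value iteration for reachability on a Markov chain (not an MDP), followed by per-region lower/upper bounds. MLA itself interleaves the two and refines regions adaptively; the sketch below, with names of my own choosing, only shows how region bounds arise from concrete values.

```python
def reach_probs(P, target, iters=200):
    """Value iteration for the probability of eventually reaching target.
    P: dict state -> dict state -> probability."""
    v = {s: (1.0 if s in target else 0.0) for s in P}
    for _ in range(iters):
        v = {s: (1.0 if s in target else
                 sum(p * v[t] for t, p in P[s].items())) for s in P}
    return v

def region_bounds(values, regions):
    """Lower/upper bound of the concrete values within each region."""
    return {i: (min(values[s] for s in blk), max(values[s] for s in blk))
            for i, blk in enumerate(regions)}

# Target state 3, absorbing "bad" state 4:
P = {0: {1: 0.5, 2: 0.5}, 1: {3: 0.9, 4: 0.1},
     2: {3: 0.2, 4: 0.8}, 3: {3: 1.0}, 4: {4: 1.0}}
v = reach_probs(P, {3})
print(region_bounds(v, [[0, 1, 2], [3], [4]]))
```

If the gap between a region's bounds (here, 0.2 to 0.9 for the merged region) exceeds the required accuracy, MLA would split that region and iterate again.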
Underapproximation for Model-Checking Based on Random Cryptographic Constructions

For two naturals m, n such that m < n, we show how to construct a circuit C with m inputs and n outputs that has the following property: for some 0 ≤ k ≤ m, the circuit defines a k-universal function. This means, informally, that for every subset K of k outputs, every possible valuation of the variables in K is reachable (we prove that k is very close to m with arbitrarily high probability). Now consider a circuit M with n inputs that we wish to model-check. Connecting the inputs of M to the outputs of C gives us a new circuit M′ with m inputs, whose original inputs have the freedom defined by k. This is a very attractive feature for underapproximation in model checking: on the one hand the combined circuit has a smaller number of inputs, and on the other hand it is expected to find an error state fast if there is one.

We report initial experimental results with bounded model checking of industrial designs (the method is equally applicable to unbounded model checking and to simulation), which show mixed results. An interesting observation, however, is that in 13 out of 17 designs, setting m to n/5 is sufficient to detect the bug. This is in contrast to other underapproximation techniques that are based on reducing the number of inputs, which in most cases cannot detect the bug even with m = n/2.

Arie Matsliah, Ofer Strichman
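k-universality is easy to check by brute force on tiny circuits. The sketch below is my own illustration, not the paper's construction: it uses a random GF(2) linear map as the circuit C (a subset K of k outputs then takes all 2^k valuations exactly when the corresponding k matrix rows are linearly independent) and verifies universality by enumerating all inputs.

```python
import itertools
import random

def random_circuit(m, n, rng):
    """A random n x m 0/1 matrix over GF(2): n outputs of m inputs."""
    return [[rng.randrange(2) for _ in range(m)] for _ in range(n)]

def apply_circuit(A, x_bits):
    """Outputs of the linear circuit on one input assignment."""
    return tuple(sum(a * b for a, b in zip(row, x_bits)) % 2 for row in A)

def is_k_universal(A, m, k):
    """Does every subset of k outputs take all 2^k valuations?"""
    outs = [apply_circuit(A, [(x >> i) & 1 for i in range(m)])
            for x in range(1 << m)]
    return all(len({tuple(o[j] for j in K) for o in outs}) == 1 << k
               for K in itertools.combinations(range(len(A)), k))

# A hand-picked 4x3 matrix: any 2 rows are independent, so it is
# 2-universal, but rows 0, 1, 3 are dependent, so it is not 3-universal.
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]
print(is_k_universal(A, 3, 2), is_k_universal(A, 3, 3))
```

Driving M through such a C restricts the input space to 2^m assignments while, per the k-universality property, still allowing every local valuation on small sets of M's inputs.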

Session XII: Abstraction

Using Counterexamples for Improving the Precision of Reachability Computation with Polyhedra

We present an extrapolation with care set operator to accelerate termination of reachability computation with polyhedra. At the same time, a counterexample-guided refinement algorithm is used to iteratively expand the care set to improve the precision of the reachability computation. We also introduce two heuristic algorithms, called interpolate and restrict, to minimize the polyhedral representations without reducing the accuracy. We present some promising experimental results from a preliminary implementation of these techniques.

Chao Wang, Zijiang Yang, Aarti Gupta, Franjo Ivančić
Structural Abstraction of Software Verification Conditions

Precise software analysis and verification require tracking the exact path along which a statement is executed (path-sensitivity), the different contexts from which a function is called (context-sensitivity), and the bit-accurate operations performed. Previously, verification with such precision has been considered too inefficient to scale to large software. In this paper, we present a novel approach to solving such verification conditions, based on an automatic abstraction-checking-refinement framework that exploits natural abstraction boundaries present in software. Experimental results show that our approach easily scales to over 200,000 lines of real C/C++ code.

Domagoj Babić, Alan J. Hu
An Abstract Domain for Analyzing Heap-Manipulating Low-Level Software

We describe an abstract domain for representing useful invariants of heap-manipulating programs (in the presence of recursive data structures and pointer arithmetic) written in languages like C or low-level code. This abstract domain allows the representation of must and may equalities among pointer expressions. Pointer expressions contain existentially or universally quantified integer variables guarded by some base-domain constraint. We allow quantification of a special form, namely $\exists \forall$ quantification, to balance expressiveness with efficient automated deduction. The existential quantification is over some dummy non-program variables, which are automatically made explicit by our analysis to express useful program invariants. The universal quantifier is used to express properties of collections of memory locations. Our abstract interpreter automatically computes invariants about programs over this abstract domain. We present initial experimental results demonstrating the effectiveness of this abstract domain on some common coding patterns.

Sumit Gulwani, Ashish Tiwari
Adaptive Symmetry Reduction

Symmetry reduction is a technique to counter state explosion for systems of regular structure. It relies on idealistic assumptions about indistinguishable components, which in practice may only be similar. In this paper we present a generalized algebraic approach to symmetry reduction for exploring a structure without any prior knowledge about its global symmetry. The more behavior is shared among the components, the more compression takes effect. Our idea is to annotate each encountered state with information about how symmetry is violated along the path leading to it. Previous solutions only allow specific types of asymmetry, such as up to bisimilarity, or seem to incur large overhead before or during the verification run. In contrast, our method appeals through its balance between generality and simplicity. We include analytic and experimental results to document its efficiency.

Thomas Wahl

Session XIII: Assume-Guarantee Reasoning

From Liveness to Promptness

Liveness temporal properties state that something “good” eventually happens, e.g., every request is eventually granted. In Linear Temporal Logic (LTL), there is no a priori bound on the “wait time” for an eventuality to be fulfilled. That is, Fθ asserts that θ holds eventually, but there is no bound on the time when θ will hold. This is troubling, as designers tend to interpret an eventuality Fθ as an abstraction of a bounded eventuality F≤kθ, for an unknown k, and satisfaction of a liveness property is often not acceptable unless we can bound its wait time. We introduce here prompt-LTL, an extension of LTL with the prompt-eventually operator Fp. A system S satisfies a prompt-LTL formula ϕ if there is some bound k on the wait time for all prompt-eventually subformulas of ϕ in all computations of S. We study various problems related to prompt-LTL, including realizability, model checking, and assume-guarantee model checking, and show that they can be solved by techniques that are quite close to the standard techniques for LTL.

Orna Kupferman, Nir Piterman, Moshe Y. Vardi
Automated Assumption Generation for Compositional Verification

We describe a method for computing an exact minimal automaton to act as an intermediate assertion in assume-guarantee reasoning, using a sampling approach and a Boolean satisfiability solver. For a set of synthetic benchmarks intended to mimic common situations in hardware verification, this is shown to be significantly more effective than earlier approximate methods based on Angluin’s L* algorithm. For many of these benchmarks, this method also outperforms BDD-based model checking and interpolation-based model checking.

Anubhav Gupta, Kenneth L. McMillan, Zhaohui Fu

Session XIV: Hybrid Systems

Abstraction and Counterexample-Guided Construction of ω-Automata for Model Checking of Step-Discrete Linear Hybrid Models

For the verification of reactive hybrid systems, existing approaches do not scale well w.r.t. large discrete state spaces, since their excellence mostly applies to data computations. However, especially in control-dominated models of industrial relevance, in which computations on continuous data comprise only a subsidiary part of the behavior, such large discrete state spaces are not uncommon. By exploiting typical characteristics of such models, the approach presented here addresses step-discrete linear hybrid models with large discrete state spaces by introducing an iterative abstraction refinement approach based on learning the reasons for spurious counterexamples in an ω-automaton. Due to the resulting exclusion of comprehensive classes of spurious counterexamples, the algorithm requires relatively few iterations to prove or disprove safety properties. The implemented algorithm was successfully applied to parts of industrial models and shows promising results.

Marc Segelken
Test Coverage for Continuous and Hybrid Systems

We propose a novel test coverage measure for continuous and hybrid systems, which is defined using the star discrepancy notion. We also propose a test generation method guided by this coverage measure. This method was implemented in a prototype tool that can handle high dimensional systems (up to 100 dimensions).

Tarik Nahhal, Thao Dang
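The star discrepancy underlying this coverage measure can be estimated by brute force in low dimension: it is the worst gap, over axis-aligned boxes [0, v) anchored at the origin, between the fraction of test points inside the box and the box's volume. The sketch below is my own coarse grid search, not the paper's algorithm, and simply compares a well-spread point set to a clumped one.

```python
import itertools

def star_discrepancy_estimate(points, grid=20):
    """Coarse lower estimate of the star discrepancy of points in [0,1]^d,
    searching only boxes whose corner lies on a grid."""
    d = len(points[0])
    corners = [i / grid for i in range(1, grid + 1)]
    worst = 0.0
    for v in itertools.product(corners, repeat=d):
        volume = 1.0
        for c in v:
            volume *= c
        inside = sum(all(p[i] < v[i] for i in range(d)) for p in points)
        worst = max(worst, abs(inside / len(points) - volume))
    return worst

# A regular 4x4 grid of points covers the unit square far better than
# 16 points clumped along a short diagonal segment:
grid_pts  = [((i + 0.5) / 4, (j + 0.5) / 4) for i in range(4) for j in range(4)]
clump_pts = [(0.1 + 0.02 * i, 0.1 + 0.02 * i) for i in range(16)]
print(star_discrepancy_estimate(grid_pts), star_discrepancy_estimate(clump_pts))
```

A lower discrepancy means the test points are closer to equidistributed, which is exactly the sense in which the paper's generation method is "guided by" the coverage measure; the actual tool uses more scalable computations to reach high dimensions.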
Hybrid Systems: From Verification to Falsification

We propose HyDICE, Hybrid DIscrete Continuous Exploration, a multi-layered approach for hybrid-system testing that integrates continuous sampling-based robot motion planning with discrete search. The discrete search uses the discrete transitions of the hybrid system and coarse-grained decompositions of the continuous state spaces, or related projections, to guide the motion planner during the search for witness trajectories. Experiments presented in this paper, using a hybrid system inspired by robot motion planning and with nonlinear dynamics associated with each of several thousand modes, provide an initial validation of HyDICE and demonstrate its promise as a hybrid-system testing method. Comparisons to related work show computational speedups of up to two orders of magnitude.

Erion Plaku, Lydia E. Kavraki, Moshe Y. Vardi

Session XV: Program Analysis

Comparison Under Abstraction for Verifying Linearizability

Linearizability

is one of the main correctness criteria for implementations of concurrent data structures. A data structure is

linearizable

if its operations appear to execute atomically. Verifying linearizability of concurrent unbounded linked data structures is a challenging problem because it requires correlating executions that manipulate (unbounded-size) memory states. We present a static analysis for verifying linearizability of concurrent unbounded linked data structures. The novel aspect of our approach is the ability to prove that two (unbounded-size) memory layouts of two programs are isomorphic in the presence of abstraction. A prototype implementation of the analysis verified the linearizability of several published concurrent data structures implemented by singly-linked lists.

Daphna Amit, Noam Rinetzky, Thomas Reps, Mooly Sagiv, Eran Yahav
Leaping Loops in the Presence of Abstraction

Finite abstraction helps program analysis cope with the huge state space of programs. We wish to use abstraction in the process of error detection. Such a detection involves reachability analysis of the program. Reachability in an abstraction that under-approximates the program implies reachability in the concrete system. Under-approximation techniques, however, lose precision in the presence of loops, and cannot detect their termination. This causes reachability analysis that is done with respect to an abstraction to miss states of the program that are reachable via loops. Current solutions to this loop-termination challenge are based on fair termination and involve the use of well-founded sets and ranking functions.

In many cases, the concrete system has a huge, but still finite set of states. Our contribution is to show how, in such cases, it is possible to analyze termination of loops without refinement and without well-founded sets and ranking functions. Instead, our method is based on conditions on the structure of the graph that corresponds to the concrete system — conditions that can be checked with respect to the abstraction. We describe our method, demonstrate its usefulness and show how its application can be automated by means of a theorem prover.

Thomas Ball, Orna Kupferman, Mooly Sagiv
Configurable Software Verification: Concretizing the Convergence of Model Checking and Program Analysis

In automatic software verification, we have observed a theoretical convergence of model checking and program analysis. In practice, however, model checkers are still mostly concerned with precision, e.g., the removal of spurious counterexamples; for this purpose they build and refine reachability trees. Lattice-based program analyzers, on the other hand, are primarily concerned with efficiency. We designed an algorithm and built a tool that can be configured to perform not only a purely tree-based or a purely lattice-based analysis, but offers many intermediate settings that have not been evaluated before. The algorithm and tool take one or more abstract interpreters, such as a predicate abstraction and a shape analysis, and configure their execution and interaction using several parameters. Our experiments show that such customization may lead to dramatic improvements in the precision-efficiency spectrum.

Dirk Beyer, Thomas A. Henzinger, Grégory Théoduloz

Session XVI: SAT and Decision Procedures

A Decision Procedure for Bit-Vectors and Arrays

STP is a decision procedure for the satisfiability of quantifier-free formulas in the theory of bit-vectors and arrays that has been optimized for large problems encountered in software analysis applications. The basic architecture of the procedure consists of word-level pre-processing algorithms followed by translation to SAT. The primary bottlenecks in software verification and bug finding applications are large arrays and linear bit-vector arithmetic. New algorithms based on the abstraction-refinement paradigm are presented for reasoning about large arrays. A solver for bit-vector linear arithmetic is presented that eliminates variables and parts of variables to enable other transformations and to reduce the size of the problem that is eventually received by the SAT solver.

These and other algorithms have been implemented in STP, which has been heavily tested over thousands of examples obtained from several real-world applications. Experimental results indicate that the above mix of algorithms along with the overall architecture is far more effective, for a variety of applications, than a direct translation of the original formula to SAT or other comparable decision procedures.

Vijay Ganesh, David L. Dill
Boolean Abstraction for Temporal Logic Satisfiability

Increasing interest in property-based design calls for effective satisfiability procedures for expressive temporal logics, e.g., the IEEE-standard Property Specification Language (PSL).

In this paper, we propose a new approach to the satisfiability of PSL formulae; we follow recent approaches to decision procedures for Satisfiability Modulo Theories, typically applied to fragments of First Order Logic. The underlying intuition is to combine two interacting search mechanisms: on one side, we search for assignments that satisfy the Boolean abstraction of the problem; on the other, we invoke a solver for temporal satisfiability on the conjunction of temporal formulae corresponding to the assignment. Within this framework, we explore two directions. First, given the fixed polarity of each constraint in the theory solver, aggressive simplifications can be applied. Second, we analyze the idea of conflict reconstruction: whenever a satisfying assignment at the level of the Boolean abstraction results in a temporally unsatisfiable problem, we identify inconsistent subsets that can be used to rule out possibly many other assignments. We propose two methods to extract conflict sets on conjunctions of temporal formulae (one based on BDD-based Model Checking, and one based on SAT-based Simple Bounded Model Checking). We analyze the limits and the merits of the approach with a thorough experimental evaluation.

Alessandro Cimatti, Marco Roveri, Viktor Schuppan, Stefano Tonetta
A Lazy and Layered SMT( $\mathcal{BV}$ ) Solver for Hard Industrial Verification Problems

Verification problems rarely originate from bit-level descriptions. Yet, most verification technologies are based on bit blasting, i.e., reduction to Boolean reasoning.

In this paper we advocate reasoning at a higher level of abstraction, within the theory of bit vectors ($\mathcal{BV}$), where structural information (e.g., equalities, arithmetic functions) is not blasted into bits. Our approach relies on the lazy Satisfiability Modulo Theories (SMT) paradigm. We developed a satisfiability procedure for reasoning about bit vectors that carefully leverages the power of a Boolean SAT solver to deal with components that are more naturally “Boolean”, and activates bit-vector reasoning whenever possible. The procedure has two distinguishing features. First, it relies on the on-line integration of a SAT solver with an incremental and backtrackable solver for ${\mathcal{BV}}$ that enables dynamic optimization of the reasoning about bit vectors; for instance, this is an improvement over static encoding methods, which may generate smaller slices of bit-vector variables. Second, the solver for ${\mathcal{BV}}$ is layered (i.e., it privileges cheaper forms of reasoning), and it is based on a flexible use of term-rewriting techniques.

We evaluate our approach on a set of realistic industrial benchmarks and demonstrate substantial improvements with respect to state-of-the-art Boolean satisfiability solvers, as well as other decision procedures for SMT(${\mathcal{BV}}$).

Roberto Bruttomesso, Alessandro Cimatti, Anders Franzén, Alberto Griggio, Ziyad Hanna, Alexander Nadel, Amit Palti, Roberto Sebastiani
Backmatter
Metadata
Title
Computer Aided Verification
Editors
Werner Damm
Holger Hermanns
Copyright Year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-73368-3
Print ISBN
978-3-540-73367-6
DOI
https://doi.org/10.1007/978-3-540-73368-3
