
2009 | Book

Tools and Algorithms for the Construction and Analysis of Systems

15th International Conference, TACAS 2009, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2009, York, UK, March 22-29, 2009. Proceedings

Edited by: Stefan Kowalewski, Anna Philippou

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the 15th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2009, held in York, UK, in March 2009, as part of ETAPS 2009, the European Joint Conferences on Theory and Practice of Software. The 27 full papers and 8 tool demonstrations included in the volume were thoroughly reviewed and selected from 131 submissions. The papers are organized in topical sections on Model Checking, Tools, Verification of Concurrent Programs, Parametric Analysis, Program Analysis, and Hybrid Systems.

Table of Contents

Frontmatter

Model Checking I

Hierarchical Set Decision Diagrams and Regular Models

This paper presents algorithms and data structures that exploit a compositional and hierarchical specification to enable more efficient symbolic model-checking. We encode the state space and transition relation using hierarchical Set Decision Diagrams (SDD) [9]. In SDD, arcs of the structure are labeled with sets, themselves stored as SDD.

To exploit the hierarchy of SDD, a structured model representation is needed. We thus introduce a formalism integrating a simple notion of type and instance. Complex composite behaviors are obtained using a synchronization mechanism borrowed from process calculi. Using this relatively general framework, we investigate how to capture similarities in regular and concurrent models. Experimental results are presented, showing that this approach can outperform in time and memory previous work in this area.

Yann Thierry-Mieg, Denis Poitrenaud, Alexandre Hamez, Fabrice Kordon
Büchi Complementation and Size-Change Termination

We compare tools for complementing nondeterministic Büchi automata with a recent termination-analysis algorithm. Complementation of Büchi automata is a key step in program verification. Early constructions using a Ramsey-based argument have been supplanted by rank-based constructions with exponentially better bounds. In 2001 Lee et al. presented the size-change termination (SCT) problem, along with both a reduction to Büchi automata and a Ramsey-based algorithm. This algorithm strongly resembles the initial complementation constructions for Büchi automata.

We prove that the SCT algorithm is a specialized realization of the Ramsey-based complementation construction. Surprisingly, empirical analysis suggests Ramsey-based approaches are superior over the domain of SCT problems. Upon further analysis we discover an interesting property of the problem space that both explains this result and provides a chance to improve rank-based tools. With these improvements, we show that theoretical gains in efficiency are mirrored in empirical performance.

Seth Fogarty, Moshe Y. Vardi
Learning Minimal Separating DFA’s for Compositional Verification

Algorithms for learning a minimal separating DFA of two disjoint regular languages have been proposed and adapted for different applications. One of the most important applications is learning minimal contextual assumptions in automated compositional verification. We propose in this paper an efficient learning algorithm that learns and generates a minimal separating DFA. Our algorithm has a quadratic query complexity in the product of the sizes of the minimal DFA’s for the two input languages. In contrast, the most recent algorithm of Gupta et al. has an exponential query complexity in the sizes of the two DFA’s. Moreover, experimental results show that our learning algorithm significantly outperforms all existing algorithms on randomly generated example problems. We describe how our algorithm can be adapted for automated compositional verification. The adapted version is evaluated on the LTSA benchmarks and compared with other automated compositional verification approaches. The results show that our algorithm surpasses the others in 30 of the 49 benchmark problems.

Yu-Fang Chen, Azadeh Farzan, Edmund M. Clarke, Yih-Kuen Tsay, Bow-Yaw Wang

Tools I

RBAC-PAT: A Policy Analysis Tool for Role Based Access Control

Role-Based Access Control (RBAC) has been widely used for expressing access control policies. Administrative Role-Based Access Control (ARBAC) specifies how an RBAC policy may be changed by each administrator. Because sequences of changes by different administrators may interact in unintended ways, it is often difficult to fully understand the effect of an ARBAC policy by simple inspection. This paper presents RBAC-PAT, a tool for analyzing RBAC and ARBAC policies, which supports analysis of various properties including reachability, availability, containment, weakest precondition, dead roles, and information flows.

Mikhail I. Gofman, Ruiqi Luo, Ayla C. Solomon, Yingbin Zhang, Ping Yang, Scott D. Stoller
ITPN-PerfBound: A Performance Bound Tool for Interval Time Petri Nets

ITPN-PerfBound is a tool for the modeling and analysis of Interval Time Petri Nets (ITPN), that is, Petri Nets in which firing time intervals, and possibly firing frequency intervals, are associated with transitions. The tool is particularly well-suited to the verification and validation of real-time systems, where the main goal is to give guarantees about worst- and best-case system performance. The tool has been implemented within the DrawNET framework and supports the analysis of ITPN models based on the computation of upper and lower bounds of classical performance metrics, such as throughput and cycle time.

Elina Pacini Naumovich, Simona Bernardi, Marco Gribaudo
Romeo: A Parametric Model-Checker for Petri Nets with Stopwatches

When we last reported on Romeo, analyses with this tool were mostly based on translations to other tools. This new version provides an integrated TCTL model-checker and has gained in expressivity with the addition of parameters. Although other tools exist to compute the state space of stopwatch models, Romeo is the first to perform TCTL model-checking on stopwatch models. Moreover, it is the first tool that performs TCTL model-checking on timed parametric models. Indeed, Romeo now features efficient model-checking of time Petri nets using the Uppaal DBM Library, model-checking of stopwatch Petri nets and parametric stopwatch Petri nets using the Parma Polyhedra Library, and a graphical editor and simulator for these models. Furthermore, its audience has grown, leading to several industrial contracts. This paper reports on these recent developments of Romeo.

Didier Lime, Olivier H. Roux, Charlotte Seidner, Louis-Marie Traonouez
Alpaga: A Tool for Solving Parity Games with Imperfect Information

Alpaga is a solver for two-player parity games with imperfect information. Given the description of a game, it determines whether the first player can ensure to win and, if so, it constructs a winning strategy. The tool provides a symbolic implementation of a recent algorithm based on antichains.

Dietmar Berwanger, Krishnendu Chatterjee, Martin De Wulf, Laurent Doyen, Thomas A. Henzinger

Game-Theoretic Approaches

Compositional Predicate Abstraction from Game Semantics

We introduce a technique for using conventional predicate abstraction methods to reduce the state-space of models produced using game semantics. We focus on an expressive procedural language that has both local store and local control, a language which enjoys a simple game-semantic model yet is expressive enough to allow non-trivial examples. Our compositional approach allows the verification of incomplete programs (e.g. libraries) and offers the opportunity for new heuristics for improved efficiency. Game-semantic predicate abstraction can be embedded in an abstraction-refinement cycle in a standard way, resulting in an improved version of our experimental model-checking tool Mage, and we illustrate it with several toy examples.

Adam Bakewell, Dan R. Ghica
Compositional Synthesis of Reactive Systems from Live Sequence Chart Specifications

Synthesis is the process of automatically generating a correct running system from its specification. In this paper, we suggest a translation of a Live Sequence Chart specification into a two-player game for the purpose of synthesis. We use this representation for synthesizing a reactive system, and introduce a novel algorithm for composing two such systems for two subsets of a specification. Even though this algorithm may fail to compose the systems, or to prove the joint specification to be inconsistent, we present some promising results for which the composition algorithm does succeed and saves significant running time. We also discuss options for extending the algorithm into a sound and complete one.

Hillel Kugler, Itai Segall
Computing Weakest Strategies for Safety Games of Imperfect Information

cedar (Counter Example Driven Antichain Refinement) is a new symbolic algorithm for computing weakest strategies for safety games of imperfect information. The algorithm computes a fixed point over the lattice of contravariant antichains. Here contravariant antichains are antichains over pairs consisting of an information set and an allow set representing the associated move. We demonstrate how the richer structure of contravariant antichains for representing antitone functions, as opposed to standard antichains for representing sets of downward-closed sets, allows cedar to apply a significantly less complex controllable predecessor step than previous algorithms.

Wouter Kuijper, Jaco van de Pol

Verification of Concurrent Programs

Context-Bounded Analysis for Concurrent Programs with Dynamic Creation of Threads

Context-bounded analysis has been shown to be both efficient and effective at finding bugs in concurrent programs. According to its original definition, context-bounded analysis explores all behaviors of a concurrent program up to some fixed number of context switches between threads. This definition is inadequate for programs that create threads dynamically because bounding the number of context switches in a computation also bounds the number of threads involved in the computation. In this paper, we propose a more general definition of context-bounded analysis useful for programs with dynamic thread creation. The idea is to bound the number of context switches for each thread instead of bounding the number of switches of all threads. We consider several variants based on this new definition, and we establish decidability and complexity results for the analysis induced by them.

Mohamed Faouzi Atig, Ahmed Bouajjani, Shaz Qadeer
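The per-thread bound can be illustrated with a small sketch. This is my own simplification, not the authors' formulation: threads are straight-line lists of actions, and each context switch is charged to the thread that gets preempted; the names `explore` and `per_thread_bound` are hypothetical.

```python
def explore(threads, per_thread_bound):
    """Enumerate complete schedules of straight-line threads, bounding
    the number of context switches charged to each thread (a switch is
    charged to the preempted thread) instead of bounding the total
    number of switches across all threads."""
    results = set()

    def step(pcs, switches, last, schedule):
        if all(pcs[i] == len(t) for i, t in enumerate(threads)):
            results.add(tuple(schedule))
            return
        for i, t in enumerate(threads):
            if pcs[i] == len(t):              # thread i already finished
                continue
            new_switches = list(switches)
            # Moving away from an unfinished thread preempts it.
            if last is not None and last != i and pcs[last] < len(threads[last]):
                new_switches[last] += 1
                if new_switches[last] > per_thread_bound:
                    continue
            new_pcs = list(pcs)
            new_pcs[i] += 1
            step(new_pcs, new_switches, i, schedule + [t[pcs[i]]])

    step([0] * len(threads), [0] * len(threads), None, [])
    return results
```

With bound 0 each thread runs to completion once scheduled; raising the bound admits more interleavings without fixing the number of threads in advance, which is the point of the per-thread definition.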
Semantic Reduction of Thread Interleavings in Concurrent Programs

We propose a static analysis framework for concurrent programs based on reduction of thread interleavings using sound invariants on top of partial order techniques. Starting from a product graph that represents transactions, we iteratively refine the graph, using the results of these analyses, to remove statically unreachable nodes. We use abstract interpretation to automatically derive program invariants, based on abstract domains of increasing precision. We demonstrate the benefits of this framework in an application to find data race bugs in concurrent programs, where our static analyses serve to reduce the number of false warnings produced by an initial lockset analysis. This framework also facilitates the use of model checking on the remaining warnings to generate concrete error traces, where we leverage the preceding static analyses to generate small program slices and the derived invariants to improve scalability. We describe our experimental results on a suite of Linux device drivers.

Vineet Kahlon, Sriram Sankaranarayanan, Aarti Gupta
Inferring Synchronization under Limited Observability

This paper addresses the problem of automatically inferring synchronization for concurrent programs. Given a program and a specification, we infer synchronization that avoids all interleavings violating the specification, but permits as many valid interleavings as possible. We let the user specify an upper bound on the cost of synchronization, which may limit the observability: what observations on program state can be made by the synchronization code. We present an algorithm that infers, under certain conditions, the maximally permissive synchronization for a given cost. We implemented a prototype of our approach and applied it to infer synchronization in a number of small programs.

Martin Vechev, Eran Yahav, Greta Yorsh
The Complexity of Predicting Atomicity Violations

We study the prediction of runs that violate atomicity from a single run, or from a regular or pushdown model of a concurrent program. When prediction ignores all synchronization, we show predicting from a single run (or from a regular model) is solvable in time $O(n + k\cdot c^k)$, where n is the length of the run (or the size of the regular model), k is the number of threads, and c is a constant. This is a significant improvement over the simple $O(n^k\cdot 2^{k^2})$ algorithm that results from building a global automaton and monitoring it. We also show that, surprisingly, the problem is decidable for model-checking recursive concurrent programs without synchronizations. Our results use a novel notion of a profile: we extract profiles from each thread locally and compositionally combine their effects to predict atomicity violations.

For threads synchronizing using a set of locks $\mathcal{L}$, we show that prediction from runs and regular models can be done in time $O(n^k\cdot 2^{|\mathcal{L}|\cdot \log k+{k^2}})$. Notice that we are unable to remove the factor k from the exponent on n in this case. However, we show that a faster algorithm is unlikely: more precisely, we show that prediction for regular programs is unlikely to be fixed-parameter tractable in the parameters $(k,|\mathcal{L}|)$ by proving it is W[1]-hard. We also show, not surprisingly, that prediction of atomicity violations on recursive models communicating using locks is undecidable.

Azadeh Farzan, P. Madhusudan

Tools II

MoonWalker: Verification of .NET Programs

MoonWalker is a software model checker for cil bytecode programs, which is able to detect deadlocks and assertion violations in cil assemblies, better known as Microsoft .NET programs. The design of MoonWalker is inspired by the Java PathFinder (jpf), a model checker for Java programs. The performance of MoonWalker is on par with jpf. This paper presents the new version of MoonWalker and discusses its most important features.

Niels H. M. Aan de Brugh, Viet Yen Nguyen, Theo C. Ruys
Boolector: An Efficient SMT Solver for Bit-Vectors and Arrays

Satisfiability Modulo Theories (SMT) is the problem of deciding satisfiability of a logical formula, expressed in a combination of first-order theories. We present the architecture and selected features of Boolector, which is an efficient SMT solver for the quantifier-free theories of bit-vectors and arrays. It uses term rewriting, bit-blasting to handle bit-vectors, and lemmas on demand for arrays.

Robert Brummayer, Armin Biere
The Yogi Project: Software Property Checking via Static Analysis and Testing

We present Yogi, a tool that checks properties of C programs by combining static analysis and testing. Yogi implements the Dash algorithm, which performs verification by combining directed testing and abstraction. We have engineered Yogi in such a way that it plugs into Microsoft’s Static Driver Verifier framework. We have used this framework to run Yogi on 69 Windows Vista drivers with 85 properties. We find that the new algorithm enables Yogi to scale much better than Slam, the current engine driving Microsoft’s Static Driver Verifier.

Aditya V. Nori, Sriram K. Rajamani, SaiDeep Tetali, Aditya V. Thakur
TaPAS: The Talence Presburger Arithmetic Suite

TaPAS is a suite of libraries dedicated to Presburger arithmetic. The suite provides (1) the application programming interface Genepi for this logic with encapsulations of many classical solvers, (2) the BDD-like library SaTAF used for encoding Presburger formulae as automata, and (3) the very first implementation of an algorithm decoding automata into Presburger formulae.

Jérôme Leroux, Gérald Point

Model Checking II

Transition-Based Directed Model Checking

Directed model checking is a well-established technique tailored to the fast detection of system states that violate a given safety property. This is achieved by influencing the order in which states are explored during the state space traversal. The order is typically determined by an abstract distance function that estimates a state’s distance to a nearest error state. In this paper, we propose a general enhancement to directed model checking based on the evaluation of state transitions. We present a schema, parametrized by an abstract distance function, to evaluate transitions and propose a new method for the state space traversal. Our framework can be applied automatically to a wide range of abstract distance functions. The empirical evaluation shows its practical potential: the new method identifies a sweet spot in the trade-off between scalability (memory consumption) and short error traces.

Martin Wehrle, Sebastian Kupferschmid, Andreas Podelski
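The general idea of directed model checking, exploring states in order of an abstract distance estimate to a nearest error state, can be sketched as a best-first search. This toy evaluates states rather than transitions, so it illustrates the baseline the paper enhances; all names are illustrative.

```python
import heapq

def directed_search(initial, successors, is_error, h):
    """Best-first search: always expand the state with the smallest
    abstract distance estimate h(state) to a nearest error state."""
    counter = 0                       # tie-breaker for equal estimates
    frontier = [(h(initial), counter, initial, [initial])]
    seen = {initial}
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if is_error(state):
            return path               # error trace found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                counter += 1
                heapq.heappush(frontier, (h(nxt), counter, nxt, path + [nxt]))
    return None                       # no reachable error state
```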
Memoised Garbage Collection for Software Model Checking

Virtual-machine-based software model checkers like jpf and MoonWalker spend up to half of their verification time on garbage collection. This is no surprise, as after nearly each transition the heap has to be cleaned of garbage. To improve this, this paper presents the Memoised Garbage Collection (MGC) algorithm, which exploits the (typical) locality of transitions to perform garbage collection incrementally. MGC tracks the depths of objects efficiently and only purges objects whose depths have become infinite, and which are hence unreachable. MGC was experimentally evaluated via an implementation in our model checker MoonWalker and benchmarks using the parallel Java Grande Forum benchmark suite. Using MGC, a performance increase of up to 78% was measured over the traditional Mark&Sweep implementation.

Viet Yen Nguyen, Theo C. Ruys
Hierarchical Adaptive State Space Caching Based on Level Sampling

In the past, several attempts have been made to deal with the state space explosion problem by equipping a depth-first search (Dfs) algorithm with a state cache, or by avoiding collision detection, thereby keeping the state hash table at a fixed size. Most of these attempts are tailored specifically for Dfs, and are often not guaranteed to terminate and/or to exhaustively visit all the states. In this paper, we propose a general framework of hierarchical caches which can also be used by breadth-first searches (Bfs). Our method, based on an adequate sampling of Bfs levels during the traversal, guarantees that the Bfs terminates and traverses all transitions of the state space. We define several (static or adaptive) configurations of hierarchical caches and we study experimentally their effectiveness on benchmark examples of state spaces and on several communication protocols, using a generic implementation of the cache framework that we developed within the Cadp toolbox.

Radu Mateescu, Anton Wijs

Parametric Analysis

Static Analysis Techniques for Parameterised Boolean Equation Systems

Parameterised Boolean Equation Systems (PBESs) can be used to encode and solve various types of model checking and equivalence checking problems. PBESs are typically solved by symbolic approximation or by instantiation to Boolean Equation Systems (BESs). The latter technique suffers from something similar to the state space explosion problem and we propose to tackle it by static analysis techniques, which we tailor for PBESs. We introduce a method to eliminate redundant parameters and a method to detect constant parameters. Both lead to a better performance of the instantiation and they can sometimes even reduce problems that are intractable due to the infinity of the underlying BES to tractable ones.

Simona Orzan, Wieger Wesselink, Tim A. C. Willemse
Parametric Trace Slicing and Monitoring

Analysis of execution traces plays a fundamental role in many program analysis approaches. Execution traces are frequently parametric, i.e., they contain events with parameter bindings. Each parametric trace usually consists of many trace slices merged together, each slice corresponding to a parameter binding. Several techniques have been proposed to analyze parametric traces, but they have limitations: some in the specification formalism, others in the type of traces they support; moreover, they share common notions, intuitions, even techniques and algorithms, suggesting that a fundamental understanding of parametric trace analysis is needed. This foundational paper gives the first solution to parametric trace analysis that is unrestricted by the type of parametric properties or traces that can be analyzed. First, a general purpose parametric trace slicing technique is discussed, which takes each event in the parametric trace and distributes it to its corresponding trace slices. This parametric trace slicing technique can be used in combination with any conventional, non-parametric trace analysis, by applying the latter on each trace slice. An online monitoring technique is then presented based on the slicing technique, providing a logic-independent solution to runtime verification of parametric properties. The presented monitoring technique has been implemented and extensively evaluated. The results confirm that the generality of the discussed techniques does not come at a performance expense when compared with existing monitoring systems.

Feng Chen, Grigore Roşu
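The slicing step can be approximated in a few lines. This sketch (hypothetical names) groups events by their exact parameter binding, whereas the paper's technique also distributes each event to every slice whose binding is at least as informative.

```python
from collections import defaultdict

def slice_trace(trace):
    """Split a parametric trace into slices, one per parameter binding.
    Each event is a pair (name, binding), e.g. ('next', {'i': 1})."""
    slices = defaultdict(list)
    for name, binding in trace:
        key = frozenset(binding.items())   # hashable view of the binding
        slices[key].append(name)
    return dict(slices)
```

A conventional, non-parametric monitor can then be run on each slice independently.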

Generative Approaches

From Tests to Proofs

We describe the design and implementation of an automatic invariant generator for imperative programs. While automatic invariant generation through constraint solving has been extensively studied from a theoretical viewpoint as a classical means of program verification, in practice existing tools do not scale even to moderately sized programs. This is because the constraints that need to be solved even for small programs are already too difficult for the underlying (non-linear) constraint solving engines. To overcome this obstacle, we propose to strengthen static constraint generation with information obtained from static abstract interpretation and dynamic execution of the program. The strengthening comes in the form of additional linear constraints that trigger a series of simplifications in the solver, and make solving more scalable. We demonstrate the practical applicability of the approach by an experimental evaluation on a collection of challenging benchmark programs and comparisons with related tools based on abstract interpretation and software model checking.

Ashutosh Gupta, Rupak Majumdar, Andrey Rybalchenko
Test Input Generation for Programs with Pointers

Software testing is an essential process to improve software quality in practice. Researchers have proposed several techniques to automate parts of this process. In particular, symbolic execution can be used to automatically generate a set of test inputs that achieves high code coverage.

However, most state-of-the-art symbolic execution approaches cannot directly handle programs whose inputs are pointers, as is often the case for C programs. Automatically generating test inputs for pointer manipulating code such as a linked list or balanced tree implementation remains a challenge. Eagerly enumerating all possible heap shapes forfeits the advantages of symbolic execution. Alternatively, for a tester, writing assumptions to express the disjointness of memory regions addressed by input pointers is a tedious and labor-intensive task.

This paper proposes a novel solution for this problem: by exploiting type information, disjointness constraints that characterize permissible configurations of typed pointers in byte-addressable memory can be automatically generated. As a result, the constraint solver can automatically generate relevant heap shapes for the program under test. We report on our experience with an implementation of this approach in Pex, a dynamic symbolic execution framework for .NET. We examine two different symbolic representations for typed memory, and we discuss the impact of various optimizations.

Dries Vanoverberghe, Nikolai Tillmann, Frank Piessens
Specification Mining with Few False Positives

Formal specifications can help with program testing, optimization, refactoring, documentation, and, most importantly, debugging and repair. Unfortunately, formal specifications are difficult to write manually, while techniques that infer specifications automatically suffer from 90–99% false positive rates. Consequently, neither option is currently practical for most software development projects.

We present a novel technique that automatically infers partial correctness specifications with a very low false positive rate. We claim that existing specification miners yield false positives because they assign equal weight to all aspects of program behavior. By using additional information from the software engineering process, we are able to dramatically reduce this rate. For example, we grant less credence to duplicate code, infrequently-tested code, and code that exhibits high turnover in the version control system.

We evaluate our technique in two ways: as a preprocessing step for an existing specification miner and as part of novel specification inference algorithms. Our technique identifies which input is most indicative of program behavior, which allows off-the-shelf techniques to learn the same number of specifications using only 60% of their original input. Our inference approach has few false positives in practice, while still finding useful specifications on over 800,000 lines of code. When minimizing false alarms, we obtain a 5% false positive rate, an order-of-magnitude improvement over previous work. When used to find bugs, our mined specifications locate over 250 policy violations. To the best of our knowledge, this is the first specification miner with such a low false positive rate, and thus a low associated burden of manual inspection.

Claire Le Goues, Westley Weimer

Program Analysis

Path Feasibility Analysis for String-Manipulating Programs

We discuss the problem of path feasibility for programs manipulating strings using a collection of standard string library functions. We prove results on the complexity of this problem, including its undecidability in the general case and decidability of some special cases. In the context of test-case generation, we are interested in an efficient finite model finding method for string constraints. To this end we develop a two-tier finite model finding procedure. First, an integer abstraction of the string constraints is passed to an SMT (Satisfiability Modulo Theories) solver. The abstraction is either unsatisfiable, or the solver produces a model that fixes the lengths of enough strings to reduce the entire problem to a finite domain. The resulting fixed-length string constraints are then solved in a second phase. We implemented the procedure in a symbolic execution framework, report on the encouraging results, and discuss directions for improving the method further.

Nikolaj Bjørner, Nikolai Tillmann, Andrei Voronkov
Symbolic String Verification: Combining String Analysis and Size Analysis

We present an automata-based approach for symbolic verification of systems with unbounded string and integer variables. Particularly, we are interested in automatically discovering the relationships among the string and integer variables. The lengths of the strings in a regular language form a semilinear set. We present a novel construction for length automata that accept the unary or binary representations of the lengths of the strings in a regular language. These length automata can be integrated with an arithmetic automaton that recognizes the valuations of the integer variables at a program point. We propose a static analysis technique that uses these automata in a forward fixpoint computation with widening and is able to catch relationships among the lengths of the string variables and the values of the integer variables. This composite string and integer analysis enables us to verify properties that cannot be verified using string analysis or size analysis alone.

Fang Yu, Tevfik Bultan, Oscar H. Ibarra
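The length-automaton idea rests on a simple observation: projecting every DFA transition onto a one-letter alphabet and tracking the set of reachable states per length enumerates exactly the lengths of accepted strings. A minimal sketch, computing the set only up to a bound (unlike the paper's automaton recognizing the full semilinear set); names are illustrative:

```python
def length_set(delta, start, accepting, alphabet, bound):
    """Lengths (up to `bound`) of strings accepted by a DFA given as a
    transition dict delta[(state, letter)] -> state. Each iteration is
    one step of a subset construction over a unary alphabet: `reach`
    holds all states reachable by *some* string of the current length."""
    reach = {start}
    lengths = set()
    for n in range(bound + 1):
        if reach & accepting:
            lengths.add(n)
        reach = {delta[(q, a)] for q in reach for a in alphabet}
    return lengths
```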
Iterating Octagons

In this paper we prove that the transitive closure of a non-deterministic octagonal relation using integer counters can be expressed in Presburger arithmetic. The direct consequence of this fact is that the reachability problem is decidable for flat counter automata with octagonal transition relations. This result improves the previous results of Comon and Jurski [7] and Bozga, Iosif and Lakhnech [6] concerning the computation of transitive closures for difference bound relations. The importance of this result is justified by the wide use of octagons for computing sound abstractions of real-life systems [15]. We have implemented the octagonal transitive closure algorithm in a prototype system for the analysis of counter automata, called FLATA, and we have experimented with a number of test cases.

Marius Bozga, Codruţa Gîrlea, Radu Iosif
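For background, octagonal constraints (of the form ±x ± y ≤ c) are commonly handled by encoding them as a difference-bound matrix over the variables and their negations and tightening that matrix with Floyd-Warshall. The sketch below shows only this standard tightening step, not the paper's transitive-closure construction:

```python
INF = float("inf")

def dbm_closure(m):
    """Tighten a difference-bound matrix, where m[i][j] is an upper
    bound on x_i - x_j (INF if unconstrained), by closing it under
    the triangle rule x_i - x_j <= (x_i - x_k) + (x_k - x_j)."""
    n = len(m)
    c = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                c[i][j] = min(c[i][j], c[i][k] + c[k][j])
    return c
```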
Verifying Reference Counting Implementations

Reference counting is a widely-used resource management idiom which maintains a count of references to each resource by incrementing the count upon an acquisition, and decrementing upon a release; resources whose counts fall to zero may be recycled. We present an algorithm to verify the correctness of reference counting with minimal user interaction. Our algorithm performs compositional verification through the combination of symbolic temporal case splitting and predicate abstraction-based reachability. Temporal case splitting reduces the verification of an unbounded number of processes and resources to verification of a finite number through the use of Skolem variables. The finite state instances are discharged by symbolic model checking, with an auxiliary invariant correlating reference counts with the number of held references. We have implemented our algorithm in Referee, a reference counting analysis tool for C programs, and applied Referee to two real programs: the memory allocator of an OS kernel and the file interface of the Yaffs file system. In both cases our algorithm proves correct the use of reference counts in less than one minute.

Michael Emmi, Ranjit Jhala, Eddie Kohler, Rupak Majumdar
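The invariant being established, that the count always equals the number of held references, can be illustrated by brute-force exploration of acquire/release schedules. This is a toy checker of my own, not the Referee algorithm; the `buggy_double_release` flag injects a typical bug to show the invariant failing.

```python
def check_refcount(num_clients, depth, buggy_double_release=False):
    """Explore all acquire/release schedules up to `depth` steps and
    check that the reference count equals the number of holders."""
    states = {(0, frozenset())}           # (count, set of holders)
    for _ in range(depth):
        nxt = set()
        for count, holders in states:
            for c in range(num_clients):
                if c not in holders:      # client c acquires
                    nxt.add((count + 1, holders | {c}))
                else:                     # client c releases
                    dec = 2 if buggy_double_release else 1
                    nxt.add((count - dec, holders - {c}))
        states |= nxt
    return all(count == len(holders) for count, holders in states)
```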

Hybrid Systems

Falsification of LTL Safety Properties in Hybrid Systems

This paper develops a novel computational method for the falsification of safety properties specified by syntactically safe linear temporal logic (LTL) formulas φ for hybrid systems with general nonlinear dynamics and input controls. The method is based on an effective combination of robot motion planning and model checking. Experiments on a hybrid robotic system benchmark with nonlinear dynamics show significant speedup over related work. The experiments also indicate significant speedup when using minimized DFA instead of non-minimized NFA, as obtained by standard tools, for representing the violating prefixes of φ.

Erion Plaku, Lydia E. Kavraki, Moshe Y. Vardi
Computing Optimized Representations for Non-convex Polyhedra by Detection and Removal of Redundant Linear Constraints

We present a method which computes optimized representations for non-convex polyhedra. Our method detects so-called redundant linear constraints in these representations using an incremental SMT (Satisfiability Modulo Theories) solver, and then removes the redundant constraints based on Craig interpolation. The approach is motivated by applications in the context of model checking for Linear Hybrid Automata. Essentially, it can be seen as an optimization method for formulas containing arbitrary Boolean combinations of linear constraints and Boolean variables. We show that our method provides an essential step toward making quantifier elimination for linear arithmetic much more efficient. Experimental results clearly show the advantages of our approach in comparison to state-of-the-art solvers.

Christoph Scholl, Stefan Disch, Florian Pigorsch, Stefan Kupferschmid
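To illustrate what a redundant linear constraint is — the paper detects them with an incremental SMT solver and removes them via Craig interpolation, which is far more general — here is a toy one-variable sketch with hypothetical names:

```python
def redundant_constraints(constraints):
    """Each constraint is (a, b), meaning a*x <= b over a single real x.
    A constraint is redundant if the remaining constraints already imply
    it, i.e. its induced bound on x is not strictly tighter than theirs."""
    def bound(a, b):
        # a*x <= b  gives  x <= b/a when a > 0, and x >= b/a when a < 0
        return b / a

    redundant = []
    for i, (a, b) in enumerate(constraints):
        rest = [c for j, c in enumerate(constraints) if j != i]
        if a > 0:
            uppers = [bound(p, q) for p, q in rest if p > 0]
            if uppers and min(uppers) <= bound(a, b):
                redundant.append(i)
        elif a < 0:
            lowers = [bound(p, q) for p, q in rest if p < 0]
            if lowers and max(lowers) >= bound(a, b):
                redundant.append(i)
    return redundant

# x <= 5 is implied by x <= 3, so the first constraint is redundant
print(redundant_constraints([(1, 5), (1, 3), (-1, 0)]))  # [0]
```

In more than one dimension, and under Boolean combinations of constraints, this bound comparison no longer suffices, which is where the SMT-based entailment checks come in.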

Decision Procedures and Theorem Proving

All-Termination(T)

We introduce the All-Termination(T) problem: given a termination solver T and a collection of functions F, find every subset of the formal parameters to F whose consideration is sufficient to show, using T, that F terminates. An important and motivating application is enhancing theorem proving systems by constructing the set of strongest induction schemes for F, modulo T. These schemes can be derived from the set of termination cores, the minimal sets returned by All-Termination(T), without any reference to an explicit measure function. We study the All-Termination(T) problem as applied to size-change termination analysis (SCT), a PSpace-complete problem that underlies many termination solvers. Surprisingly, we show that All-Termination(SCT) is also PSpace-complete, even though it substantially generalizes SCT. We develop a practical algorithm for All-Termination(SCT), and show experimentally that on the ACL2 regression suite (whose size is over 100MB) our algorithm generates stronger induction schemes on 90% of multiargument functions.

Panagiotis Manolios, Aaron Turon
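The size-change criterion that All-Termination(SCT) builds on can be sketched as follows: compute the closure of the call-site size-change graphs under composition, then require every idempotent graph in the closure to carry a strictly decreasing self-edge. This is a minimal sketch with hypothetical names, not the paper's algorithm:

```python
def compose(G, H):
    """Compose size-change graphs given as sets of edges (i, j, strict),
    where strict=True means parameter i strictly decreases into j."""
    best = {}
    for (i, j, s1) in G:
        for (j2, k, s2) in H:
            if j == j2:
                # the composed edge is strict if either step was strict;
                # keep the strongest edge per (i, k) pair
                best[(i, k)] = best.get((i, k), False) or s1 or s2
    return frozenset((i, k, s) for (i, k), s in best.items())

def sct_terminates(graphs):
    """Classic SCT check: close under composition, then verify every
    idempotent graph has a strictly decreasing self-edge."""
    closure = {frozenset(g) for g in graphs}
    changed = True
    while changed:
        changed = False
        for G in list(closure):
            for H in list(closure):
                C = compose(G, H)
                if C not in closure:
                    closure.add(C)
                    changed = True
    for G in closure:
        if compose(G, G) == G:  # idempotent graph
            if not any(i == k and s for (i, k, s) in G):
                return False
    return True
```

For example, `f(x, y) -> f(x - 1, y)` yields the graph `{(0, 0, True), (1, 1, False)}` and passes, whereas the argument-swapping call `f(x, y) -> f(y, x)` yields `{(0, 1, False), (1, 0, False)}` and fails. All-Termination asks, roughly, which subsets of parameter positions suffice to make such a check go through.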
Ground Interpolation for the Theory of Equality

Given a theory T and two formulas A and B jointly unsatisfiable in T, a theory interpolant of A and B is a formula I such that (i) its non-theory symbols are shared by A and B, (ii) it is entailed by A in T, and (iii) it is unsatisfiable with B in T. Theory interpolants are used in model checking to accelerate the computation of reachability relations. We present a novel method for computing ground interpolants for ground formulas in the theory of equality. Our algorithm computes interpolants from colored congruence graphs representing derivations in the theory of equality. These graphs can be produced by conventional congruence closure algorithms in a straightforward manner. By working with graphs, rather than at the level of individual proof steps, we are able to derive interpolants that are pleasingly simple (conjunctions of Horn clauses) and smaller than those generated by other tools.

Alexander Fuchs, Amit Goel, Jim Grundy, Sava Krstić, Cesare Tinelli
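The congruence graphs above are produced by conventional congruence closure. A naive union-find sketch for ground equalities with unary function applications — hypothetical names; real implementations are considerably more efficient — might be:

```python
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:          # path halving
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def congruence_closure(equalities, terms):
    """Terms are constants ('a') or unary applications ('f', t).
    Merge the asserted equalities, then repeatedly apply congruence:
    if x = y then f(x) = f(y), until a fixed point is reached."""
    uf = UnionFind()
    for s, t in equalities:
        uf.union(s, t)
    changed = True
    while changed:
        changed = False
        apps = [t for t in terms if isinstance(t, tuple)]
        for t1 in apps:
            for t2 in apps:
                if (t1[0] == t2[0]
                        and uf.find(t1[1]) == uf.find(t2[1])
                        and uf.find(t1) != uf.find(t2)):
                    uf.union(t1, t2)  # congruence step
                    changed = True
    return uf

# From a = b we derive f(a) = f(b)
uf = congruence_closure([('a', 'b')], ['a', 'b', ('f', 'a'), ('f', 'b')])
```

The paper's contribution lies downstream of this step: coloring the resulting congruence graph by which input formula (A or B) justifies each edge, and reading an interpolant off the colored graph.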
Satisfiability Procedures for Combination of Theories Sharing Integer Offsets

We present a novel technique to combine satisfiability procedures for theories that model data structures and share integer offsets. This procedure extends the Nelson-Oppen approach to a family of non-disjoint theories of practical interest in verification. The result is derived by showing that the considered theories satisfy the hypotheses of a general result on non-disjoint combination. In particular, the capability of computing logical consequences over the shared signature is ensured in a non-trivial way by devising a suitable complete superposition calculus.

Enrica Nicolini, Christophe Ringeissen, Michaël Rusinowitch

Invited Contribution

Bridging the Gap Between Model-Based Development and Model Checking

The growing power of model checking is making it feasible to use formal verification for important classes of software systems. However, for this to be practical it is necessary to bridge the gap between the commercial modeling tools industrial developers prefer to use and the input languages of the formal verification tools. This paper describes a translator framework that makes it possible to use several popular formal verification tools with commercial modeling tools. The practicality of this approach is illustrated by four case studies in which model checking was successfully used in the development of avionics software.

Steven P. Miller
Backmatter
Metadata
Title
Tools and Algorithms for the Construction and Analysis of Systems
Edited by
Stefan Kowalewski
Anna Philippou
Copyright Year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-00768-2
Print ISBN
978-3-642-00767-5
DOI
https://doi.org/10.1007/978-3-642-00768-2