
2013 | Book

Tools and Algorithms for the Construction and Analysis of Systems

19th International Conference, TACAS 2013, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013, Rome, Italy, March 16-24, 2013. Proceedings

Edited by: Nir Piterman, Scott A. Smolka

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About This Book

This book constitutes the proceedings of the 19th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2013, held in Rome, Italy, in March 2013. The 42 papers presented in this volume were carefully reviewed and selected from 172 submissions. They are organized in topical sections named: Markov chains; termination; SAT/SMT; games and synthesis; process algebra; pushdown; runtime verification and model checking; concurrency; learning and abduction; timed automata; security and access control; frontiers (graphics and quantum); functional programs and types; tool demonstrations; explicit-state model checking; Büchi automata; and competition on software verification.

Table of Contents

Frontmatter

Markov Chains

On-the-Fly Exact Computation of Bisimilarity Distances

This paper proposes an algorithm for the exact computation of bisimilarity distances between discrete-time Markov chains introduced by Desharnais et al. Our work is inspired by the theoretical results presented by Chen et al. at FoSSaCS’12, proving that these distances can be computed in polynomial time using the ellipsoid method. Despite its theoretical importance, the ellipsoid method is known to be inefficient in practice. To overcome this problem, we propose an efficient on-the-fly algorithm which, unlike other existing solutions, computes the distances between given states exactly and avoids exhaustive state space exploration. It is parametric in a given set of states for which we want to compute the distances. Our technique successively refines over-approximations of the target distances using a greedy strategy which ensures that the state space is explored further only when the current approximations are improved. Tests performed on a consistent set of (pseudo)randomly generated Markov chains show that our algorithm improves the efficiency of the corresponding iterative algorithms by orders of magnitude on average.

Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, Radu Mardare
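
A minimal illustration of the quantity being computed (not the on-the-fly algorithm of the paper): the discounted bisimilarity distance on a labelled Markov chain is the least fixed point of an operator that lifts the current distance matrix to distributions via a transportation (Kantorovich) linear program. The sketch below iterates that operator to convergence with scipy.optimize.linprog; the chain, labels, and discount factor are invented for illustration.

```python
# Naive fixpoint iteration for the discounted bisimilarity distance on a
# labelled Markov chain. Illustrative baseline only; the paper's on-the-fly
# algorithm avoids exactly this exhaustive computation.
import numpy as np
from scipy.optimize import linprog

P = np.array([[0.5, 0.5, 0.0],      # transition probabilities, row-stochastic
              [0.3, 0.3, 0.4],
              [0.0, 0.0, 1.0]])
labels = ['a', 'a', 'b']             # state labels (invented)
LAMBDA = 0.8                         # discount factor
n = len(labels)

def kantorovich(mu, nu, cost):
    """Optimal transport value between distributions mu and nu w.r.t. cost."""
    c = cost.flatten()
    A_eq, b_eq = [], []
    for i in range(n):               # row marginals must equal mu
        row = np.zeros((n, n)); row[i, :] = 1
        A_eq.append(row.flatten()); b_eq.append(mu[i])
    for j in range(n):               # column marginals must equal nu
        col = np.zeros((n, n)); col[:, j] = 1
        A_eq.append(col.flatten()); b_eq.append(nu[j])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

# distance is 1 between differently labelled states, 0 otherwise, initially
d = np.array([[0.0 if labels[i] == labels[j] else 1.0
               for j in range(n)] for i in range(n)])
for _ in range(100):                 # Picard iteration towards the fixpoint
    new = np.array([[d[i, j] if labels[i] != labels[j]
                     else LAMBDA * kantorovich(P[i], P[j], d)
                     for j in range(n)] for i in range(n)])
    if np.max(np.abs(new - d)) < 1e-9:
        break
    d = new
print(np.round(d, 4))
```
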
The Quest for Minimal Quotients for Probabilistic Automata

One of the prevailing ideas in applied concurrency theory and verification is the concept of automata minimization with respect to strong or weak bisimilarity. The minimal automata can be seen as canonical representations of the behaviour modulo the bisimilarity considered. Together with congruence results wrt. process algebraic operators, this can be exploited to alleviate the notorious state space explosion problem. In this paper, we aim at identifying minimal automata and canonical representations for concurrent probabilistic models. We present minimality and canonicity results for probabilistic automata wrt. strong and weak bisimilarity, together with polynomial time minimization algorithms.

Christian Eisentraut, Holger Hermanns, Johann Schuster, Andrea Turrini, Lijun Zhang
LTL Model Checking of Interval Markov Chains

Interval Markov chains (IMCs) generalize ordinary Markov chains by having interval-valued transition probabilities. They are useful for modeling systems in which some transition probabilities depend on an unknown environment, are only approximately known, or are parameters that can be controlled. We consider the problem of computing values for the unknown probabilities in an IMC that maximize the probability of satisfying an ω-regular specification. We give new upper and lower bounds on the complexity of this problem. We then describe an approach based on an expectation maximization algorithm. We provide some analytical guarantees on the algorithm, and show how it can be combined with translation of logic to automata. We give experiments showing that the resulting system gives a practical approach to model checking IMCs.

Michael Benedikt, Rastislav Lenhardt, James Worrell
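To make the setting concrete, the sketch below shows a much simpler special case than the ω-regular objectives treated in the paper: maximizing the probability of reaching a target state in a small interval Markov chain by value iteration, where at each step the probabilities are chosen inside their intervals to favour high-value successors. The model, intervals, and greedy assignment are illustrative assumptions, not the paper's expectation-maximization approach.

```python
# Hedged illustration: maximal reachability probability in an invented
# interval Markov chain (IMC), states 0, 1 and goal state 2.
intervals = {
    0: {0: (0.1, 0.5), 1: (0.2, 0.6), 2: (0.1, 0.4)},   # (lower, upper) bounds
    1: {0: (0.0, 0.3), 1: (0.1, 0.5), 2: (0.3, 0.8)},
}
GOAL = 2
values = {0: 0.0, 1: 0.0, GOAL: 1.0}

def best_assignment(succ_intervals, values):
    """Pick probabilities inside the intervals that maximise expected value:
    start every successor at its lower bound, then hand out the remaining
    mass to successors in decreasing order of current value."""
    probs = {s: lo for s, (lo, hi) in succ_intervals.items()}
    budget = 1.0 - sum(probs.values())
    for s in sorted(succ_intervals, key=lambda s: values[s], reverse=True):
        lo, hi = succ_intervals[s]
        extra = min(hi - lo, budget)
        probs[s] += extra
        budget -= extra
    return probs

for _ in range(1000):
    new = dict(values)
    for s, succ in intervals.items():
        probs = best_assignment(succ, values)
        new[s] = sum(p * values[t] for t, p in probs.items())
    if max(abs(new[s] - values[s]) for s in values) < 1e-12:
        break
    values = new
print(values)   # maximal reachability probabilities under the intervals
```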

Termination

Ramsey vs. Lexicographic Termination Proving

Termination proving has traditionally been based on the search for (possibly lexicographic) ranking functions. In recent years, however, the discovery of termination proof techniques based on Ramsey’s theorem has led to new automation strategies, e.g., size-change, or iterative reductions from termination to safety. In this paper we revisit the decision to use Ramsey-based termination arguments in the iterative approach. We describe a new iterative termination proving procedure that instead searches for lexicographic termination arguments. Using experimental evidence we show that this new method leads to dramatic speedups.

Byron Cook, Abigail See, Florian Zuleger
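The kind of argument searched for can be made concrete: a lexicographic ranking function for a loop is a tuple of expressions that is bounded below and decreases lexicographically on every transition. The sketch below checks such a candidate for an invented two-variable loop using the Z3 SMT solver (an assumed dependency, installed as z3-solver); it only validates a given candidate and says nothing about how the paper's procedure searches for one.

```python
# Tiny, invented example of checking a candidate lexicographic ranking
# function with Z3.
# Loop:  while x > 0 and y > 0:
#          either  x, y = x - 1, *          (y gets any non-negative value)
#          or      x, y = x, y - 1
# Candidate ranking function: the tuple (x, y).
from z3 import Ints, Solver, And, Or, Not, unsat

x, y, xp, yp = Ints('x y xp yp')
guard = And(x > 0, y > 0)
branch1 = And(xp == x - 1, yp >= 0)        # x decreases, y is havoced
branch2 = And(xp == x, yp == y - 1)        # x unchanged, y decreases
trans = Or(branch1, branch2)

# Lexicographic decrease of (x, y): either x strictly decreases, or x is
# unchanged and y strictly decreases.  Boundedness follows from the guard.
lex_decrease = Or(xp < x, And(xp == x, yp < y))
bounded = And(x >= 0, y >= 0)

s = Solver()
s.add(guard, trans, Not(And(bounded, lex_decrease)))   # look for a violation
if s.check() == unsat:
    print("candidate (x, y) is a valid lexicographic ranking function")
else:
    print("counterexample transition:", s.model())
```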
Structural Counter Abstraction

Depth-Bounded Systems form an expressive class of well-structured transition systems. They can model a wide range of concurrent infinite-state systems including those with dynamic thread creation, dynamically changing communication topology, and complex shared heap structures. We present the first method to automatically prove fair termination of depth-bounded systems. Our method uses a numerical abstraction of the system, which we obtain by systematically augmenting an over-approximation of the system’s reachable states with a finite set of counters. This numerical abstraction can be analyzed with existing termination provers. What makes our approach unique is the way in which it exploits the well-structuredness of the analyzed system. We have implemented our work in a prototype tool and used it to automatically prove liveness properties of complex concurrent systems, including nonblocking algorithms such as Treiber’s stack and several distributed processes. Many of these examples are beyond the scope of termination analyses that are based on traditional counter abstractions.

Kshitij Bansal, Eric Koskinen, Thomas Wies, Damien Zufferey

Quantifier Elimination

Extending Quantifier Elimination to Linear Inequalities on Bit-Vectors

We present an algorithm for existentially quantifying variables from conjunctions of linear modular equalities (LMEs), disequalities (LMDs), and inequalities (LMIs). We first apply cheaper heuristics that are sound but less complete, resorting to more expensive and more complete techniques only when required. Our experiments demonstrate that our algorithm outperforms alternative quantifier elimination techniques based on bit-blasting and the Omega test. We also extend this algorithm to work with Boolean combinations of LMEs, LMDs, and LMIs.

Ajith K. John, Supratik Chakraborty

SAT/SMT

The MathSAT5 SMT Solver

MathSAT is a long-term project, carried out jointly by FBK-IRST and the University of Trento, with the aim of developing and maintaining a state-of-the-art SMT tool for formal verification (and other applications). MathSAT 5 is the latest version of the tool. It supports most of the SMT-LIB theories and their combinations, and provides many functionalities (e.g., unsat cores, interpolation, AllSMT). MathSAT 5 improves on its predecessor MathSAT 4 in many ways, also providing novel features: first, much improved incrementality support, which is vital in SMT applications; second, full support for the theories of arrays and floating point; third, sound SAT-style Boolean formula preprocessing for SMT formulae; finally, a framework allowing users to plug in their own custom-tuned SAT solvers. MathSAT 5 is freely available, and it is used in numerous internal projects, as well as by a number of industrial partners.

Alessandro Cimatti, Alberto Griggio, Bastiaan Joost Schaafsma, Roberto Sebastiani
Formula Preprocessing in MUS Extraction

Efficient algorithms for extracting minimal unsatisfiable subformulas (MUSes) of Boolean formulas find a wide range of applications in the analysis of systems, e.g., hardware and software bounded model checking. In this paper we study the applicability of preprocessing techniques for Boolean satisfiability (SAT) in the context of MUS extraction. Preprocessing has proven to be extremely important in enabling more efficient SAT solving. Hence the study of the applicability and the effectiveness of preprocessing in MUS extraction is highly relevant. Considering the extraction of both standard and group MUSes, we focus on a number of SAT preprocessing techniques, and formally prove to what extent the techniques can be directly applied in the context of MUS extraction. Furthermore, we develop a generic theoretical framework that captures MUS extraction problems, and enables formalizing conditions for correctness-preserving applications of preprocessing techniques that are not directly applicable. We experimentally evaluate the effect of preprocessing in the context of group MUS extraction.

Anton Belov, Matti Järvisalo, Joao Marques-Silva
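For readers unfamiliar with the base problem, the snippet below shows a plain deletion-based MUS extractor on top of the PySAT library (an assumed dependency, installed as python-sat). It performs no preprocessing at all; the paper studies when and how SAT preprocessing can be applied safely around this kind of extraction.

```python
# Minimal deletion-based MUS extraction with PySAT.  Assumes the input
# formula is unsatisfiable; the CNF below is invented for illustration.
from pysat.solvers import Minisat22

def deletion_mus(clauses):
    """Return a minimal unsatisfiable subset of `clauses`."""
    mus = list(clauses)
    i = 0
    while i < len(mus):
        candidate = mus[:i] + mus[i + 1:]      # try dropping clause i
        with Minisat22(bootstrap_with=candidate) as s:
            if s.solve():                      # became satisfiable:
                i += 1                         # clause i is necessary, keep it
            else:
                mus = candidate                # still UNSAT: drop it for good
    return mus

# Variables x1=1, x2=2, x3=3; the first three clauses form the unique MUS.
cnf = [[1], [-1, 2], [-2], [3]]
print(deletion_mus(cnf))   # -> [[1], [-1, 2], [-2]]
```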
Proof Tree Preserving Interpolation

Craig interpolation in SMT is difficult because, e.g., theory combination and integer cuts introduce mixed literals, i.e., literals containing local symbols from both input formulae. In this paper, we present a scheme to compute Craig interpolants in the presence of mixed literals. Contrary to existing approaches, this scheme neither limits the inferences done by the SMT solver, nor does it transform the proof tree before extracting interpolants. Our scheme works for the combination of uninterpreted functions and linear arithmetic but is extendable to other theories. The scheme is implemented in the interpolating SMT solver SMTInterpol.

Jürgen Christ, Jochen Hoenicke, Alexander Nutz
Asynchronous Multi-core Incremental SAT Solving

Solvers for propositional logic formulas, so called SAT solvers, are used in many practical applications. As multi-core and multi-processor hardware has become widely available, parallelizations of such solvers are actively researched. Such research typically ignores the incremental problem specification feature that modern SAT solvers possess. This feature is, however, crucial for many of the real-life applications of SAT solvers. Such applications include formal verification, equivalence checking, and typical artificial intelligence tasks such as scheduling, planning and reasoning.

We have developed a multi-core SAT solver called Tarmo, which provides an interface that is compatible with conventional incremental solvers. It enables substantial performance improvements for many applications, without requiring code modifications. We present the asynchronous interface, a natural extension to the conventional solver interface that allows the construction of efficient application-specific parallelizations. Through the asynchronous interface multiple problems can be given to the solver simultaneously. This enables conceptually simple but efficient parallelization of the solving process. Moreover, an asynchronous solver is easier to run in parallel with other independent tasks, simplifying the construction of so-called coarse-grained parallelizations. We provide an extensive experimental evaluation to illustrate the performance of the proposed techniques.

Siert Wieringa, Keijo Heljanko

Games and Synthesis

Model-Checking Iterated Games

We propose a logic for the definition of the collaborative power of groups of agents to enforce different temporal objectives. The resulting temporal cooperation logic (TCL) extends ATL by allowing for the successive definition of strategies for agents and agencies. Unlike previous logics with similar aims, our extension cuts a fine line between extending the power and maintaining a low complexity: model-checking TCL sentences is EXPTIME-complete in the logic, and fixed-parameter tractable for specifications of bounded size. This advancement over non-elementary logics is bought by disallowing too close an entanglement between cooperation and competition. We show how allowing such an entanglement immediately leads to non-elementary complexity. We have implemented a model checker for the logic and shown the feasibility of model checking on a few benchmarks.

Chung-Hao Huang, Sven Schewe, Farn Wang
Synthesis from LTL Specifications with Mean-Payoff Objectives

The classical LTL synthesis problem is purely qualitative: the given LTL specification is realized or not by a reactive system. LTL is not expressive enough to formalize the correctness of reactive systems with respect to some quantitative aspects. This paper extends the qualitative LTL synthesis setting to a quantitative setting. The alphabet of actions is extended with a weight function ranging over the integer numbers. The value of an infinite word is the mean-payoff of the weights of its letters. The synthesis problem then amounts to automatically constructing (if possible) a reactive system whose executions all satisfy a given LTL formula and have mean-payoff values greater than or equal to some given threshold. The latter problem is called LTL$_\textsf{MP}$ synthesis, and the LTL$_\textsf{MP}$ realizability problem asks to check whether such a system exists. By reduction to two-player mean-payoff parity games, we first show that LTL$_\textsf{MP}$ realizability is not more difficult than LTL realizability: it is 2EXPTIME-complete. While infinite-memory strategies are required to realize LTL$_\textsf{MP}$ specifications in general, we show that ε-optimality can be obtained with finite-memory strategies, for any ε > 0. To obtain efficient algorithms in practice, we define a Safraless procedure to decide whether there exists a finite-memory strategy that realizes a given specification for some given threshold. This procedure is based on a reduction to two-player energy safety games, which are in turn reduced to safety games. Finally, we show that those safety games can be solved efficiently by exploiting the structure of their state spaces and by using antichains as a symbolic data structure. All our results extend to multi-dimensional weights. We have implemented an antichain-based procedure and we report on some promising experimental results.

Aaron Bohy, Véronique Bruyère, Emmanuel Filiot, Jean-François Raskin
PRISM-games: A Model Checker for Stochastic Multi-Player Games

We present PRISM-games, a model checker for stochastic multi-player games, which supports modelling, automated verification and strategy synthesis for probabilistic systems with competitive or cooperative behaviour. Models are described in a probabilistic extension of the Reactive Modules language and properties are expressed using rPATL, which extends the well-known logic ATL with operators to reason about probabilities, various reward-based measures, quantitative properties and precise bounds. The tool is based on the probabilistic model checker PRISM, benefiting from its existing user interface and simulator, whilst adding novel model checking algorithms for stochastic games, as well as functionality to synthesise optimal player strategies, explore or export them, and verify other properties under the specified strategy.

Taolue Chen, Vojtěch Forejt, Marta Kwiatkowska, David Parker, Aistis Simaitis

Process Algebra

PIC2LNT: Model Transformation for Model Checking an Applied Pi-Calculus

The π-calculus [12] was proposed by Milner, Parrow, and Walker about twenty years ago for describing concurrent systems with mobile communication. The π-calculus is equipped with an operational semantics defined in terms of LTSs (Labelled Transition Systems). Although a lot of theoretical results have been achieved on this language (see, e.g., [1, chapter 8] for a survey), only a few verification tools have been designed for analysing π-calculus specifications automatically. The two most famous examples are the Mobility Workbench (Mwb) [14] and Jack [5], which were developed in the 90s.

Radu Mateescu, Gwen Salaün
An Overview of the mCRL2 Toolset and Its Recent Advances

The analysis of complex distributed systems requires dedicated software tools. The mCRL2 language and toolset have been developed to support such analysis. We highlight changes and improvements made to the toolset in recent years. On the one hand, these affect the scope of application, which has been broadened with extended support for data structures like infinite sets and functions. On the other hand, considerable progress has been made regarding the performance of our tools for state space generation and model checking, due to improvements in symbolic reduction techniques and due to a shift towards parity game-based solving. We also discuss the software architecture of the toolset, which was well suited to accommodate the above changes, and we address a number of case studies to illustrate the approach.

Sjoerd Cranen, Jan Friso Groote, Jeroen J. A. Keiren, Frank P. M. Stappers, Erik P. de Vink, Wieger Wesselink, Tim A. C. Willemse

Pushdown Systems; Boolean/Integer Programs

Analysis of Boolean Programs

Boolean programs are a popular abstract domain for static-analysis-based software model checking. Yet little is known about the complexity of model checking for this model of computation. This paper aims to fill this void by providing a comprehensive study of the worst-case complexity of several basic analyses of Boolean programs, including reachability analysis, cycle detection, LTL, CTL, and CTL* model checking. We present algorithms for these problems and show that our algorithms are all optimal by providing matching lower bounds. We also identify particular classes of Boolean programs which are easier to analyse, and compare our results to prior work on pushdown model checking.

Patrice Godefroid, Mihalis Yannakakis
Weighted Pushdown Systems with Indexed Weight Domains

The reachability analysis of weighted pushdown systems is a very powerful technique in the verification and analysis of recursive programs. Each transition rule of a weighted pushdown system is associated with an element of a bounded semiring representing the weight of the rule. However, we have realized that the boundedness restriction is too strict and that the formulation of weighted pushdown systems is not general enough for some applications.

To generalize weighted pushdown systems, we first introduce the notion of stack signatures that summarize the effect of a computation of a pushdown system and formulate pushdown systems as automata over the monoid of stack signatures. We then generalize weighted pushdown systems by introducing semirings indexed by the monoid and weaken the boundedness to local boundedness.

Yasuhiko Minamide
Underapproximation of Procedure Summaries for Integer Programs

We show how to underapproximate the procedure summaries of recursive programs over the integers using off-the-shelf analyzers for non-recursive programs. The novelty of our approach is that the non-recursive program we compute may capture unboundedly many behaviors of the original recursive program for which stack usage cannot be bounded. Moreover, we identify a class of recursive programs on which our method terminates and returns the precise summary relations without underapproximation. Doing so, we generalize a similar result for non-recursive programs to the recursive case. Finally, we present experimental results of an implementation of our method applied on a number of examples.

Pierre Ganty, Radu Iosif, Filip Konečný

Runtime Verification and Model Checking

Runtime Verification Based on Register Automata

We propose TOPL automata as a new method for runtime verification of systems with unbounded resource generation. Paradigmatic such systems are object-oriented programs which can dynamically generate an unbounded number of fresh object identities during their execution. Our formalism is based on register automata, a particularly successful approach in automata over infinite alphabets which administers a finite-state machine with boundedly many input-storing registers. We show that TOPL automata are equally expressive to register automata and yet suitable to express properties of programs. Compared to other runtime verification methods, our technique can handle a class of properties beyond the reach of current tools. We show in particular that properties which require value updates are not expressible with current techniques yet are naturally captured by TOPL machines. On the practical side, we present a tool for runtime verification of Java programs via TOPL properties, where the trade-off between the coverage and the overhead of the monitoring system is tunable by means of a number of parameters. We validate our technique by checking properties involving multiple objects and chaining of values on large open source projects.

Radu Grigore, Dino Distefano, Rasmus Lerchedahl Petersen, Nikos Tzevelekos
Unbounded Model-Checking with Interpolation for Regular Language Constraints

We present a decision procedure for the problem of, given a set of regular expressions R₁, …, Rₙ, determining whether R = R₁ ∩ ⋯ ∩ Rₙ is empty. Our solver, revenant, finitely unrolls automata for R₁, …, Rₙ, encoding each as a set of propositional constraints. If a SAT solver determines satisfiability then R is non-empty. Otherwise our solver uses unbounded model checking techniques to extract an interpolant from the bounded proof. This interpolant serves as an overapproximation of R. If the solver reaches a fixed point with the constraints remaining unsatisfiable, it has proven R to be empty. Otherwise, it increases the unrolling depth and repeats. We compare revenant with other state-of-the-art string solvers. Evaluation suggests that it behaves better for constraints that express the intersection of sets of regular languages, a case of interest in the context of verification.

Graeme Gange, Jorge A. Navas, Peter J. Stuckey, Harald Søndergaard, Peter Schachte
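
For contrast with the SAT-and-interpolation approach of the paper, the brute-force alternative is to build the product automaton explicitly and search it for a joint accepting state. The sketch below does exactly that for two small hand-written DFAs (the automata are invented); this explicit construction is what revenant is designed to avoid.

```python
# Brute-force baseline: decide whether the intersection of two regular
# languages is empty by BFS over the explicit product automaton.
from collections import deque

# DFA = (initial state, accepting states, transition dict keyed by (state, char))
ALPHABET = "ab"
A1 = ("q0", {"q1"}, {("q0", "a"): "q1", ("q0", "b"): "q0",
                     ("q1", "a"): "q1", ("q1", "b"): "q1"})   # contains an 'a'
A2 = ("p0", {"p0"}, {("p0", "a"): "p1", ("p0", "b"): "p0",
                     ("p1", "a"): "p0", ("p1", "b"): "p1"})   # even number of 'a's

def intersection_empty(dfa1, dfa2):
    (i1, f1, t1), (i2, f2, t2) = dfa1, dfa2
    start = (i1, i2)
    seen, queue = {start}, deque([start])
    while queue:
        s1, s2 = queue.popleft()
        if s1 in f1 and s2 in f2:
            return False                      # joint accepting state reachable
        for c in ALPHABET:
            nxt = (t1[(s1, c)], t2[(s2, c)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

print(intersection_empty(A1, A2))   # False: e.g. "aa" is in both languages
```
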
eVolCheck: Incremental Upgrade Checker for C

Software is not created at once. Rather, it grows incrementally version by version and evolves long after being first released. To be practical for software developers, software verification tools should be able to cope with changes. In this paper, we present a tool, eVolCheck, that focuses on incremental verification of software as it evolves. During the software evolution the tool maintains abstractions of program functions, function summaries, derived using Craig interpolation. In each check, the function summaries are used to localize verification of an upgrade to analysis of the modified functions. Experimental evaluation on a range of benchmarks shows substantial speedup of incremental upgrade checking with eVolCheck in contrast to checking each version from scratch.

Grigory Fedyukovich, Ondrej Sery, Natasha Sharygina
Intertwined Forward-Backward Reachability Analysis Using Interpolants

In this work we develop a novel SAT-based verification approach which is based on interpolation. The novelty of our approach is in extracting interpolants in both forward and backward manner and exploiting them for an intertwined approximated forward and backward reachability analysis. Our approach is also mostly local and avoids unrolling of the checked model as much as possible. This results in an efficient and complete SAT-based verification algorithm.

We implemented our algorithm and compared it with both McMillan’s interpolation-based algorithm and with IC3, on real-life industrial designs as well as on examples from the HWMCC’11 benchmark. In many cases, our algorithm outperformed both methods.

Yakir Vizel, Orna Grumberg, Sharon Shoham

Concurrency

An Integrated Specification and Verification Technique for Highly Concurrent Data Structures

We present a technique for automatically verifying safety properties of concurrent programs, in particular programs which rely on subtle dependencies of local states of different threads, such as lock-free implementations of stacks and queues in an environment without garbage collection. Our technique addresses the joint challenges of infinite-state specifications, an unbounded number of threads, and an unbounded heap managed by explicit memory allocation. Our technique builds on the automata-theoretic approach to model checking, in which a specification is given by an automaton that observes the execution of a program and accepts executions that violate the intended specification. We extend this approach by allowing specifications to be given by a class of infinite-state automata. We show how such automata can be used to specify queues, stacks, and other data structures, by extending a data-independence argument. For verification, we develop a shape analysis, which tracks correlations between pairs of threads, and a novel abstraction to make the analysis practical. We have implemented our method and used it to verify programs, some of which have not been verified by any other automatic method before.

Parosh Aziz Abdulla, Frédéric Haziza, Lukáš Holík, Bengt Jonsson, Ahmed Rezine
A Verification-Based Approach to Memory Fence Insertion in PSO Memory Systems

This paper addresses the problem of verifying and correcting programs when they are moved from a sequential consistency execution environment to a relaxed memory context. Specifically, it considers the PSO (Partial Store Order) memory model, which corresponds to the use of a store buffer for each shared variable and each process. We also will consider, as an intermediate step, the TSO (Total Store Order) memory model, which corresponds to the use of one store buffer per process.

The proposed approach extends a previously developed verification tool that uses finite automata to symbolically represent the possible contents of the store buffers. Its starting point is a program that is correct for the usual Sequential Consistency (SC) memory model, but that might be incorrect under PSO with respect to safety properties. This program is then first analyzed and corrected for the TSO memory model, and then this TSO-safe program is analyzed and corrected under PSO, producing a PSO-safe program. To obtain a TSO-safe program, only store-load fences (TSO only allows store-load relaxations) are introduced into the program. Finally, to produce a PSO-safe program, only store-store fences (PSO additionally allows store-store relaxations) are introduced.

An advantage of our technique is that the underlying symbolic verification tool makes a full exploration of program behaviors possible even for cyclic programs, which makes our approach broadly applicable. The method has been tested with an experimental implementation and can effectively handle a series of classical examples.

Alexander Linden, Pierre Wolper

Learning and Abduction

Identifying Dynamic Data Structures by Learning Evolving Patterns in Memory

We investigate whether dynamic data structures in pointer programs can be identified by analysing program executions only. This paper describes a first step towards solving this problem by applying machine learning and pattern recognition techniques to analyse executions of C programs. By searching for repeating temporal patterns in memory caused by multiple invocations of data-structure operations, we are able to first locate and then identify these operations. Applying a prototypic tool implementing our approach to pointer programs that employ, e.g., lists, queues and stacks, we show that the identified operations can accurately determine the data structures used.

David H. White, Gerald Lüttgen
Synthesis of Circular Compositional Program Proofs via Abduction

This paper presents a technique for synthesizing circular compositional proofs of program correctness. Our technique uses abductive inference to decompose the proof into small lemmas, which are represented as small program fragments annotated with pre- and post-conditions. Different tools are used to discharge each different lemma, combining the strengths of different verifiers. Furthermore, each lemma concerns the correctness of small syntactic fragments of the program, addressing scalability concerns. We have implemented this technique and used it to combine four different verification tools. Our experiments show that our technique can be successfully used to verify applications that cannot be verified by any individual technique.

Boyang Li, Isil Dillig, Thomas Dillig, Ken McMillan, Mooly Sagiv

Timed Automata

As Soon as Probable: Optimal Scheduling under Stochastic Uncertainty

In this paper we continue our investigation of stochastic (and hence dynamic) variants of classical scheduling problems. Such problems can be modeled as duration probabilistic automata (DPA), a well-structured class of acyclic timed automata where temporal uncertainty is interpreted as a bounded uniform distribution of task durations [18]. In [12] we have developed a framework for computing the expected performance of a given scheduling policy. In the present paper we move from analysis to controller synthesis and develop a dynamic-programming style procedure for automatically synthesizing (or approximating) expected-time-optimal schedulers, using an iterative computation of a stochastic time-to-go function over the state and clock space of the automaton.

Jean-François Kempf, Marius Bozga, Oded Maler
Integer Parameter Synthesis for Timed Automata

We provide a subclass of parametric timed automata (PTA) that we can actually and efficiently analyze, and we argue that it retains most of the practical usefulness of PTA. The currently most useful known subclass of PTA, L/U automata, has a strong syntactical restriction for practical purposes, and we show that the associated theoretical results are mixed. We therefore advocate for a different restriction scheme: since in classical timed automata, real-valued clocks are always compared to integers for all practical purposes, we also search for parameter values as bounded integers. We show that the problem of the existence of parameter values such that some TCTL property is satisfied is PSPACE-complete. In such a setting, we can also of course synthesize all the values of parameters and we give symbolic algorithms, for reachability and unavoidability properties, to do it efficiently, i.e., without an explicit enumeration. This also has the practical advantage of giving the result as symbolic constraints between the parameters. We finally report on a few experimental results to illustrate the practical usefulness of the approach.

Aleksandra Jovanović, Didier Lime, Olivier H. Roux

Security and Access Control

LTL Model-Checking for Malware Detection

Nowadays, malware has become a critical security threat. Traditional anti-virus approaches such as signature-based techniques and code emulation have become insufficient and easy to circumvent. Thus, it is important to have efficient and robust malware detectors. In [20,19], CTL model-checking for PushDown Systems (PDSs) was shown to be a robust technique for malware detection. However, the approach of [20,19] lacks precision and runs out of memory in several cases. In this work, we show that several malware specifications can be expressed in a more precise manner using LTL instead of CTL. Moreover, LTL can express malicious behaviors that cannot be expressed in CTL. Thus, since LTL model-checking for PDSs is polynomial in the size of the PDS while CTL model-checking for PDSs is exponential, we propose to use LTL model-checking for PDSs for malware detection. Our approach consists of: (1) Modeling the binary program as a PDS, which allows tracking the program’s stack (needed for malware detection). (2) Introducing a new logic (SLTPL) to specify the malicious behaviors. SLTPL is an extension of LTL with variables, quantifiers, and predicates over the stack. (3) Reducing the malware detection problem to SLTPL model-checking for PDSs. We reduce this model-checking problem to the emptiness problem of symbolic Büchi PDSs. We implemented our techniques in a tool, and we applied it to detect several viruses. Our results are encouraging.

Fu Song, Tayssir Touili
Policy Analysis for Self-administrated Role-Based Access Control

Current techniques for security analysis of administrative role-based access control (ARBAC) policies restrict themselves to the separate administration assumption that essentially separates administrative roles from regular ones. The naive algorithm of tracking all users is all that is known for the analysis of ARBAC policies without separate administration, and the state space explosion that this results in precludes building effective tools. In contrast, the separate administration assumption greatly simplifies the analysis since it makes it sufficient to track only one user at a time. However, separation limits the expressiveness of the models and restricts modeling distributed administrative control. We undertake a fundamental study of the analysis of ARBAC policies without the separate administration restriction, and show that analysis algorithms can be built that track only a bounded number of users, where the bound depends only on the number of administrative roles in the system. Using this fundamental insight paves the way for us to design an involved heuristic to further tame the state space explosion in practical systems. Our results are also very effective when applied to policies designed under the separate administration restriction. We implement our techniques and report on experiments conducted on several realistic case studies.

Anna Lisa Ferrara, P. Madhusudan, Gennaro Parlato
Model Checking Agent Knowledge in Dynamic Access Control Policies

In this paper, we develop a modeling technique based on interpreted systems in order to verify temporal-epistemic properties over access control policies. This approach enables us to detect information flow vulnerabilities in dynamic policies by verifying the knowledge of the agents gained by both reading and reasoning about system information. To overcome the practical limitations of state explosion in model-checking temporal-epistemic properties, we introduce a novel abstraction and refinement technique for temporal-epistemic safety properties in ACTLK (ACTL with knowledge modality K) and a class of interesting properties that does fall in this category.

Masoud Koleini, Eike Ritter, Mark Ryan

Frontiers (Graphics and Quantum)

Automatic Testing of Real-Time Graphics Systems

In this paper we deal with the general topic of verification of real-time graphics systems. In particular we present the Runtime Graphics Verification Framework (RUGVEF), where we combine techniques from runtime verification and image analysis to automate testing of graphics systems. We provide a proof of concept in the form of a case study, where RUGVEF is evaluated in an industrial setting to verify an on-air graphics playout system used by the Swedish Broadcasting Corporation. We report on experimental results from the evaluation, in particular the discovery of five previously unknown defects.

Robert Nagy, Gerardo Schneider, Aram Timofeitchik
Equivalence Checking of Quantum Protocols

Quantum Information Processing (QIP) is an emerging area at the intersection of physics and computer science. It aims to establish the principles of communication and computation for systems based on the theory of quantum mechanics. Interesting QIP protocols such as quantum key distribution, teleportation, and blind quantum computation have already been realised in the laboratory and are now in the realm of mainstream industrial applications. The complexity of these protocols, along with possible inaccuracies in implementation, demands systematic and formal analysis. In this paper, we present a new technique and a tool, with a high-level interface, for verification of quantum protocols using equivalence checking. Previous work by Gay, Nagarajan and Papanikolaou used model-checking to verify quantum protocols represented in the stabilizer formalism, a restricted model which can be simulated efficiently on classical computers. Here, we are able to go beyond stabilizer states and verify protocols efficiently on all input states.

Ebrahim Ardeshir-Larijani, Simon J. Gay, Rajagopal Nagarajan

Functional Programs and Types

Encoding Monomorphic and Polymorphic Types

Most automatic theorem provers are restricted to untyped logics, and existing translations from typed logics are bulky or unsound. Recent research proposes monotonicity as a means to remove some clutter. Here we pursue this approach systematically, analysing formally a variety of encodings that further improve on efficiency while retaining soundness and completeness. We extend the approach to rank-1 polymorphism and present alternative schemes that lighten the translation of polymorphic symbols based on the novel notion of “cover”. The new encodings are implemented, and partly proved correct, in Isabelle/HOL. Our evaluation finds them vastly superior to previous schemes.

Jasmin Christian Blanchette, Sascha Böhme, Andrei Popescu, Nicholas Smallbone
Deriving Probability Density Functions from Probabilistic Functional Programs

The probability density function of a probability distribution is a fundamental concept in probability theory and a key ingredient in various widely used machine learning methods. However, the necessary framework for compiling probabilistic functional programs to density functions has only recently been developed. In this work, we present a density compiler for a probabilistic language with discrete and continuous distributions, and discrete observations, and provide a proof of its soundness. The compiler greatly reduces the development effort of domain experts, which we demonstrate by solving inference problems from various scientific applications, such as modelling the global carbon cycle, using a standard Markov chain Monte Carlo framework.

Sooraj Bhat, Johannes Borgström, Andrew D. Gordon, Claudio Russo

Tool Demonstrations

Polyglot: Systematic Analysis for Multiple Statechart Formalisms

Polyglot is a tool for the systematic analysis of systems integrated from components built using multiple Statechart formalisms. In Polyglot, Statechart models are translated into a common Java representation with pluggable semantics for different Statechart variants. Polyglot is tightly integrated with the Java Pathfinder verification tool-set, providing analysis and test-case generation capabilities. The tool has been applied in the context of safety-critical software systems whose interacting components were modeled using multiple Statechart formalisms.

Daniel Balasubramanian, Corina S. Păsăreanu, Gábor Karsai, Michael R. Lowry
Memorax, a Precise and Sound Tool for Automatic Fence Insertion under TSO

We introduce Memorax, a tool for the verification of control state reachability (i.e., safety properties) of concurrent programs manipulating finite range and integer variables and running on top of weak memory models. The verification task is non-trivial as it involves exploring state spaces of arbitrary or even infinite sizes. Even for programs that only manipulate finite range variables, the sizes of the store buffers could grow unboundedly, and hence the state spaces that need to be explored could be of infinite size. In addition, Memorax incorporates an interpolation-based CEGAR loop to make possible the verification of control state reachability for concurrent programs involving integer variables. The reachability procedure is used to automatically compute possible memory fence placements that guarantee the unreachability of bad control states under TSO. In fact, for programs only involving finite range variables and running on TSO, the fence insertion functionality is complete, i.e., it will find all minimal sets of memory fence placements (minimal in the sense that removing any fence would result in the reachability of the bad control states). This makes Memorax the first freely available, open source, push-button verification and fence insertion tool for programs running under TSO with integer variables.

Parosh Aziz Abdulla, Mohamed Faouzi Atig, Yu-Fang Chen, Carl Leonardsson, Ahmed Rezine
BULL: A Library for Learning Algorithms of Boolean Functions

We present the tool BULL (Boolean fUnction Learning Library), the first publicly available implementation of learning algorithms for Boolean functions. The tool is implemented in C with interfaces to C++, Java, and OCaml. Experimental results show significant advantages of Boolean function learning algorithms over all variants of the L* learning algorithm for regular languages.

Yu-Fang Chen, Bow-Yaw Wang
AppGuard – Enforcing User Requirements on Android Apps

The success of Android phones makes them a prominent target for malicious software, in particular since the Android permission system turned out to be inadequate to protect the user against security and privacy threats. This work presents AppGuard, a powerful and flexible system for the enforcement of user-customizable security policies on untrusted Android applications. AppGuard does not require any changes to a smartphone’s firmware or root access. Our system offers complete mediation of security-relevant methods based on callee-site inline reference monitoring. We demonstrate the general applicability of AppGuard by several case studies, e.g., removing permissions from overly curious apps as well as defending against several recent real-world attacks on Android phones. Our technique exhibits very little space and runtime overhead. AppGuard is publicly available, has been invited to the Samsung Apps market, and has had more than 500,000 downloads so far.

Michael Backes, Sebastian Gerling, Christian Hammer, Matteo Maffei, Philipp von Styp-Rekowsky

Explicit-State Model Checking

Model Checking Database Applications

We describe the design of DPF, an explicit-state model checker for database-backed web applications. DPF interposes between the program and the database layer, and precisely tracks the effects of queries made to the database. We experimentally explore several implementation choices for the model checker: stateful vs. stateless search, state storage and backtracking strategies, and dynamic partial-order reduction. In particular, we define independence relations at different granularity levels of the database (at the database, relation, record, attribute, or cell level), and show the effectiveness of dynamic partial-order reduction based on these relations.

We apply DPF to look for atomicity violations in web applications. Web applications maintain shared state in databases, and typically there are relatively few database accesses for each request. This implies concurrent interactions are limited to relatively few and well-defined points, enabling our model checker to scale. We explore the performance implications of various design choices and demonstrate the effectiveness of DPF on a set of Java benchmarks. Our model checker was able to find new concurrency bugs in two open-source web applications, including in a standard example distributed with the Spring framework.

Milos Gligoric, Rupak Majumdar
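The granularity choice mentioned above can be made concrete: two database operations are independent, and so need not be explored in both orders during partial-order reduction, if they cannot affect each other. Below is a hedged sketch of an independence check at record granularity; the operation representation is invented, and DPF's actual interposition layer tracks richer information and supports the other granularity levels as well.

```python
# Illustrative record-level independence check for dynamic partial-order
# reduction over database operations.
from dataclasses import dataclass

@dataclass(frozen=True)
class DbOp:
    kind: str        # "read" or "write"
    relation: str    # table name
    record: object   # primary key of the touched record (None = whole table)

def independent(op1: DbOp, op2: DbOp) -> bool:
    """Two operations commute at record granularity if they touch different
    relations, provably different records, or are both reads."""
    if op1.relation != op2.relation:
        return True
    if op1.record is not None and op2.record is not None and op1.record != op2.record:
        return True
    return op1.kind == "read" and op2.kind == "read"

# Two writes to different rows of the same table commute at record level,
# but would conflict under the coarser relation-level abstraction.
print(independent(DbOp("write", "users", 1), DbOp("write", "users", 2)))  # True
print(independent(DbOp("write", "users", 1), DbOp("read",  "users", 1)))  # False
```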
Efficient Property Preservation Checking of Model Refinements

In model-driven software development, models and model refinements are used to create software. To automatically generate correct software from abstract models by means of model refinement, desirable properties of the initial models must be preserved. We propose an explicit-state model checking technique to determine whether refinements are property preserving. We use networks of labelled transition systems (LTSs) to represent models with concurrent components, and formalise refinements as systems of LTS transformation rules. Property preservation checking involves determining how a rule system relates to an input network, and checking bisimilarity between behaviour subjected to transformation and the corresponding behaviour after transformation. In this way, one avoids generating the entire LTS of the new model. Experimental results demonstrate speedups of several orders of magnitude.

Anton Wijs, Luc Engelen
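
The core check described above is a bisimilarity test between the behaviour subjected to transformation and the corresponding behaviour after transformation; the paper's contribution is performing this check locally, without generating the entire LTS of the new model. Purely to illustrate what such a check computes, the sketch below runs a naive greatest-fixpoint bisimilarity test on two small invented LTSs.

```python
# Naive greatest-fixpoint computation of strong bisimilarity between two
# small labelled transition systems (illustration only).
def bisimilar(lts1, lts2, init1, init2):
    """lts1/lts2 map a state to a set of (label, successor) pairs."""
    # Work on the disjoint union of both state spaces.
    succ = {("L", s): {(a, ("L", t)) for a, t in ts} for s, ts in lts1.items()}
    succ.update({("R", s): {(a, ("R", t)) for a, t in ts} for s, ts in lts2.items()})
    states = list(succ)
    rel = {(s, t) for s in states for t in states}
    changed = True
    while changed:
        changed = False
        for s, t in list(rel):
            # keep (s, t) only if each move of one can be matched by the other
            ok = (all(any(a == b and (s2, t2) in rel for b, t2 in succ[t])
                      for a, s2 in succ[s])
                  and all(any(a == b and (s2, t2) in rel for a, s2 in succ[s])
                          for b, t2 in succ[t]))
            if not ok:
                rel.discard((s, t))
                changed = True
    return (("L", init1), ("R", init2)) in rel

# Two coffee-machine LTSs that differ only in an internal state split.
m1 = {0: {("coin", 1)}, 1: {("coffee", 0)}}
m2 = {0: {("coin", 1)}, 1: {("coffee", 2)}, 2: {("coin", 3)}, 3: {("coffee", 0)}}
print(bisimilar(m1, m2, 0, 0))   # True: the two machines are bisimilar
```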

Büchi Automata

Strength-Based Decomposition of the Property Büchi Automaton for Faster Model Checking

The automata-theoretic approach for model checking of linear-time temporal properties involves the emptiness check of a large Büchi automaton. Specialized emptiness-check algorithms have been proposed for the cases where the property is represented by a weak or terminal automaton.

When the property automaton does not fall into these categories, a general emptiness check is required. This paper focuses on this class of properties. We refine previous approaches by classifying strongly connected components rather than automata, and suggest a decomposition of the property automaton into three smaller automata capturing the terminal, weak, and the remaining strong behaviors of the property. The three corresponding emptiness checks can be performed independently, using the most appropriate algorithm.

Such a decomposition approach can be used with any automata-based model checker. We illustrate the interest of this new approach using explicit and symbolic LTL model checkers.

Etienne Renault, Alexandre Duret-Lutz, Fabrice Kordon, Denis Poitrenaud
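
A rough, state-based approximation of the SCC classification the decomposition relies on: an accepting SCC is weak if all of its states are accepting (so every cycle inside it is accepting), terminal if it is weak and cannot be escaped by any letter, and strong otherwise. The sketch below classifies the SCCs of an invented state-based Büchi automaton along these lines, with networkx as an assumed dependency; the paper itself works with transition-based acceptance.

```python
# Rough SCC "strength" classification for a small, invented Büchi automaton.
import networkx as nx

ALPHABET = {"a", "b"}
# transitions: (source, letter, target); acc: accepting states
trans = [(0, "a", 1), (0, "b", 0), (1, "a", 1), (1, "b", 2),
         (2, "a", 2), (2, "b", 2), (1, "a", 0)]
acc = {1, 2}

G = nx.DiGraph()
G.add_edges_from((s, t) for s, _, t in trans)

def strength(scc):
    nontrivial = any((s, t) in G.edges for s in scc for t in scc)
    if not nontrivial or not (scc & acc):
        return "non-accepting"
    if not scc <= acc:
        return "strong"      # mixes accepting and non-accepting cycles
    # terminal: weak and complete, i.e. every letter stays inside the SCC
    complete = all(any(s2 in scc for s1, l, s2 in trans if s1 == s and l == c)
                   for s in scc for c in ALPHABET)
    return "terminal" if complete else "weak"

for scc in nx.strongly_connected_components(G):
    print(sorted(scc), "->", strength(scc))
```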

Competition on Software Verification

Second Competition on Software Verification
(Summary of SV-COMP 2013)

This report describes the 2nd International Competition on Software Verification (SV-COMP 2013), which is the second edition of this thorough evaluation of fully automatic verifiers for software programs. The reported results represent the 2012 state-of-the-art in automatic software verification, in terms of effectiveness and efficiency, and as available and participated. The benchmark set of verification tasks consists of 2 315 programs, written in C, and exposing features of integers, heap-data structures, bit-vector operations, and concurrency; the properties include reachability and memory safety. The competition is again organized as a satellite event of TACAS.

Dirk Beyer
CPAchecker with Explicit-Value Analysis Based on CEGAR and Interpolation
(Competition Contribution)

CPAchecker is a freely available software-verification framework, built on the concepts of Configurable Program Analysis (CPA). Within CPAchecker, several such CPAs are available, e.g., a Predicate-CPA, building on the predicate domain, as well as an Explicit-CPA, in which an abstract state is represented as an explicit variable assignment. In the CPAchecker configuration we are submitting, the highly efficient Explicit-CPA, backed by interpolation-based counterexample-guided abstraction refinement, joins forces with an auxiliary Predicate-CPA in a setup utilizing dynamic precision adjustment. This combination constitutes a highly promising verification tool, and thus, we submit a configuration making use of this analysis approach.

Stefan Löwe
CPAchecker with Sequential Combination of Explicit-State Analysis and Predicate Analysis
(Competition Contribution)

CPAchecker is an open-source framework for software verification, based on the concepts of Configurable Program Analysis (CPA). We submit a CPAchecker configuration that uses a sequential combination of two approaches. It starts with an explicit-state analysis, and, if no answer can be found within some time, switches to a predicate analysis with adjustable-block encoding and CEGAR.

Philipp Wendler
CSeq: A Sequentialization Tool for C
(Competition Contribution)

Sequentialization translates concurrent programs into equivalent non-deterministic sequential programs so that the different concurrent schedules no longer need to be handled explicitly. It can thus be used as a concurrency pre-processor for many sequential program verification techniques. CSeq implements sequentialization for C and uses ESBMC as sequential verification backend [5].

Bernd Fischer, Omar Inverso, Gennaro Parlato
Handling Unbounded Loops with ESBMC 1.20
(Competition Contribution)

We extended ESBMC to exploit the combination of context-bounded symbolic model checking and k-induction to prove safety properties in single- and multi-threaded ANSI-C programs with unbounded loops. We now first try to verify by induction that the safety property holds in the system. If that fails, we search for a bounded reachable state that constitutes a counterexample.

Jeremy Morse, Lucas Cordeiro, Denis Nicole, Bernd Fischer
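
The combination described above can be pictured as a simple driver loop: for increasing k, first try the inductive step, and only if that fails look for a counterexample of length k. A schematic sketch follows; base_case and inductive_step are hypothetical placeholders for the bounded model checking queries a tool like ESBMC discharges with an SMT solver, and only the control structure is the point.

```python
# Schematic k-induction driver (placeholders, not ESBMC's actual API).
def k_induction(base_case, inductive_step, max_k=50):
    """base_case(k): returns a counterexample trace of length <= k, or None.
    inductive_step(k): True iff k consecutive safe steps from an arbitrary
    state imply safety of the next step."""
    for k in range(1, max_k + 1):
        if inductive_step(k):
            return ("safe", k)               # property proven by k-induction
        cex = base_case(k)
        if cex is not None:
            return ("unsafe", cex)           # bounded counterexample found
    return ("unknown", max_k)                # give up after max_k iterations

# Toy usage: a system whose inductive step succeeds at k = 3.
print(k_induction(lambda k: None, lambda k: k >= 3))   # ('safe', 3)
```
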
LLBMC: Improved Bounded Model Checking of C Programs Using LLVM
(Competition Contribution)

LLBMC is a tool for detecting bugs and runtime errors in C and C++ programs. It is based on bounded model checking using an SMT solver and thus achieves bit-accurate precision. A distinguishing feature of LLBMC in contrast to other bounded model checking tools for C programs is that it operates on a compiler intermediate representation and not directly on the source code.

Stephan Falke, Florian Merz, Carsten Sinz
Predator: A Tool for Verification of Low-Level List Manipulation
(Competition Contribution)

Predator is a tool for automated formal verification of sequential C programs operating with pointers and linked lists. The core algorithms of Predator were originally inspired by works on separation logic with higher-order list predicates, but they are now purely graph-based and significantly extended to support various forms of low-level memory manipulation used in system-level code. This paper briefly introduces Predator and describes its participation in the Software Verification Competition SV-COMP’13 held at TACAS’13.

Kamil Dudka, Petr Müller, Petr Peringer, Tomáš Vojnar
Symbiotic: Synergy of Instrumentation, Slicing, and Symbolic Execution
(Competition Contribution)

Symbiotic is a tool for detection of bugs described by finite state machines in C programs. The tool combines three well-known techniques: instrumentation, program slicing, and symbolic execution. This paper briefly describes the approach of Symbiotic including its strengths, weaknesses, and modifications for SV-COMP 2013. Architecture and installation of the tool are described as well.

Jiri Slaby, Jan Strejček, Marek Trtík
Threader: A Verifier for Multi-threaded Programs
(Competition Contribution)

Threader is a tool that automates verification of safety and termination properties for multi-threaded C programs. The distinguishing feature of Threader is its use of reasoning that is compositional with regard to the thread structure of the verified program. This paper describes the verification approach taken by Threader and provides instructions on how to install and use the tool.

Corneliu Popeea, Andrey Rybalchenko
UFO: Verification with Interpolants and Abstract Interpretation
(Competition Contribution)

The algorithms underlying Ufo are described in [1-3]. The Ufo tool is described in more detail in [4]. Ufo marries the power and efficiency of numerical Abstract Interpretation (AI) domains [6] with the generalizing ability of interpolation-based software verification in an abstraction refinement loop.

Aws Albarghouthi, Arie Gurfinkel, Yi Li, Sagar Chaki, Marsha Chechik
Ultimate Automizer with SMTInterpol
(Competition Contribution)

Ultimate Automizer is an automatic software verification tool for C programs. This tool is the first implementation of trace abstraction, which is an automata-theoretic approach to software verification. The implemented algorithm uses nested interpolants in its interprocedural program analysis. The interpolating SMT solver SMTInterpol is used to compute Craig interpolants.

Matthias Heizmann, Jürgen Christ, Daniel Dietsch, Evren Ermis, Jochen Hoenicke, Markus Lindenmann, Alexander Nutz, Christian Schilling, Andreas Podelski
Backmatter
Metadata
Title
Tools and Algorithms for the Construction and Analysis of Systems
Edited by
Nir Piterman
Scott A. Smolka
Copyright year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-36742-7
Print ISBN
978-3-642-36741-0
DOI
https://doi.org/10.1007/978-3-642-36742-7