2015 | Book

Tools and Algorithms for the Construction and Analysis of Systems

21st International Conference, TACAS 2015, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, April 11-18, 2015, Proceedings

Edited by: Christel Baier, Cesare Tinelli

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

This book constitutes the proceedings of the 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2015, which took place in London, UK, in April 2015, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015. The 45 papers included in this volume, consisting of 27 research papers, 2 case-study papers, 7 regular tool papers and 9 tool demonstration papers, were carefully reviewed and selected from 164 submissions. In addition, the book contains one invited contribution. The papers have been organized in topical sections on hybrid systems; program analysis; verification and abstraction; tool demonstrations; stochastic models; SAT and SMT; partial order reduction, bisimulation, and fairness; competition on software verification; parameter synthesis; program synthesis; program and runtime verification; temporal logic and automata; and model checking.

Table of Contents

Frontmatter

Invited Talk

Frontmatter
Scalable Timing Analysis with Refinement

Traditional timing analysis techniques rely on composing system-level worst-case behavior with local worst-case behaviors of individual components. In many complex real-time systems, no single local worst-case behavior exists for each component, and one generally has to enumerate all combinations of individual local behaviors to find the global worst case. This paper presents a scalable timing analysis technique based on abstraction refinement, which provides effective guidance to significantly prune the state space and quickly verify the desired timing properties. We first establish the general framework of the method, and then apply it to solve the analysis problem for several different real-time task models.

Nan Guan, Yue Tang, Jakaria Abdullah, Martin Stigge, Wang Yi

Hybrid Systems

Frontmatter
A Formally Verified Hybrid System for the Next-Generation Airborne Collision Avoidance System

The Next-Generation Airborne Collision Avoidance System (ACAS X) is intended to be installed on all large aircraft to give advice to pilots and prevent mid-air collisions with other aircraft. It is currently being developed by the Federal Aviation Administration (FAA). In this paper we determine the geometric configurations under which the advice given by ACAS X is safe under a precise set of assumptions and formally verify these configurations using hybrid systems theorem proving techniques. We conduct an initial examination of the current version of the real ACAS X system and discuss some cases where our safety theorem conflicts with the actual advisory given by that version, demonstrating how formal, hybrid approaches are helping ensure the safety of ACAS X. Our approach is general and could also be used to identify unsafe advice issued by other collision avoidance systems or confirm their safety.

Jean-Baptiste Jeannin, Khalil Ghorbal, Yanni Kouskoulas, Ryan Gardner, Aurora Schmidt, Erik Zawadzki, André Platzer
Verified Reachability Analysis of Continuous Systems

Ordinary differential equations (ODEs) are often used to model the dynamics of (often safety-critical) continuous systems.

This work presents the formal verification of an algorithm for reachability analysis in continuous systems. The algorithm features adaptive Runge-Kutta methods and rigorous numerics based on affine arithmetic. It is proved to be sound with respect to the existing formalization of ODEs in Isabelle/HOL. Optimizations like splitting, intersecting and collecting reachable sets are necessary to analyze chaotic systems. Experiments demonstrate the practical usability of our developments.

Fabian Immler
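
To make the set-based view concrete: a rigorous integrator propagates an enclosure of all possible states rather than a single point. The unverified Python sketch below propagates an interval box with Euler steps; it is a drastic simplification of the paper's method, which uses adaptive Runge-Kutta methods with affine arithmetic, additionally encloses the truncation error of each step, and is formally verified in Isabelle/HOL.

```python
# Toy illustration of set-based integration: propagate an interval enclosure
# of x' = f(x) with Euler steps. A rigorous tool would also bound the
# truncation error of each step; this sketch only shows the interval flavor.

from typing import Tuple

Interval = Tuple[float, float]

def add(a: Interval, b: Interval) -> Interval:
    return (a[0] + b[0], a[1] + b[1])

def scale(c: float, a: Interval) -> Interval:
    lo, hi = c * a[0], c * a[1]
    return (min(lo, hi), max(lo, hi))       # handles negative scalars

def euler_enclosure(x0: Interval, f, h: float, steps: int) -> Interval:
    """Interval-wise x_{k+1} = x_k + h * f(x_k)."""
    x = x0
    for _ in range(steps):
        x = add(x, scale(h, f(x)))
    return x

# Example: x' = -x starting from the box [0.9, 1.1].
print(euler_enclosure((0.9, 1.1), lambda x: scale(-1.0, x), h=0.01, steps=100))
```
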
HyComp: An SMT-Based Model Checker for Hybrid Systems

HyComp is a model checker for hybrid systems based on Satisfiability Modulo Theories (SMT). HyComp takes as input networks of hybrid automata specified using the HyDI symbolic language. HyComp relies on the encoding of the network into an infinite-state transition system, which can be analyzed using SMT-based verification techniques (e.g. BMC, K-induction, IC3). The tool features specialized encodings of the automata network and can discretize various kinds of dynamics. HyComp can verify invariant and LTL properties, and scenario specifications; it can also perform synthesis of parameters ensuring the satisfaction of a given (invariant) property. All these features are provided either through specialized algorithms, as in the case of scenario or LTL verification, or by applying off-the-shelf algorithms based on SMT. We describe the tool in terms of functionalities, architecture, and implementation, and we present the results of an experimental evaluation.

Alessandro Cimatti, Alberto Griggio, Sergio Mover, Stefano Tonetta
C2E2: A Verification Tool for Stateflow Models

Mathworks’ Stateflow is a predominant environment for modeling embedded and cyber-physical systems where control software interacts with physical processes. We present Compare-Execute-Check-Engine (C2E2)—a verification tool for continuous and hybrid Stateflow models. It checks bounded time invariant properties of models with nonlinear dynamics, and discrete transitions with guards and resets. C2E2 transforms the model, generates simulations using a validated numerical solver, and then computes reachtube over-approximations with increasing precision. For this last step it uses annotations that have to be added to the model. These annotations are extensions of proof certificates studied in Control Theory and can be automatically obtained for linear dynamics. The C2E2 algorithm is sound and it is guaranteed to terminate if the system is robustly safe (or unsafe) with respect to perturbations of guards and invariants of the model. We present the architecture of C2E2, its workflow, and examples illustrating its potential role in model-based design, verification, and validation.

Parasara Sridhar Duggirala, Sayan Mitra, Mahesh Viswanathan, Matthew Potok

Program Analysis

Frontmatter
Non-cumulative Resource Analysis

Existing cost analysis frameworks have been defined for cumulative resources, which keep on increasing along the computation. Traditional cumulative resources are execution time, number of executed steps, amount of memory allocated, and energy consumption. Non-cumulative resources are acquired and (possibly) released along the execution. Examples of non-cumulative cost are memory usage in the presence of garbage collection, the number of connections established that are later closed, or resources requested from a virtual host which are released after use. We present, to the best of our knowledge, the first generic static analysis framework to infer an upper bound on the peak cost for non-cumulative types of resources. Our analysis comprises several components: (1) a pre-analysis to infer when resources are being used simultaneously, (2) a program-point resource analysis which infers an upper bound on the cost at the points of interest (namely the points where resources are acquired), and (3) the elimination from the upper bounds obtained in (2) of those accumulated resources that are not used simultaneously. We report on a prototype implementation of our analysis that can be used on a simple imperative language.

Elvira Albert, Jesús Correas Fernández, Guillermo Román-Díez
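
A dynamic analogue makes the distinction concrete: for a non-cumulative resource, the quantity to bound is the maximum amount held at any one time, not the total ever acquired. The hypothetical trace-based check below is only illustrative; the paper bounds the peak statically, without running the program.

```python
# Peak cost of a non-cumulative resource over an acquire/release event trace:
# what matters is the maximum held simultaneously, not the cumulative total.

def peak_cost(trace):
    """trace: list of ('acquire', n) / ('release', n) events."""
    held = peak = 0
    for op, n in trace:
        held += n if op == "acquire" else -n
        peak = max(peak, held)
    return peak

trace = [("acquire", 3), ("release", 3), ("acquire", 2), ("acquire", 1)]
print(peak_cost(trace))   # peak is 3, while the cumulative total acquired is 6
```
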
Value Slice: A New Slicing Concept for Scalable Property Checking

A backward slice is a commonly used preprocessing step for scaling property checking. For large programs, though, the reduced size of the slice may still be too large for verifiers to handle. We propose an aggressive slicing method that, apart from slicing out the same statements as a backward slice, also eliminates computations that only decide whether the point of property assertion is reachable. However, for precision, we also carefully identify and retain all computations that influence the values of the variables in the property. The resulting slice, called value slice, is smaller and scales better for property checking than backward slice.

We carry out experiments on property checking of industry-strength programs using three comparable slicing techniques: backward slice, value slice, and an even more aggressive slicing technique called thin slice that retains only those statements on which the variables in the property are data dependent. While backward slicing enables the highest precision and thin slice scales best, value slice based property checking comes close to the best in both scalability and precision. This makes value slice a good compromise between backward and thin slice for property checking.

Shrawan Kumar, Amitabha Sanyal, Uday P. Khedker
A Method for Improving the Precision and Coverage of Atomicity Violation Predictions

Atomicity violations are the most common non-deadlock concurrency bugs, which have been extensively studied in recent years. Since detecting the actual occurrences of atomicity violations is extremely hard and exhaustive testing of a multi-threaded program is in general impossible, many predictive methods have been proposed, which make error predictions based on a small number of instrumented interleaved executions. Predictive methods often make tradeoffs between precision and coverage. An over-approximate predictive method ensures coverage but lacks precision and thus may report a large number of false bugs. An under-approximate predictive method ensures precision but lacks coverage and thus can miss significant real bugs. This paper presents a post-prediction analysis method for improving the precision of the prediction results obtained through over-approximation while achieving better coverage than that obtained through under-approximation. Our method analyzes and filters the prediction results of over-approximation by evaluating a subset of read-after-write relationships without enforcing all of them as in existing under-approximation methods. Our post-prediction method is a static analysis method on the predicted traces from dynamic instrumentation of C/C++ executables, and is faster than dynamic replaying methods for ensuring precision.

Reng Zeng, Zhuo Sun, Su Liu, Xudong He
Commutativity of Reducers

In the Map-Reduce programming model for data-parallel computation, a reducer computes an output from a list of input values associated with a key. The inputs, however, may not arrive at a reducer in a fixed order due to non-determinism in transmitting key-value pairs over the network. This gives rise to the reducer commutativity problem, that is: is the reducer computation independent of the order of its inputs? In this paper, we study the reducer commutativity problem formally. We introduce a syntactic subset of integer programs termed integer reducers to model real-world reducers. In spite of syntactic restrictions, we show that checking commutativity of integer reducers over unbounded lists of exact integers is undecidable. It remains undecidable even with input lists of a fixed length. The problem however becomes decidable for reducers over unbounded input lists of bounded integers. We propose an efficient reduction of commutativity checking to conventional assertion checking and report experimental results using various off-the-shelf program analyzers.

Yu-Fang Chen, Chih-Duo Hong, Nishant Sinha, Bow-Yaw Wang
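
The property in question is easy to state operationally: a reducer is commutative if every permutation of its input list yields the same output. The sketch below refutes commutativity by brute-force permutation testing on a tiny list; this is not the paper's method, which instead reduces the check to conventional assertion checking so off-the-shelf program analyzers can decide it.

```python
# Two reducers over integer lists: one commutative, one not. Brute-force
# permutation testing can refute commutativity on small inputs only.

from itertools import permutations

def sum_reducer(values):         # commutative: addition ignores input order
    acc = 0
    for v in values:
        acc += v
    return acc

def clipped_reducer(values):     # not commutative: clipping interacts with order
    acc = 0
    for v in values:
        acc = min(acc + v, 10)
    return acc

def is_commutative_on(reducer, values):
    outputs = {reducer(list(p)) for p in permutations(values)}
    return len(outputs) == 1

print(is_commutative_on(sum_reducer, [5, 7, -2]))      # True
print(is_commutative_on(clipped_reducer, [5, 7, -2]))  # False (outputs 8 vs 10)
```
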

Verification and Abstraction

Frontmatter
Inferring Simple Solutions to Recursion-Free Horn Clauses via Sampling

Recursion-free Horn-clause constraints have received much recent attention in the verification community. This formalism extends Craig interpolation and has been proposed as a unifying formalism for expressing abstraction refinement. In abstraction refinement, it is often desirable to infer “simple” refinements, and researchers have studied techniques for inferring simple Craig interpolants. Drawing on this line of work, this paper presents a technique for inferring simple solutions to recursion-free Horn-clause constraints. Our contribution is a constraint solving algorithm that lazily samples fragments of the given constraints, whose solution spaces are used to form a simple solution for the whole. We have implemented a prototype of the constraint solving algorithm in a verification tool, and have confirmed that it is able to infer simple solutions that aid the verification process.

Hiroshi Unno, Tachio Terauchi
Analysis of Dynamic Process Networks

We formulate a method to compute global invariants of dynamic process networks. In these networks, inter-process connectivity may be altered by an adversary at any point in time. Dynamic networks serve as models for ad-hoc and sensor-network protocols. The analysis combines elements of compositional reasoning, symmetry reduction, and abstraction. Together, they allow a small “cutoff” network to represent arbitrarily large networks. A compositional invariant computed on the small network generalizes to a parametric invariant of the shape “for all networks and all processes: property p holds of each process and its local neighborhood.” We illustrate this method by showing how to compute useful invariants for a simple dining philosophers protocol, and the latest version of the ad-hoc routing protocol AODV (version 2).

Kedar S. Namjoshi, Richard J. Trefler

Tool Demonstrations

Frontmatter
MultiGain: A Controller Synthesis Tool for MDPs with Multiple Mean-Payoff Objectives

We present MultiGain, a tool to synthesize strategies for Markov decision processes (MDPs) with multiple mean-payoff objectives. Our models are described in PRISM, and our tool uses the existing interface and simulator of PRISM. Our tool extends PRISM by adding novel algorithms for multiple mean-payoff objectives, and also provides features such as (i) generating strategies, exploring them by simulation, and checking them with respect to other properties; and (ii) generating an approximate Pareto curve for two mean-payoff objectives. In addition, we present a new practical algorithm for the analysis of MDPs with multiple mean-payoff objectives under memoryless strategies.

Tomáš Brázdil, Krishnendu Chatterjee, Vojtěch Forejt, Antonín Kučera
syntMaskFT: A Tool for Synthesizing Masking Fault-Tolerant Programs from Deontic Specifications

In this paper we introduce syntMaskFT, a tool that synthesizes fault-tolerant programs from specifications written in a fragment of branching-time logic with deontic operators, designed for specifying fault-tolerant systems. The tool focuses on producing masking tolerant programs, that is, programs that during a failure mask faults in such a way that they cannot be observed by the environment. It is based on an algorithm we have introduced in previous work and shown to be sound and complete. syntMaskFT takes a specification and automatically determines whether a masking fault-tolerant component is realizable; in such a case, a description of the component is produced together with the maximal set of faults that can be supported for this level of tolerance. We present the ideas behind the tool by means of a simple example, and also report the results of experiments realized with more complex case studies.

Ramiro Demasi, Pablo F. Castro, Nicolás Ricci, Thomas S. E. Maibaum, Nazareno Aguirre
νZ - An Optimizing SMT Solver

νZ is a part of the SMT solver Z3. It allows users to pose and solve optimization problems modulo theories. Many SMT applications use models to provide satisfying assignments, and a growing number of these build on top of Z3 to get optimal assignments with respect to objective functions. νZ provides a portfolio of approaches for solving linear optimization problems over SMT formulas, MaxSMT, and their combinations. Objective functions are combined as Pareto fronts, lexicographically, or each objective is optimized independently. We describe usage scenarios of νZ, outline the tool architecture that allows dispatching problems to special-purpose solvers, and examine use cases.

Nikolaj Bjørner, Anh-Dung Phan, Lars Fleckenstein
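
νZ is exposed through Z3's Optimize engine. As a minimal sketch, assuming the z3-solver Python bindings are installed, the following poses hard theory constraints, a weighted soft constraint (MaxSMT), and a linear objective:

```python
# Minimal optimization-modulo-theories query via Z3's Optimize engine
# (nuZ is the optimization part of Z3). Requires the z3-solver package.

from z3 import Optimize, Ints, sat

x, y = Ints("x y")
opt = Optimize()
opt.add(x >= 0, y >= 0, x + 2 * y <= 10)   # hard SMT constraints
opt.add_soft(x > 4, weight=3)              # MaxSMT: soft constraint, weight 3
h = opt.maximize(x + y)                    # linear objective over the theory

if opt.check() == sat:
    print(opt.model(), "objective =", h.value())
```
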
dReach: δ-Reachability Analysis for Hybrid Systems

dReach is a bounded reachability analysis tool for nonlinear hybrid systems. It encodes reachability problems of hybrid systems as first-order formulas over the real numbers, which are solved by delta-decision procedures in the SMT solver dReal. In this way, dReach is able to handle a wide range of highly nonlinear hybrid systems. It has scaled well on various realistic models from biomedical and robotics applications.

Soonho Kong, Sicun Gao, Wei Chen, Edmund Clarke
Uppaal Stratego

Uppaal Stratego is a novel tool which facilitates generation, optimization, comparison, as well as consequence and performance exploration of strategies for stochastic priced timed games in a user-friendly manner. The tool allows for efficient and flexible “strategy-space” exploration before adaptation in a final implementation by maintaining strategies as first-class objects in the model-checking query language. The paper describes the strategies and their properties, construction and transformation algorithms, and a typical tool usage scenario.

Alexandre David, Peter Gjøl Jensen, Kim Guldstrand Larsen, Marius Mikučionis, Jakob Haahr Taankvist
BINSEC: Binary Code Analysis with Low-Level Regions

This article presents the open source BinSec platform for (formal) binary-level code analysis. The platform is based on an extension of the DBA Intermediate Representation, and it is composed of three main modules: a front-end including several syntactic disassembly algorithms and heavy simplification of the resulting IR, a simulator supporting the recent low-level region-based memory model, and a generic static analysis module.

Adel Djoudi, Sébastien Bardin
Insight: An Open Binary Analysis Framework

We present Insight, a framework for binary program analysis, and two tools provided with it: CFGRecovery and iii. Insight is intended to be a full environment for analyzing, interacting with, and verifying executable programs. Insight is able to translate x86, x86-64 and msp430 binary code to our intermediate representation and execute it symbolically in an abstract domain where each variable (register, memory cell) is substituted by a formula representing all its possible values along the current execution path.

CFGRecovery aims at automatically rebuilding the program control flow based only on the executable file. It relies heavily on SMT solvers.

iii provides an interactive and (Python) programmable interface to a coherent set of features from the Insight framework. It behaves like a debugger, except that the execution traces that are examined are symbolic and cover a collection of possible concrete executions at once. For example, iii allows one to perform an interactive reconstruction of the CFG.

Emmanuel Fleury, Olivier Ly, Gérald Point, Aymeric Vincent
SAM: The Static Analysis Module of the MAVERIC Mobile App Security Verification Platform

The tremendous success of the mobile application paradigm is due to the ease with which new applications are uploaded by developers, distributed through the application markets (e.g. Google Play), and finally installed by the users. Yet, the very same model is causing serious security concerns, since users have little or no means to ascertain the trustworthiness of the applications they install on their devices. To protect their customers, Poste Italiane has defined the Mobile Application Verification Cluster (MAVERIC), a process for the systematic security analysis of third-party mobile apps that leverage the online services provided by the company (e.g. home banking, parcel tracking). We present SAM, a toolkit that supports this process by automating a number of operations including reverse engineering, privilege analysis, and automatic verification of security properties. We introduce the functionalities of SAM through a demonstration of the platform applied to real Android applications.

Alessandro Armando, Gianluca Bocci, Giantonio Chiarelli, Gabriele Costa, Gabriele De Maglie, Rocco Mammoliti, Alessio Merlo
Symbolic Model-Checking Using ITS-Tools

We present the verification toolset ITS-tools, featuring a symbolic model-checking back-end engine based on hierarchical set decision diagrams (SDD) that supports reachability, CTL and LTL model checking, and a user-friendly Eclipse-based front-end. Using model transformations to a Guarded Action Language (GAL) as an intermediate format, ITS-tools can analyze third-party (Uppaal, Spin, Divine...) specifications.

Yann Thierry-Mieg

Stochastic Models

Frontmatter
Semantic Importance Sampling for Statistical Model Checking

Statistical Model Checking (SMC) is a technique, based on Monte-Carlo simulations, for computing the bounded probability that a specific event occurs during a stochastic system’s execution. Estimating the probability of a “rare” event accurately with SMC requires many simulations. To this end, Importance Sampling (IS) is used to reduce the simulation effort. Commonly, IS involves “tilting” the parameters of the original input distribution, which is ineffective if the set of inputs causing the event (i.e., input-event region) is disjoint. In this paper, we propose a technique called Semantic Importance Sampling (SIS) to address this challenge. Using an SMT solver, SIS recursively constructs an abstract indicator function that over-approximates the input-event region, and then uses this abstract indicator function to perform SMC with IS. By using abstraction and SMT solving, SIS thus exposes a new connection between the verification of non-deterministic and stochastic systems. We also propose two optimizations that reduce the SMT solving cost of SIS significantly. Finally, we implement SIS and validate it on several problems. Our results indicate that SIS reduces simulation effort by multiple orders of magnitude even in systems with disjoint input-event regions.

Jeffery P. Hansen, Lutz Wrage, Sagar Chaki, Dionisio de Niz, Mark Klein
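
For readers unfamiliar with the "tilting" being generalized: plain importance sampling estimates a rare-event probability by drawing from a shifted proposal distribution and reweighting each sample by the likelihood ratio. The hypothetical sketch below does this for a Gaussian tail event; SIS's contribution is using an SMT-derived over-approximation of the input-event region to decide where to sample, which plain parameter tilting cannot do when that region is disjoint.

```python
# Importance sampling for a rare event under X ~ N(0,1): sample from the
# shifted proposal N(mu_p,1) and reweight by phi(x;0,1)/phi(x;mu_p,1).

import math, random

def is_estimate(event, mu_p, n=100_000):
    total = 0.0
    for _ in range(n):
        x = random.gauss(mu_p, 1.0)
        if event(x):
            # likelihood ratio of target density over proposal density
            total += math.exp(-0.5 * x * x + 0.5 * (x - mu_p) ** 2)
    return total / n

rare = lambda x: x > 5.0              # true probability is about 2.87e-7
print(is_estimate(rare, mu_p=5.0))    # far fewer samples than naive Monte Carlo
```
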
Strategy Synthesis for Stochastic Games with Multiple Long-Run Objectives

We consider turn-based stochastic games whose winning conditions are conjunctions of satisfaction objectives for long-run average rewards, and address the problem of finding a strategy that almost surely maintains the averages above a given multi-dimensional threshold vector. We show that strategies constructed from Pareto set approximations of expected energy objectives are ε-optimal for the corresponding average rewards. We further apply our methods to compositional strategy synthesis for multi-component stochastic games that leverages composition rules for probabilistic automata, which we extend for long-run ratio rewards with fairness. We implement the techniques and illustrate our methods on a case study of automated compositional synthesis of controllers for aircraft primary electric power distribution networks that ensure a given level of reliability.

Nicolas Basset, Marta Kwiatkowska, Ufuk Topcu, Clemens Wiltsche
FAUST²: Formal Abstractions of Uncountable-STate STochastic Processes

FAUST² is a software tool that generates formal abstractions of (possibly non-deterministic) discrete-time Markov processes (dtMP) defined over uncountable (continuous) state spaces. A dtMP model is specified in MATLAB and abstracted as a finite-state Markov chain or a Markov decision process. The abstraction procedure runs in MATLAB and employs parallel computations and fast manipulations based on vector calculus, which allows scaling beyond state-of-the-art alternatives. The abstract model is formally put in relationship with the concrete dtMP via a user-defined maximum threshold on the approximation error introduced by the abstraction procedure. FAUST² allows exporting the abstract model to well-known probabilistic model checkers, such as PRISM or MRMC. Alternatively, it can handle internally the computation of PCTL properties (e.g. safety or reach-avoid) over the abstract model. FAUST² allows refining the outcomes of the verification procedures over the concrete dtMP in view of the quantified and tunable error, which depends on the dtMP dynamics and on the given formula. The toolbox is available at http://sourceforge.net/projects/faust2/

Sadegh Esmaeil Zadeh Soudjani, Caspar Gevaerts, Alessandro Abate

SAT and SMT

Frontmatter
Linearly Ordered Attribute Grammar Scheduling Using SAT-Solving

Many computations over trees can be specified using attribute grammars. Compilers for attribute grammars need to find an evaluation order (or schedule) in order to generate efficient code. For the class of linearly ordered attribute grammars such a schedule can be found statically, but this problem is known to be NP-hard.

In this paper, we show how to encode linearly ordered attribute grammar scheduling as a SAT problem. For such grammars it is necessary to ensure that the dependency graph is cycle-free, which we approach in a novel way by transforming the dependency graph into a chordal graph, allowing cycle freeness to be efficiently expressed and computed using SAT solvers.

There are two main advantages to using a SAT solver for scheduling: (1) the scheduling algorithm runs faster than existing scheduling algorithms on real-world examples, and (2) by adding extra constraints we obtain fine-grained control over the resulting schedule, thereby enabling new scheduling optimisations.

Jeroen Bransen, L. Thomas van Binsbergen, Koen Claessen, Atze Dijkstra
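
To convey the solver-based scheduling idea without the paper's machinery: give each attribute an integer position and let a solver find positions consistent with the dependency graph. This is a simplified stand-in; the paper's actual SAT encoding guarantees cycle freeness via the chordal-graph transformation rather than integer positions. The example assumes the z3-solver Python bindings and hypothetical attribute names.

```python
# Solver-based scheduling sketch: find a linear order of attributes that
# respects a dependency graph, using integer position variables in z3.

from z3 import Solver, Int, sat

deps = [("inh", "syn1"), ("syn1", "syn2"), ("inh", "syn2")]  # a before b
nodes = {n for edge in deps for n in edge}
pos = {n: Int(f"pos_{n}") for n in nodes}

s = Solver()
for a, b in deps:
    s.add(pos[a] < pos[b])        # every dependency forces an ordering

if s.check() == sat:
    m = s.model()
    # prints a valid evaluation order, e.g. ['inh', 'syn1', 'syn2']
    print(sorted(nodes, key=lambda n: m[pos[n]].as_long()))
```
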
On Parallel Scalable Uniform SAT Witness Generation

Constrained-random verification (CRV) is widely used in industry for validating hardware designs. The effectiveness of CRV depends on the uniformity of test stimuli generated from a given set of constraints. Most existing techniques sacrifice either uniformity or scalability when generating stimuli. While recent work based on random hash functions has shown that it is possible to generate almost uniform stimuli from constraints with 100,000+ variables, the performance still falls short of today’s industrial requirements. In this paper, we focus on pushing the performance frontier of uniform stimulus generation further. We present a random hashing-based, easily parallelizable algorithm, UniGen2, for sampling solutions of propositional constraints. UniGen2 provides strong and relevant theoretical guarantees in the context of CRV, while also offering significantly improved performance compared to existing almost-uniform generators. Experiments on a diverse set of benchmarks show that UniGen2 achieves an average speedup of about 20× over a state-of-the-art sampling algorithm, even when running on a single core. Moreover, experiments with multiple cores show that UniGen2 achieves a near-linear speedup in the number of cores, thereby boosting performance even further.

Supratik Chakraborty, Daniel J. Fremont, Kuldeep S. Meel, Sanjit A. Seshia, Moshe Y. Vardi
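
The mechanism underlying hashing-based almost-uniform generation fits in a few lines: random XOR (parity) constraints split the solution space into roughly equal cells, and solutions are enumerated inside one random cell. The sketch below uses the z3 Python bindings purely for illustration; UniGen2 itself adds the cell-sizing theory that makes the samples provably almost-uniform and the embarrassingly parallel architecture the abstract describes.

```python
# Hashing-based sampling sketch: restrict a propositional formula to a
# random XOR-defined cell, then enumerate the models in that cell.

import random
from z3 import Solver, Bools, Xor, BoolVal, Or, sat

xs = Bools("x0 x1 x2 x3")
s = Solver()
s.add(Or(xs))                                  # some propositional constraint

for _ in range(2):                             # two random parity constraints
    picked = [x for x in xs if random.random() < 0.5]
    acc = BoolVal(random.random() < 0.5)       # random parity bit
    for x in picked:
        acc = Xor(acc, x)
    s.add(acc)                                 # keeps only one hash cell

while s.check() == sat:                        # enumerate the surviving cell
    m = s.model()
    vals = [m.eval(x, model_completion=True) for x in xs]
    print(vals)
    s.add(Or([x != v for x, v in zip(xs, vals)]))   # block this model
```
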
Approximate Counting in SMT and Value Estimation for Probabilistic Programs

#SMT, or model counting for logical theories, is a well-known hard problem that generalizes such tasks as counting the number of satisfying assignments to a Boolean formula and computing the volume of a polytope. In the realm of satisfiability modulo theories (SMT) there is a growing need for model counting solvers, coming from several application domains (quantitative information flow, static analysis of probabilistic programs). In this paper, we show a reduction from an approximate version of #SMT to SMT.

We focus on the theories of integer arithmetic and linear real arithmetic. We propose model counting algorithms that provide approximate solutions with formal bounds on the approximation error. They run in polynomial time and make a polynomial number of queries to the SMT solver for the underlying theory, exploiting “for free” the sophisticated heuristics implemented within modern SMT solvers. We have implemented the algorithms and used them to solve a value estimation problem for a model of loop-free probabilistic programs with nondeterminism.

Dmitry Chistikov, Rayna Dimitrova, Rupak Majumdar
Pushing the Envelope of Optimization Modulo Theories with Linear-Arithmetic Cost Functions

In the last decade we have witnessed impressive progress in the expressiveness and efficiency of Satisfiability Modulo Theories (SMT) solving techniques. This has brought previously intractable problems within the reach of state-of-the-art SMT solvers, in particular in the domain of software and hardware verification. Many SMT-encodable problems of interest, however, also require the capability of finding models that are optimal with respect to some cost functions. In previous work, namely Optimization Modulo Theory with Linear Rational Cost Functions – OMT(LRA ∪ T), we leveraged SMT solving to handle the minimization of cost functions on linear arithmetic over the rationals, by means of a combination of SMT and LP minimization techniques.

In this paper we push the envelope of our OMT approach along three directions: first, we extend it to work with linear arithmetic on the mixed integer/rational domain, by means of a combination of SMT, LP and ILP minimization techniques; second, we develop a multi-objective version of OMT, so as to handle many cost functions simultaneously or lexicographically; third, we develop an incremental version of OMT, so as to exploit the incrementality of some OMT-encodable problems. An empirical evaluation performed on OMT-encoded verification problems demonstrates the usefulness and efficiency of these extensions.

R. Sebastiani, P. Trentin
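
To illustrate what a lexicographic multi-objective query looks like, here is a hedged sketch using Z3's Optimize engine; the paper's implementation lives in a different solver (OptiMathSAT), and the z3 Python API is used here only because it offers a comparable interface.

```python
# Two objectives combined lexicographically: maximize x first, then y
# within the optima of x. Requires the z3-solver package.

from z3 import Optimize, Reals, sat

x, y = Reals("x y")
opt = Optimize()
opt.set(priority="lex")            # lexicographic combination of objectives
opt.add(0 <= x, x <= 4, 0 <= y, y <= 4, x + y <= 6)
first = opt.maximize(x)            # primary objective
second = opt.maximize(y)           # secondary objective

if opt.check() == sat:
    print(opt.model())             # x = 4 first, then y = 2
```
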

Partial Order Reduction, Bisimulation and Fairness

Frontmatter
Stateless Model Checking for TSO and PSO

We present a technique for efficient stateless model checking of programs that execute under the relaxed memory models TSO and PSO. The basis for our technique is a novel representation of executions under TSO and PSO, called chronological traces. Chronological traces induce a partial order relation on relaxed memory executions, capturing dependencies that are needed to represent the interaction via shared variables. They are optimal in the sense that they only distinguish computations that are inequivalent under the widely-used representation by Shasha and Snir. This allows an optimal dynamic partial order reduction algorithm to explore a minimal number of executions while still guaranteeing full coverage. We apply our techniques to check, under the TSO and PSO memory models, LLVM assembly produced for C/pthreads programs. Our experiments show that our technique reduces the verification effort for relaxed memory models to almost that for the standard model of sequential consistency. In many cases, our implementation significantly outperforms other comparable tools.

Parosh Aziz Abdulla, Stavros Aronis, Mohamed Faouzi Atig, Bengt Jonsson, Carl Leonardsson, Konstantinos Sagonas
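
The classic store-buffering litmus test shows why TSO executions need their own representation: each thread's write may sit in a store buffer past the thread's own subsequent read, so both threads can read the initial values, an outcome sequential consistency forbids. A toy simulation of the two extremes:

```python
# Store-buffering litmus test: thread 1 writes x=1 then reads y;
# thread 2 writes y=1 then reads x. Under TSO, both reads may return 0.

def run(flush_before_reads):
    mem = {"x": 0, "y": 0}
    buf1, buf2 = [("x", 1)], [("y", 1)]      # each thread buffers its write
    if flush_before_reads:
        for var, val in buf1 + buf2:
            mem[var] = val
    r1 = mem["y"]                            # thread 1 reads y
    r2 = mem["x"]                            # thread 2 reads x
    return r1, r2

print(run(flush_before_reads=True))     # (1, 1): an SC-consistent outcome
print(run(flush_before_reads=False))    # (0, 0): allowed under TSO, not SC
```
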
GPU Accelerated Strong and Branching Bisimilarity Checking

Bisimilarity checking is an important operation in explicit-state model checking when the state space of a model under verification has already been generated. It can be applied in various ways: reduction of a state space w.r.t. a particular flavour of bisimilarity, or checking that two given state spaces are bisimilar. Bisimilarity checking is a computationally intensive task, and over the years, several algorithms have been presented, both sequential, i.e. single-threaded, and parallel, the latter either relying on shared memory or message-passing. In this work, we first present a novel way to check strong bisimilarity on general-purpose graphics processing units (GPUs), and show experimentally that an implementation of it for CUDA-enabled GPUs is competitive with other parallel techniques that run either on a GPU or use message-passing on a multi-core system. Building on this, we propose, to the best of our knowledge, the first many-core branching bisimilarity checking algorithm, an implementation of which shows speedups comparable to our strong bisimilarity checking approach.

Anton Wijs
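
What is being parallelized is, at heart, partition refinement: repeatedly split blocks of states by the blocks their transitions reach, until the partition is stable. The naive sequential version below shows the computation; the paper's contribution is mapping this fixed point onto many GPU cores.

```python
# Signature-based partition refinement for strong bisimilarity: two states
# stay in the same block iff they have the same set of (label, target-block)
# pairs. Iterate until the partition stabilizes.

def strong_bisimulation(states, trans):
    """trans: set of (source, label, target). Returns the coarsest partition."""
    block = {s: 0 for s in states}                      # start with one block
    while True:
        sig = {s: frozenset((a, block[t]) for (src, a, t) in trans if src == s)
               for s in states}
        ids = {}                                        # signature -> block id
        new_block = {s: ids.setdefault(sig[s], len(ids)) for s in states}
        if new_block == block:
            return block
        block = new_block

states = {"p", "q", "r"}
trans = {("p", "a", "r"), ("q", "a", "r")}
print(strong_bisimulation(states, trans))   # p and q land in the same block
```
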
Fairness for Infinite-State Systems

In this paper we introduce the first known tool for symbolically proving fair-CTL properties of (infinite-state) integer programs. Our solution is based on a reduction to existing techniques for fairness-free CTL model checking via the use of infinite non-deterministic branching to symbolically partition fair from unfair executions. We show the viability of our approach in practice using examples drawn from device drivers and algorithms utilizing shared resources.

Byron Cook, Heidy Khlaaf, Nir Piterman

Competition on Software Verification

Frontmatter
Software Verification and Verifiable Witnesses
(Report on SV-COMP 2015)

SV-COMP 2015 marks the start of a new epoch of software verification: in the 4th Competition on Software Verification, software verifiers produced for each reported property violation a machine-readable error witness in a common exchange format (so far restricted to reachability properties of sequential programs without recursion). Error paths were reported previously, but always in different, incompatible formats, often insufficient to reproduce the identified bug, and thus useless to the user. The common exchange format and the support by a large set of verification tools that use the format will make a big difference: one verifier can re-verify the witnesses produced by another verifier, visual error-path navigation tools can be developed, and here in the competition, we use witness checking to make sure that a verifier that claimed a found bug had really found a valid error path. The other two changes to SV-COMP that we made this time were (a) the addition of a new property, set of verification tasks, and ranking category for termination verification, and (b) the addition of two new categories for reachability analysis: Arrays and Floats. SV-COMP 2015, the fourth edition of this thorough comparative evaluation of fully-automatic software verifiers, reports effectiveness and efficiency results of the state of the art in software verification. The competition used 5803 verification tasks, more than double the number of SV-COMP’14. Most impressively, the number of participating verifiers increased from 15 to 22 verification systems, including 13 new entries.

Dirk Beyer
AProVE: Termination and Memory Safety of C Programs
(Competition Contribution)

AProVE is a system for automatic termination and complexity proofs of C, Java, Haskell, Prolog, and term rewrite systems. The particular strength of AProVE when analyzing C is its capability to reason about pointer arithmetic combined with direct memory accesses (as, e.g., in standard implementations of string algorithms). As a prerequisite for termination, AProVE also proves memory safety of C programs.

Thomas Ströder, Cornelius Aschermann, Florian Frohn, Jera Hensel, Jürgen Giesl
Cascade
(Competition Contribution)

Cascade is a static program analysis tool developed at New York University. It uses bounded model checking to generate verification conditions and checks them using an SMT solver which either produces a proof of correctness or gives a concrete trace showing how an assertion can fail. It supports the majority of standard C features except for floating point. A distinguishing feature of Cascade is that its analysis uses a memory model which divides up memory into several partitions based on alias information.

Wei Wang, Clark Barrett
CPAchecker with Support for Recursive Programs and Floating-Point Arithmetic
(Competition Contribution)

We submit to SV-COMP’15 the software-verification framework CPAchecker. The submitted configuration is a combination of seven different analyses, based on explicit-value analysis, k-induction, predicate analysis, and concrete memory graphs. These analyses use concepts such as CEGAR, lazy abstraction, interpolation, adjustable-block encoding, bounded model checking, invariant generation, and block-abstraction memoization. Found counterexamples are cross-checked by a bit-precise analysis. The combination of several different analyses copes well with the diversity of the verification tasks in SV-COMP.

Matthias Dangl, Stefan Löwe, Philipp Wendler
CPArec: Verifying Recursive Programs via Source-to-Source Program Transformation
(Competition Contribution)

CPArec is a tool for verifying recursive C programs via source-to-source program transformation. It uses a recursion-free program analyzer, CPAchecker, as a black box and computes function summaries from the inductive invariants generated by CPAchecker. Such function summaries enable CPArec to check recursive programs.

Yu-Fang Chen, Chiao Hsieh, Ming-Hsien Tsai, Bow-Yaw Wang, Farn Wang
FramewORk for Embedded System verification
(Competition Contribution)

Forest is a bounded model checker that implements symbolic execution on top of the LLVM intermediate language and is able to detect errors in programs developed in C. Forest transforms a program into a set of SMT formulas describing each feasible path and decides these formulas with an SMT solver. This enables it to prove the satisfiability of reachability conditions such as the ones presented in SV-COMP. Forest implements different ways of representing SMT formulas: linear arithmetic, polynomials, and generic bit-accurate and non-bit-accurate representations.

Pablo Gonzalez-de-Aledo, Pablo Sanchez
Forester: Shape Analysis Using Tree Automata
(Competition Contribution)

Forester is a tool for shape analysis of programs with complex dynamic data structures—including various flavours of lists (such as singly/doubly linked lists, nested lists, or skip lists) as well as trees and other complex data structures—that uses an abstract domain based on finite tree automata. This paper gives a brief description of the verification approach of Forester and discusses its strong and weak points revealed during its participation in SV-COMP’15.

Lukáš Holík, Martin Hruška, Ondřej Lengál, Adam Rogalewicz, Jiří Šimáček, Tomáš Vojnar
MU-CSeq 0.3: Sequentialization by Read-Implicit and Coarse-Grained Memory Unwindings
(Competition Contribution)

We describe a new CSeq module that implements improved algorithms for the verification of multi-threaded C programs with dynamic thread creation. It is based on sequentializing the programs according to a guessed sequence of write operations in the shared memory (memory unwinding, MU). The original algorithm (implemented in MU-CSeq 0.1) stores the values of all shared variables for each write (read-explicit fine-grained MU), which requires multiple copies of the shared variables. Our new algorithms store only the writes (read-implicit MU) or only a subset of the writes (coarse-grained MU), which reduces the memory footprint of the unwinding and so allows larger unwinding bounds.

Ermenegildo Tomasco, Omar Inverso, Bernd Fischer, Salvatore La Torre, Gennaro Parlato
Perentie: Modular Trace Refinement and Selective Value Tracking
(Competition Contribution)

Perentie is a software analysis tool based on iterative refinement of trace abstraction: if the refinement process terminates, the program is either declared correct or a counterexample is provided and the program is incorrect.

Franck Cassez, Takashi Matsuoka, Edward Pierzchalski, Nathan Smyth
Predator Hunting Party (Competition Contribution)

This paper introduces PredatorHP (Predator Hunting Party), a program verifier built on top of the Predator shape analyser, and discusses its participation in the SV-COMP’15 software verification competition. Predator is a sound shape analyser dealing with C programs with lists implemented via low-level pointer operations. PredatorHP uses Predator to prove programs safe while at the same time using several bounded versions of Predator for bug hunting.

Petr Muller, Petr Peringer, Tomáš Vojnar
SeaHorn: A Framework for Verifying C Programs (Competition Contribution)

SeaHorn is a framework and tool for verification of safety properties in C programs. The distinguishing feature of SeaHorn is its modular design that separates how program semantics is represented from the verification engine. This paper describes its verification approach as well as instructions on how to install and use it.

Arie Gurfinkel, Temesghen Kahsai, Jorge A. Navas
SMACK+Corral: A Modular Verifier
(Competition Contribution)

SMACK and Corral are two components of a modular toolchain for verifying C programs. Together they exploit state-of-the-art compiler technologies and theorem provers to simplify and dispatch verification conditions.

Arvind Haran, Montgomery Carter, Michael Emmi, Akash Lal, Shaz Qadeer, Zvonimir Rakamarić
Ultimate Automizer with Array Interpolation
(Competition Contribution)

Ultimate Automizer is a software verification tool that is able to analyze reachability of an error label, memory safety, and termination of C programs. For all three tasks, our tool follows an automata-based approach where interpolation is used to compute proofs for traces. The interpolants are generated via a new scheme that requires only the post operator, unsatisfiable cores, and live variable analysis. This new scheme enables our tool to use the SMT theory of arrays in combination with interpolation.

Matthias Heizmann, Daniel Dietsch, Jan Leike, Betim Musa, Andreas Podelski
ULTIMATE KOJAK with Memory Safety Checks
(Competition Contribution)

Ultimate Kojak is a symbolic software model checker implemented in the Ultimate framework. It follows the CEGAR approach and uses Craig interpolants to refine an overapproximation of the program until it can either prove safety or find a real counterexample.

This year’s version features a new refinement algorithm, a precise treatment of heap memory, which allows us to deal with pointer aliasing and to participate in the memsafety category, and an improved interpolant generator.

Alexander Nutz, Daniel Dietsch, Mostafa Mahmoud Mohamed, Andreas Podelski
Unbounded Lazy-CSeq: A Lazy Sequentialization Tool for C Programs with Unbounded Context Switches
(Competition Contribution)

We describe a new CSeq module for the verification of multi-threaded C programs with dynamic thread creation. This module implements a variation of the lazy sequentialization algorithm implemented in Lazy-CSeq. The main novelty is that we now support an unbounded number of context switches and allow unbounded loops, while the number of allowed threads still remains bounded. This is achieved by a modified sequentialization transformation and the use of CPAchecker as the sequential verification backend.

Truc L. Nguyen, Bernd Fischer, Salvatore La Torre, Gennaro Parlato
FuncTion: An Abstract Domain Functor for Termination
(Competition Contribution)

FuncTion is a research prototype static analyzer designed for proving (conditional) termination of C programs. The tool automatically infers piecewise-defined ranking functions (and sufficient preconditions for termination) by means of abstract interpretation. It combines a variety of abstract domains in order to balance the precision and cost of the analysis.

Caterina Urban

Parameter Synthesis

Frontmatter
Model Checking Gene Regulatory Networks

The behaviour of gene regulatory networks (GRNs) is typically analysed using simulation-based, statistical-testing-like methods. In this paper, we demonstrate that we can replace this approach by a formal-verification-like method that gives higher assurance and scalability. We focus on Wagner’s weighted GRN model with varying weights, which is used in evolutionary biology. In the model, weight parameters represent the gene interaction strength that may change due to genetic mutations. For a property of interest, we synthesise the constraints over the parameter space that represent the set of GRNs satisfying the property. We experimentally show that our parameter synthesis procedure computes the mutational robustness of GRNs (an important problem of interest in evolutionary biology) more efficiently than the classical simulation method. We specify the property in linear temporal logic. We employ symbolic bounded model checking and SMT solving to compute the space of GRNs that satisfy the property, which amounts to synthesizing a set of linear constraints on the weights.

Mirco Giacobbe, Călin C. Guet, Ashutosh Gupta, Thomas A. Henzinger, Tiago Paixão, Tatjana Petrov
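
The shift from simulation to synthesis can be sketched in a few lines: encode the thresholded dynamics symbolically, unroll them for a bounded number of steps, and ask a solver for weights under which the property holds. The hypothetical two-gene example below uses the z3 Python bindings and a simple reachability property instead of full LTL, so it only conveys the flavor of the paper's procedure.

```python
# Wagner-style threshold dynamics with symbolic weights: synthesize weights
# such that, starting from (gene1 on, gene2 off), both genes are on after
# three synchronous update steps.

from z3 import Solver, Real, If, sat

w11, w12, w21, w22 = [Real(n) for n in ("w11", "w12", "w21", "w22")]
s = Solver()

def step(g1, g2):
    """A gene is on after the update iff its weighted input is positive."""
    return (If(w11 * g1 + w21 * g2 > 0, 1, 0),
            If(w12 * g1 + w22 * g2 > 0, 1, 0))

g1, g2 = 1, 0                           # initial state: gene 1 on, gene 2 off
for _ in range(3):                      # bounded unrolling of the dynamics
    g1, g2 = step(g1, g2)

s.add(g1 == 1, g2 == 1)                 # property: both genes on at step 3
if s.check() == sat:
    print(s.model())                    # one witness weight assignment
```
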
Symbolic Quantitative Robustness Analysis of Timed Automata

We study the robust safety problem for timed automata under guard imprecisions which consists in computing an imprecision parameter under which a safety specification holds. We give a symbolic semi-algorithm for the problem based on a parametric data structure, and evaluate its performance in comparison with a recently published one, and with a binary search on enlargement values.

Ocan Sankur

Program Synthesis

Frontmatter
Pattern-Based Refinement of Assume-Guarantee Specifications in Reactive Synthesis

We consider the problem of compositional refinement of components’ specifications in the context of compositional reactive synthesis. Our solution is based on automatic refinement of assumptions and guarantees expressed in linear temporal logic (LTL). We show how behaviors of the environment and the system can be inferred from counter-strategies and strategies, respectively, as formulas in special forms called patterns. Instantiations of patterns are LTL formulas which hold over all runs of such strategies, and are used to refine the specification by adding new input assumptions or output guarantees. We propose three different approaches for compositional refinement of specifications, based on how much information is shared between the components, and demonstrate and compare the methods empirically.

Rajeev Alur, Salar Moarref, Ufuk Topcu
Assume-Guarantee Synthesis for Concurrent Reactive Programs with Partial Information

Synthesis of program parts is particularly useful for concurrent systems. However, most approaches do not support common design tasks, like modifying a single process without having to re-synthesize or verify the whole system. Assume-guarantee synthesis (AGS) provides robustness against modifications of system parts, but thus far has been limited to the perfect information setting. This means that local variables cannot be hidden from other processes, which renders synthesis results cumbersome or even impossible to realize. We resolve this shortcoming by defining AGS under partial information. We analyze the complexity and decidability in different settings, showing that the problem has a high worst-case complexity and is undecidable in many interesting cases. Based on these observations, we present a pragmatic algorithm based on bounded synthesis, and demonstrate its practical applicability on several examples.

Roderick Bloem, Krishnendu Chatterjee, Swen Jacobs, Robert Könighofer
Shield Synthesis:
Runtime Enforcement for Reactive Systems

Scalability issues may prevent users from verifying critical properties of a complex hardware design. In this situation, we propose to synthesize a “safety shield” that is attached to the design to enforce the properties at run time. Shield synthesis can succeed where model checking and reactive synthesis fail, because it only considers a small set of critical properties, as opposed to the complex design, or the complete specification in the case of reactive synthesis. The shield continuously monitors the input/output of the design and corrects its erroneous output only if necessary, and as little as possible, so other non-critical properties are likely to be retained. Although runtime enforcement has been studied in other domains such as action systems, reactive systems pose unique challenges where the shield must act without delay. We thus present the first shield synthesis solution for reactive hardware systems and report our experimental results.

Roderick Bloem, Bettina Könighofer, Robert Könighofer, Chao Wang
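
Conceptually, a shield is a wrapper sitting between the design and its environment. The toy sketch below enforces a hypothetical mutual-exclusion property on two grant signals, correcting the output only in the violating cycle; the paper synthesizes such correction logic automatically, as hardware, with a minimal-deviation guarantee.

```python
# A hand-written "shield" for the hypothetical property
# "never grant two requesters in the same cycle".

def shield(grant_a, grant_b):
    """Pass the design's raw outputs through, correcting them as little as possible."""
    if grant_a and grant_b:            # property violation detected
        return grant_a, False          # minimal deviation: drop one grant
    return grant_a, grant_b            # otherwise leave the output unchanged

for raw in [(True, False), (True, True), (False, True)]:
    print(raw, "->", shield(*raw))     # only the violating cycle is corrected
```
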

Program and Runtime Verification

Frontmatter
Verifying Concurrent Programs by Memory Unwinding

We describe a new sequentialization-based approach to the symbolic verification of multithreaded programs with shared memory and dynamic thread creation. Its main novelty is the idea of memory unwinding (MU), i.e., a sequence of write operations into the shared memory. For the verification, we nondeterministically guess an MU and then simulate the behavior of the program according to any scheduling that respects it. This approach is complementary to other sequentializations and explores an orthogonal dimension, i.e., the number of write operations. It also simplifies the implementation of several important optimizations, in particular the targeted exposure of individual writes. We implemented this approach as a code-to-code transformation from multithreaded into nondeterministic sequential programs, which allows the reuse of sequential verification tools. Experiments show that our approach is effective: it found all errors in the concurrency category of SV-COMP’15.

Ermenegildo Tomasco, Omar Inverso, Bernd Fischer, Salvatore La Torre, Gennaro Parlato
AutoProof: Auto-Active Functional Verification of Object-Oriented Programs

Auto-active verifiers provide a level of automation intermediate between fully automatic and interactive: users supply code with annotations as input while benefiting from a high level of automation in the back-end. This paper presents AutoProof, a state-of-the-art auto-active verifier for object-oriented sequential programs with complex functional specifications. AutoProof fully supports advanced object-oriented features and a powerful methodology for framing and class invariants, which make it applicable in practice to idiomatic object-oriented patterns. The paper focuses on describing AutoProof’s interface, design, and implementation features, and demonstrates AutoProof’s performance on a rich collection of benchmark problems. The results attest to AutoProof’s competitiveness among tools in its league on cutting-edge functional verification of object-oriented programs.

Julian Tschannen, Carlo A. Furia, Martin Nordio, Nadia Polikarpova
An LTL Proof System for Runtime Verification

We propose a local proof system for LTL formalising deductions within the constraints of Runtime Verification (RV), and show how such a system can be used as a basis for the construction of online runtime monitors. Novel soundness and completeness results are proven for this system. We also prove decidability and incrementality properties for a monitoring algorithm constructed from it. Finally, we relate its expressivity to existing symbolic analysis techniques used in RV.

Clare Cini, Adrian Francalanza
MarQ: Monitoring at Runtime with QEA

Runtime monitoring is the process of checking whether an execution trace of a running system satisfies a given specification. For this to be effective, monitors which run trace-checking algorithms must be efficient so that they introduce minimal computational overhead. We present the MarQ tool for monitoring properties expressed as Quantified Event Automata. This formalism generalises previous automata-based specification methods. MarQ extends the established parametric trace slicing technique and incorporates existing techniques for indexing and garbage collection, as well as a new technique for optimising runtime monitoring: structural specialisations, where monitors are generated based on structural characteristics of the monitored property. MarQ recently came top in two tracks in the 1st international Runtime Verification competition, showing that MarQ is one of the most efficient existing monitoring tools for both offline monitoring of trace logs and online monitoring of running systems.

Giles Reger, Helena Cuenca Cruz, David Rydeheard

Temporal Logic and Automata

Frontmatter
Parallel Explicit Model Checking for Generalized Büchi Automata

We present new parallel emptiness checks for LTL model checking. Unlike existing parallel emptiness checks, these are based on an SCC enumeration, support generalized Büchi acceptance, and require no synchronization points nor repair procedures. A salient feature of our algorithms is the use of a global union-find data structure in which multiple threads share structural information about the automaton being checked. Our prototype implementation has encouraging performance: the new emptiness checks have better speedup than existing algorithms in half of our experiments.

Etienne Renault, Alexandre Duret-Lutz, Fabrice Kordon, Denis Poitrenaud
Limit Deterministic and Probabilistic Automata for LTL ∖ GU

LTL ∖ GU is a fragment of linear temporal logic (LTL), where negations appear only on propositions, and formulas are built using the temporal operators X (next), F (eventually), G (always), and U (until), with the restriction that no until operator occurs in the scope of an always operator. Our main result is the construction of Limit Deterministic Büchi automata for this logic that are exponential in the size of the formula. One consequence of our construction is a new, improved EXPTIME model checking algorithm (as opposed to the previously known doubly exponential time) for Markov Decision Processes and LTL ∖ GU formulae. Another consequence is that it gives us a way to construct exponential-sized Probabilistic Büchi Automata for LTL ∖ GU.

Dileep Kini, Mahesh Viswanathan
Saturation-Based Incremental LTL Model Checking with Inductive Proofs

Efficient symbolic and explicit model checking approaches have been developed for the verification of linear-time temporal properties. Recent advances have combined on-the-fly search with symbolic encoding in hybrid solutions that have already produced many results. In this work, we propose a new hybrid approach that leverages the so-called saturation algorithm both as an iteration strategy during state space generation and in a new incremental fixed-point computation algorithm to compute strongly connected components (SCCs). In addition, our solution works on-the-fly during state space traversal and exploits the decomposition of the model as an abstraction to inductively prove the absence of SCCs with cheap explicit runs on the components. When a proof cannot be shown, the incremental symbolic fixed-point algorithm will find the SCC, if one exists. Evaluation on the models of the Model Checking Contest shows that our approach outperforms similar algorithms for concurrent systems.

Vince Molnár, Dániel Darvas, András Vörös, Tamás Bartha
Nested Antichains for WS1S

We propose a novel approach for coping with alternating quantification as the main source of nonelementary complexity of deciding WS1S formulae. Our approach is applicable within the state-of-the-art automata-based WS1S decision procedure implemented, e.g. in MONA. The way in which the standard decision procedure processes quantifiers involves determinization, with its worst case exponential complexity, for every quantifier alternation in the prefix of a formula. Our algorithm avoids building the deterministic automata—instead, it constructs only those of their states needed for (dis)proving validity of the formula. It uses a symbolic representation of the states, which have a deeply nested structure stemming from the repeated implicit subset construction, and prunes the search space by a nested subsumption relation, a generalization of the one used by the so-called antichain algorithms for handling nondeterministic automata. We have obtained encouraging experimental results, in some cases outperforming MONA by several orders of magnitude.

Tomáš Fiedor, Lukáš Holík, Ondřej Lengál, Tomáš Vojnar

Model Checking

Frontmatter
Sylvan: Multi-Core Decision Diagrams

Decision diagrams such as binary decision diagrams and multi-valued decision diagrams play an important role in various fields, including symbolic model checking. An ongoing challenge is to develop data structures and algorithms for modern multi-core architectures. The BDD package Sylvan provides one contribution by implementing parallelized BDD operations, thus allowing sequential algorithms to exploit the power of multi-core machines.

We present several extensions to Sylvan. We implement parallel operations on list decision diagrams, a variant of multi-valued decision diagrams that is useful for symbolic model checking. We also substitute several core components of Sylvan by new designs, such as the work-stealing framework, the unique table and the operation cache. Furthermore, we combine parallel operations with parallelization on a higher level, by partitioning the transition relation. We show that this results in an improved speedup using the model checking toolset LTSmin. We also demonstrate that the parallelization of symbolic model checking for explicit-state modeling languages with an on-the-fly next-state function, as supported by LTSmin, scales well.

Tom van Dijk, Jaco van de Pol
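
The two data structures the paper redesigns are easiest to see in their sequential textbook form: a unique table that hash-conses BDD nodes and an operation cache that memoizes apply-style operations. The Python sketch below implements conjunction this way; Sylvan's contribution is making both structures scalable, lock-free, and concurrent, with work-stealing task parallelism.

```python
# Textbook BDD core: a unique table for hash-consing nodes and an operation
# cache memoizing the recursive conjunction. Node ids 0 and 1 are terminals.

unique = {}                                   # (var, low, high) -> node id
nodes = [None, None]                          # ids 0/1 reserved for terminals

def mk(var, low, high):
    if low == high:                           # redundant node: skip it
        return low
    key = (var, low, high)
    if key not in unique:                     # hash-consing via the unique table
        unique[key] = len(nodes)
        nodes.append(key)
    return unique[key]

cache = {}                                    # operation cache for AND

def bdd_and(u, v):
    if u == 0 or v == 0: return 0
    if u == 1: return v
    if v == 1: return u
    if (u, v) in cache: return cache[(u, v)]
    var_u, lo_u, hi_u = nodes[u]
    var_v, lo_v, hi_v = nodes[v]
    var = min(var_u, var_v)                   # expand on the top variable
    u0, u1 = (lo_u, hi_u) if var_u == var else (u, u)
    v0, v1 = (lo_v, hi_v) if var_v == var else (v, v)
    r = mk(var, bdd_and(u0, v0), bdd_and(u1, v1))
    cache[(u, v)] = r
    return r

x1 = mk(1, 0, 1)                              # BDD for variable x1
x2 = mk(2, 0, 1)
print(bdd_and(x1, x2))                        # node representing x1 AND x2
```
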
LTSmin: High-Performance Language-Independent Model Checking

In recent years, the LTSmin model checker has been extended with support for several new modelling languages, including probabilistic (Mapa) and timed systems (Uppaal). Also, connecting additional language front-ends or ad-hoc state-space generators to LTSmin was simplified using custom C-code. From symbolic and distributed reachability analysis and minimisation, LTSmin’s functionality has developed into a model checker with multi-core algorithms for on-the-fly LTL checking with partial-order reduction, and multi-core symbolic checking for the modal μ-calculus, based on the multi-core decision diagram package Sylvan.

In LTSmin, the modelling languages and the model checking algorithms are connected through a Partitioned Next-State Interface (Pins), which allows the implementation of the analysis algorithms and on-the-fly optimisations to abstract away from language details. In the current paper, we present an overview of the toolset and its recent changes, and we demonstrate its performance and versatility in two case studies.

Gijs Kant, Alfons Laarman, Jeroen Meijer, Jaco van de Pol, Stefan Blom, Tom van Dijk
Using a Formal Model to Improve Verification of a Cache-Coherent System-on-Chip

In this paper we report on a case study on the functional verification of a System-on-Chip (SoC) with a formal system-level model. Our approach improves industrial simulation-based verification techniques in two aspects. First, we suggest using the formal model to assess the sanity of an interface verification unit. Second, we present a two-step approach to generate clever semi-directed test cases from temporal logic properties: model-based testing tools of the CADP toolbox generate system-level abstract test cases, which are then refined with a commercial Coverage-Directed Test Generation tool into interface-level concrete test cases that can be executed at RTL level. Applied to an AMBA 4 ACE-based cache-coherent SoC, we found that our approach helps in the transition from interface-level to system-level verification, facilitates the validation of system-level properties, and enables early detection of bugs in both the SoC and the commercial test-bench.

Abderahman Kriouile, Wendelin Serwe
Backmatter
Metadata
Title
Tools and Algorithms for the Construction and Analysis of Systems
Edited by
Christel Baier
Cesare Tinelli
Copyright year
2015
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-46681-0
Print ISBN
978-3-662-46680-3
DOI
https://doi.org/10.1007/978-3-662-46681-0