2016 | Book

Tools and Algorithms for the Construction and Analysis of Systems

22nd International Conference, TACAS 2016, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016, Eindhoven, The Netherlands, April 2-8, 2016, Proceedings

Edited by: Marsha Chechik, Jean-François Raskin

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

This book constitutes the proceedings of the 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2016, which took place in Eindhoven, The Netherlands, in April 2016, held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016.
The 44 full papers presented in this volume were carefully reviewed and selected from 175 submissions. They were organized in topical sections named: abstraction and verification; probabilistic and stochastic systems; synthesis; tool papers; concurrency; tool demos; languages and automata; security; optimization; and competition on software verification – SV-COMP.

Table of Contents

Frontmatter

Unifying Talk

Frontmatter
Robots at the Edge of the Cloud

Computers have come a long way from their roots as fast calculating devices. We live in a world in which computers collect, store, and analyze huge volumes of data. We are seeing the beginnings of a new revolution in the use of computers. In addition to collecting and analyzing data, computers are influencing the physical world and interacting autonomously, and in complex ways, with large groups of humans. These cyber-physical-social systems have the potential to dramatically alter the way we lead our lives. However, designing these systems in a reliable way is a difficult problem. In this paper, we enumerate a set of research challenges that have to be overcome in order to realize the potential of cyber-physical-social systems.

Rupak Majumdar

Abstraction and Verification I

Frontmatter
Finding Recurrent Sets with Backward Analysis and Trace Partitioning

We propose an abstract-interpretation-based analysis for recurrent sets. A recurrent set is a set of states from which the execution of a program cannot or might not (as in our case) escape. A recurrent set is a part of a program’s non-termination proof (that needs to be complemented by reachability analysis). We find recurrent sets by performing a potentially over-approximate backward analysis that produces an initial candidate. We then perform over-approximate forward analysis on the candidate to check and refine it and ensure soundness. In practice, the analysis relies on trace partitioning that predicts future paths through the program that non-terminating executions will take. Using our technique, we were able to find recurrent sets in many benchmarks found in the literature including some that, to our knowledge, cannot be handled by existing tools. In addition, we note that typically, analyses that search for recurrent sets are applied to linear under-approximations of programs or employ some form of non-approximate numeric reasoning. In contrast, our analysis uses standard abstract-interpretation techniques and is potentially applicable to a larger class of abstract domains (and therefore – programs).

Alexey Bakhirkin, Nir Piterman
Tactics for the Dafny Program Verifier

Many modern program verifiers are based on automated theorem provers, which enable full hiding of proof details and allow users to focus all their effort on the program text. This has the advantage that the additional expertise of theorem provers is not required, but has the drawback that when the prover fails to verify a valid program, the user has to annotate the program text with guidance for the verifier. This can be tedious, low-level and repetitive, and may impact on the annotation overhead, readability of the program text and overall development time. Inspired by proof tactics for interactive theorem provers [19], a notion of ‘tactics’ for the state-of-the-art Dafny program verifier, called Tacny, is developed. With only minor extensions to the Dafny syntax, a user can encode high-level proof patterns as Dafny tactics, liberating herself from low-level and repetitive search tasks, whilst still working with familiar Dafny programming constructs. Manual search and guidance can be replaced with calls to such tactics, which will automate this task. We provide syntax and semantics for Tacny, and show feasibility through a prototype implementation, applied to several examples.

Gudmund Grov, Vytautas Tumas
Synthesizing Ranking Functions from Bits and Pieces

In this work, we present a novel approach based on recent advances in software model checking to synthesize ranking functions and prove termination (and non-termination) of imperative programs. Our approach incrementally refines a termination argument from an under-approximation of the terminating program states. Specifically, we learn bits of information from terminating executions, and from these we extrapolate ranking functions over-approximating the number of loop iterations needed for termination. We combine these pieces into piecewise-defined, lexicographic, or multiphase ranking functions. The proposed technique has been implemented in SeaHorn – an LLVM-based verification framework – targeting C code. Preliminary experimental evaluation demonstrated its effectiveness in synthesizing ranking functions and proving termination of C programs.

Caterina Urban, Arie Gurfinkel, Temesghen Kahsai
Abstraction Refinement and Antichains for Trace Inclusion of Infinite State Systems

A data automaton is a finite automaton equipped with variables (counters or registers) ranging over infinite data domains. A trace of a data automaton is an alternating sequence of alphabet symbols and values taken by the counters during an execution of the automaton. The problem addressed in this paper is the inclusion between the sets of traces (data languages) recognized by such automata. Since the problem is undecidable in general, we give a semi-algorithm based on abstraction refinement, which is proved to be sound and complete modulo termination. Due to the undecidability of the trace inclusion problem, our procedure is not guaranteed to terminate. We have implemented our technique in a prototype tool and show promising results on several non-trivial examples.

Radu Iosif, Adam Rogalewicz, Tomáš Vojnar

Probabilistic and Stochastic Systems I

Frontmatter
Efficient Syntax-Driven Lumping of Differential Equations

We present an algorithm to compute exact aggregations of a class of systems of ordinary differential equations (ODEs). Our approach consists in an extension of Paige and Tarjan’s seminal solution to the coarsest refinement problem by encoding an ODE system into a suitable discrete-state representation. In particular, we consider a simple extension of the syntax of elementary chemical reaction networks because (i) it can express ODEs with derivatives given by polynomials of degree at most two, which are relevant in many applications in natural sciences and engineering; and (ii) we can build on two recently introduced bisimulations, which yield two complementary notions of ODE lumping. Our algorithm computes the largest bisimulations in $O(r \cdot s \cdot \log s)$ time, where r is the number of monomials and s is the number of variables in the ODEs. Numerical experiments on real-world models from biochemistry, electrical engineering, and structural mechanics show that our prototype is able to handle ODEs with millions of variables and monomials, providing significant model reductions.

Luca Cardelli, Mirco Tribastone, Max Tschaikowski, Andrea Vandin
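
To make the notion of exact lumping concrete, here is a tiny hand-made example (ours, not taken from the paper). In the linear system $\dot{x}_1 = -x_1 + x_3$, $\dot{x}_2 = -x_2 + x_3$, $\dot{x}_3 = -2x_3$, the variables $x_1$ and $x_2$ play interchangeable roles, so the aggregate $y = x_1 + x_2$ obeys a self-contained equation: $\dot{y} = \dot{x}_1 + \dot{x}_2 = -(x_1 + x_2) + 2x_3 = -y + 2x_3$. The three-variable system thus lumps exactly into a two-variable system over $y$ and $x_3$, with no approximation error; the algorithm described above detects such aggregations automatically, at the scale of millions of variables.
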
Faster Statistical Model Checking for Unbounded Temporal Properties

We present a new algorithm for the statistical model checking of Markov chains with respect to unbounded temporal properties, including full linear temporal logic. The main idea is that we monitor each simulation run on the fly, in order to detect quickly if a bottom strongly connected component is entered with high probability, in which case the simulation run can be terminated early. As a result, our simulation runs are often much shorter than required by termination bounds that are computed a priori for a desired level of confidence on a large state space. In comparison to previous algorithms for statistical model checking our method is not only faster in many cases but also requires less information about the system, namely, only the minimum transition probability that occurs in the Markov chain. In addition, our method can be generalised to unbounded quantitative properties such as mean-payoff bounds.

Przemysław Daca, Thomas A. Henzinger, Jan Křetínský, Tatjana Petrov
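
The early-termination idea above can be pictured with a small sketch (ours; step, p_min and delta are assumed inputs, and the stopping rule is deliberately cruder than the one derived in the paper): a run is cut off once every state visited since the current candidate was born has been seen so often that an unexplored outgoing transition would almost certainly have fired.

    import math

    def run_with_early_stop(step, s0, p_min, delta):
        """Sample one run of a finite-state Markov chain and stop early once the
        run appears to be trapped in a bottom SCC.

        step(s) samples a successor of state s, p_min lower-bounds every positive
        transition probability, and delta (roughly) bounds the probability of
        stopping while not yet inside a bottom SCC.  This is only a sketch of the
        on-the-fly monitoring idea, not the algorithm of the paper."""
        # k visits of a state miss one of its outgoing transitions with
        # probability at most (1 - p_min)**k; choose k so that this is <= delta
        k = math.ceil(math.log(delta) / math.log(1.0 - p_min))
        path, last_pos = [s0], {s0: 0}
        candidate, counts = None, {}
        s = s0
        while True:
            s = step(s)
            path.append(s)
            i = len(path) - 1
            if candidate is not None and s in candidate:
                counts[s] += 1
                if all(c >= k for c in counts.values()):
                    return path, candidate          # stop the run early
            elif s in last_pos:
                # revisiting s births a new candidate: everything seen since
                # its previous occurrence
                window = path[last_pos[s]:]
                candidate = set(window)
                counts = {t: window.count(t) for t in candidate}
            else:
                candidate, counts = None, {}        # current candidate dies
            last_pos[s] = i
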
Safety-Constrained Reinforcement Learning for MDPs

We consider controller synthesis for stochastic and partially unknown environments in which safety is essential. Specifically, we abstract the problem as a Markov decision process in which the expected performance is measured using a cost function that is unknown prior to run-time exploration of the state space. Standard learning approaches synthesize cost-optimal strategies without guaranteeing safety properties. To remedy this, we first compute safe, permissive strategies. Then, exploration is constrained to these strategies and thereby meets the imposed safety requirements. Exploiting an iterative learning procedure, the resulting strategy is safety-constrained and optimal. We show correctness and completeness of the method and discuss the use of several heuristics to increase its scalability. Finally, we demonstrate the applicability by means of a prototype implementation.

Sebastian Junges, Nils Jansen, Christian Dehnert, Ufuk Topcu, Joost-Pieter Katoen
Safety Verification of Continuous-Space Pure Jump Markov Processes

We study the probabilistic safety verification problem for pure jump Markov processes, a class of models that generalizes continuous-time Markov chains over continuous (uncountable) state spaces. Solutions of these processes are piecewise constant, right-continuous functions from time to states. Their jump (or reset) times are realizations of a Poisson process, characterized by a jump rate function that can be both time- and state-dependent. Upon jumping in time, the new state of the solution process is specified according to a (continuous) stochastic conditional kernel. After providing a full characterization of safety properties of these processes, we describe a formal method to abstract the process as a finite-state discrete-time Markov chain; this approach is formal in that it provides a-priori error bounds on the precision of the abstraction, based on the continuity properties of the stochastic kernel of the process and of its jump rate function. We illustrate the approach on a case study of thermostatically controlled loads.

Sadegh Esmaeil Zadeh Soudjani, Rupak Majumdar, Alessandro Abate

Synthesis

Frontmatter
Abstract Learning Frameworks for Synthesis

We develop abstract learning frameworks for synthesis that embody the principles of the CEGIS (counterexample-guided inductive synthesis) algorithms in current literature. Our framework is based on iterative learning from a hypothesis space that captures synthesized objects, using counterexamples from an abstract sample space, and a concept space that abstractly defines the semantics of synthesis. We show that a variety of synthesis algorithms in current literature can be embedded in this general framework. We also exhibit three general recipes for convergent synthesis: the first two recipes based on finite spaces and Occam learners generalize all techniques of convergence used in existing engines, while the third, involving well-founded quasi-orderings, is new, and we instantiate it to concrete synthesis problems.

Christof Löding, P. Madhusudan, Daniel Neider
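
Since the framework above generalises CEGIS-style synthesis loops, a minimal generic skeleton may help fix intuition (ours; the synthesize and verify oracles are assumed names, and any concrete engine instantiates them with its own hypothesis and sample spaces):

    def cegis(synthesize, verify, initial_samples=()):
        """Generic counterexample-guided inductive synthesis loop.
        synthesize(samples) returns a candidate consistent with the samples, or
        None if no such candidate exists; verify(candidate) returns None if the
        candidate meets the specification, otherwise a counterexample."""
        samples = list(initial_samples)
        while True:
            candidate = synthesize(samples)
            if candidate is None:
                return None        # no consistent candidate in the hypothesis space
            cex = verify(candidate)
            if cex is None:
                return candidate   # verified: synthesis succeeded
            samples.append(cex)    # learn from the counterexample and retry

Convergence of such a loop is exactly what the recipes in the abstract address: it is guaranteed, for instance, when the hypothesis space is finite or when the learner is an Occam learner.
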
Synthesizing Piece-Wise Functions by Learning Classifiers

We present a novel general technique that uses classifier learning to synthesize piece-wise functions (functions that split the domain into regions and apply simpler functions to each region) against logical synthesis specifications. Our framework works by combining a synthesizer of functions for fixed concrete inputs and a synthesizer of predicates that can be used to define regions. We develop a theory of single-point refutable specifications that facilitate generating concrete counterexamples using constraint solvers. We implement the framework for synthesizing piece-wise functions in linear integer arithmetic, combining leaf expression synthesis using constraint-solving and predicate synthesis using enumeration, and tie them together using a decision tree classifier. We demonstrate that this approach is competitive compared to existing synthesis engines on a set of synthesis specifications.

Daniel Neider, Shambwaditya Saha, P. Madhusudan
An Automaton Learning Approach to Solving Safety Games over Infinite Graphs

We propose a method to construct finite-state reactive controllers for systems whose interactions with their adversarial environment are modeled by infinite-duration two-player games over (possibly) infinite graphs. The method targets safety games with infinitely many states or with such a large number of states that it would be impractical—if not impossible—for conventional synthesis techniques that work on the entire state space. We resort to constructing finite-state controllers for such systems through an automata learning approach, utilizing a symbolic representation of the underlying game that is based on finite automata. Throughout the learning process, the learner maintains an approximation of the winning region (represented as a finite automaton) and refines it using different types of counterexamples provided by the teacher until a satisfactory controller can be derived (if one exists). We present a symbolic representation of safety games (inspired by regular model checking), propose implementations of the learner and teacher, and evaluate their performance on examples motivated by robotic motion planning.

Daniel Neider, Ufuk Topcu

Probabilistic and Stochastic Systems II

Frontmatter
Uncertainty Propagation Using Probabilistic Affine Forms and Concentration of Measure Inequalities

We consider the problem of reasoning about the probability of assertion violations in straight-line, nonlinear computations involving uncertain quantities modeled as random variables. Such computations are quite common in many areas such as cyber-physical systems and numerical computation. Our approach extends probabilistic affine forms, an interval-based calculus for precisely tracking how the distribution of a given program variable depends on uncertain inputs modeled as noise symbols. We extend probabilistic affine forms using the precise tracking of dependencies between noise symbols combined with the expectations and higher order moments of the noise symbols. Next, we show how to prove bounds on the probabilities that program variables take on specific values by using concentration of measure inequalities. Thus, we enable a new approach to this problem that explicitly avoids subdividing the domain of inputs, as is commonly done in the related work. We illustrate the approach in this paper on a variety of challenging benchmark examples, and thus study its applicability to uncertainty propagation.

Olivier Bouissou, Eric Goubault, Sylvie Putot, Aleksandar Chakarov, Sriram Sankaranarayanan
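
A toy instance of the concentration-of-measure step (ours, not from the paper): suppose the affine form of a program variable is $x = 10 + 2\varepsilon_1 - \varepsilon_2$ with independent, zero-mean, unit-variance noise symbols $\varepsilon_1, \varepsilon_2$. Then $\mathbb{E}[x] = 10$ and, by independence, $\mathrm{Var}(x) = 2^2 + 1^2 = 5$, so Chebyshev's inequality bounds the probability of violating the assertion $x < 16$ by $\Pr[x \ge 16] \le \Pr[|x - 10| \ge 6] \le \mathrm{Var}(x)/6^2 = 5/36 \approx 0.14$, with no subdivision of the input domain; tracking higher moments and using stronger concentration inequalities yields correspondingly sharper bounds.
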
Online and Compositional Learning of Controllers with Application to Floor Heating

Controller synthesis for stochastic hybrid switched systems, like e.g. a floor heating system in a house, is a complex computational task that cannot be solved by an exhaustive search through all the control options. The state-space to be explored is in general uncountable due to the presence of continuous variables (e.g. temperature readings in the different rooms) and even after digitization, the state-space remains huge and cannot be fully explored. We suggest a general and scalable methodology for controller synthesis for such systems. Instead of off-line synthesis of a controller for all possible input temperatures and an arbitrary time horizon, we propose an on-line synthesis methodology, where we periodically compute the controller only for the near future based on the current sensor readings. This computation is itself done by employing machine learning in order to avoid enumeration of the whole state-space. For additional scalability we propose and apply a compositional synthesis approach. Finally, we demonstrate the applicability of the methodology to a concrete floor heating system of a real family house.

Kim G. Larsen, Marius Mikučionis, Marco Muñiz, Jiří Srba, Jakob Haahr Taankvist
Deductive Proofs of Almost Sure Persistence and Recurrence Properties

Martingale theory yields a powerful set of tools that have recently been used to prove quantitative properties of stochastic systems such as stochastic safety and qualitative properties such as almost sure termination. In this paper, we examine proof techniques for establishing almost sure persistence and recurrence properties of infinite-state discrete time stochastic systems. A persistence property $\Diamond \Box (P)$ specifies that almost all executions of the stochastic system eventually reach P and stay there forever. Likewise, a recurrence property $\Box \Diamond (Q)$ specifies that a target set Q is visited infinitely often by almost all executions of the stochastic system. Our approach extends classic ideas on the use of Lyapunov-like functions to establish qualitative persistence and recurrence properties. Next, we extend known constraint-based invariant synthesis techniques to deduce the necessary supermartingale expressions to partly mechanize such proofs. We illustrate our techniques on a set of interesting examples.

Aleksandar Chakarov, Yuen-Lam Voronin, Sriram Sankaranarayanan
Probabilistic CTL: The Deductive Way

Complex probabilistic temporal behaviours need to be guaranteed in robotics and various other control domains, as well as in the context of families of randomized protocols. At its core, this entails checking infinite-state probabilistic systems with respect to quantitative properties specified in probabilistic temporal logics. Model checking methods are not directly applicable to infinite-state systems, and techniques for infinite-state probabilistic systems are limited in terms of the specifications they can handle. This paper presents a deductive approach to the verification of countable-state systems against properties specified in probabilistic CTL$^{*}$, on models featuring both nondeterministic and probabilistic choices. The deductive proof system we propose lifts the classical proof system by Kesten and Pnueli to the probabilistic setting. However, the soundness arguments are completely distinct and go via the theory of martingales. Completeness results for the finite-state case and an infinite-state example illustrate the effectiveness of our approach.

Rayna Dimitrova, Luis María Ferrer Fioriti, Holger Hermanns, Rupak Majumdar

Tool Papers I

Frontmatter
Parametric Runtime Verification of C Programs

Many runtime verification tools are built based on Aspect-Oriented Programming (AOP) tools, most often AspectJ, a mature implementation of AOP for Java. Although AOP is already popular in the Java domain, there is little work on runtime verification of C programs via AOP, due to the lack of a solid language and tool support. In this paper, we propose a new general-purpose and expressive language for defining monitors as an extension to the C language, and present our tool implementation of the weaver, the Movec compiler, which brings fully-fledged parametric runtime verification support into the C domain.

Zhe Chen, Zhemin Wang, Yunlong Zhu, Hongwei Xi, Zhibin Yang
Coqoon
An IDE for Interactive Proof Development in Coq

User interfaces for interactive proof assistants have always lagged behind those for mainstream programming languages. Whereas integrated development environments—IDEs—have support for features like project management, version control, dependency analysis and incremental project compilation, “IDE”s for proof assistants typically only operate on files in isolation, relying on external tools to integrate those files into larger projects. In this paper we present Coqoon, an IDE for Coq developments integrated into Eclipse. Coqoon manages proofs as projects rather than isolated source files, and compiles these projects using the Eclipse common build system. Coqoon takes advantage of the latest features of Coq, including asynchronous and parallel processing of proofs, and—when used together with a third-party OCaml extension for Eclipse—can even be used to work on large developments containing Coq plugins.

Alexander Faithfull, Jesper Bengtson, Enrico Tassi, Carst Tankink
Multi-core Symbolic Bisimulation Minimisation

Bisimulation minimisation alleviates the exponential growth of transition systems in model checking by computing the smallest system that has the same behavior as the original system according to some notion of equivalence. One popular strategy to compute a bisimulation minimisation is signature-based partition refinement. This can be performed symbolically using binary decision diagrams to allow models with larger state spaces to be minimised. This paper studies strong and branching symbolic bisimulation for labeled transition systems, continuous-time Markov chains, and interactive Markov chains. We introduce the notion of partition refinement with partial signatures. We extend the parallel BDD library Sylvan to parallelize the signature refinement algorithm, and develop a new parallel BDD algorithm to refine a partition, which conserves previous block numbers and uses a parallel data structure to store block assignments. We also present a specialized BDD algorithm for the computation of inert transitions. The experimental evaluation, based on benchmarks from the literature, demonstrates a speedup of up to 95x sequentially. In addition, we find parallel speedups of up to 17x due to parallelisation with 48 cores. Finally, we present the implementation of these algorithms as a versatile tool that can be customized for bisimulation minimisation in various contexts.

Tom van Dijk, Jaco van de Pol
Advances in Symbolic Probabilistic Model Checking with PRISM

For modeling and reasoning about complex systems, symbolic methods provide a prominent way to tackle the state explosion problem. It is well known that for symbolic approaches based on binary decision diagrams (BDD), the ordering of BDD variables plays a crucial role for compact representations and efficient computations. We have extended the popular probabilistic model checker PRISM with support for automatic variable reordering in its multi-terminal-BDD-based engines and report on benchmark results. Our extensions additionally allow the user to manually control the variable ordering at a finer-grained level. Furthermore, we present our implementation of the symbolic computation of quantiles and support for multi-reward-bounded properties, automata specifications and accepting end component computations for Streett conditions.

Joachim Klein, Christel Baier, Philipp Chrszon, Marcus Daum, Clemens Dubslaff, Sascha Klüppelholz, Steffen Märcker, David Müller
PRISM-PSY: Precise GPU-Accelerated Parameter Synthesis for Stochastic Systems

In this paper we present PRISM-PSY, a novel tool that performs precise GPU-accelerated parameter synthesis for continuous-time Markov chains and time-bounded temporal logic specifications. We redesign, in terms of matrix-vector operations, the recently formulated algorithms for precise parameter synthesis in order to enable effective data-parallel processing, which results in significant acceleration on many-core architectures. High hardware utilisation, essential for performance and scalability, is achieved by state space and parameter space parallelisation: the former leverages a compact sparse-matrix representation, and the latter is based on an iterative decomposition of the parameter space. Our experiments on several biological and engineering case studies demonstrate an overall speedup of up to 31-fold on a single GPU compared to the sequential implementation.

Milan Češka, Petr Pilař, Nicola Paoletti, Luboš Brim, Marta Kwiatkowska

Tool Papers II

Frontmatter
T2: Temporal Property Verification

We present the open-source tool T2, the first public release from the TERMINATOR project [9]. T2 has been extended over the past decade to support automatic temporal-logic proving techniques and to handle a general class of user-provided liveness and safety properties. Input can be provided in a native format and in C, via the support of the LLVM compiler framework. We briefly discuss T2’s architecture, its underlying techniques, and conclude with an experimental illustration of its competitiveness and directions for future extensions.

Marc Brockschmidt, Byron Cook, Samin Ishtiaq, Heidy Khlaaf, Nir Piterman
RTD-Finder: A Tool for Compositional Verification of Real-Time Component-Based Systems

In this paper we present RTD-Finder, a tool which applies a fully compositional and automatic method for the verification of safety properties of real-time component-based systems modeled in the RT-BIP language. The core method is based on the compositional computation of a global invariant which over-approximates the set of reachable states of the system. The verification results show that when the invariant captures the safety property, the verification time for large systems is drastically reduced in comparison with exploration techniques. Nevertheless, since the method relies on an over-approximation of the reachable state set, false positives may occur in some cases. We therefore complement our compositional verification method with a counterexample-based invariant refinement algorithm that iteratively analyzes the generated counterexamples: the spurious counterexamples that are detected serve to incrementally strengthen the global invariant, until a true counterexample is found or all counterexamples are proven spurious.

Souha Ben-Rayana, Marius Bozga, Saddek Bensalem, Jacques Combaz
TcT: Tyrolean Complexity Tool

In this paper we present TcT v3.0, the latest version of our fully automated complexity analyser. TcT implements our framework for automated complexity analysis and focuses on extensibility and automation. TcT is open with respect to the input problem under investigation and the resource metric in question. It is the most powerful tool in the realm of automated complexity analysis of term rewrite systems. Moreover it provides an expressive problem-independent strategy language that facilitates proof search. We give insights about design choices, the implementation of the framework and report different case studies where we have applied TcT successfully.

Martin Avanzini, Georg Moser, Michael Schaper
Integrated Environment for Diagnosing Verification Errors

A failed attempt to verify a program’s correctness can result in reports of genuine errors, spurious warnings, and timeouts. The main challenge in debugging a verification failure is to determine whether the complaint is genuine or spurious, and to obtain enough information about the failed verification attempt to debug the error. To help a user with this task, this paper presents an extension of the Dafny IDE that seamlessly integrates the Dafny verifier, a dynamic symbolic execution engine, a verification debugger, and a technique for diagnosing timeouts. The paper also reports on experiments that measure the utility of the combined use of these complementary tools.

Maria Christakis, K. Rustan M. Leino, Peter Müller, Valentin Wüstholz
JDart: A Dynamic Symbolic Analysis Framework

We describe JDart, a dynamic symbolic analysis framework for Java. A distinguishing feature of JDart is its modular architecture: the main component that performs dynamic exploration communicates with a component that efficiently constructs constraints and that interfaces with constraint solvers. These components can easily be extended or modified to support multiple constraint solvers or different exploration strategies. Moreover, JDart has been engineered for robustness, driven by the need to handle complex NASA software. These characteristics, together with its recent open sourcing, make JDart an ideal platform for research and experimentation. In the current release, JDart supports the CORAL, SMTInterpol, and Z3 solvers, and is able to handle NASA software with constraints containing bit operations, floating point arithmetic, and complex arithmetic operations (e.g., trigonometric and nonlinear). We illustrate how JDart has been used to support other analysis techniques, such as automated interface generation and testing of libraries. Finally, we demonstrate the versatility and effectiveness of JDart, and compare it with state-of-the-art dynamic or pure symbolic execution engines through an extensive experimental evaluation.

Kasper Luckow, Marko Dimjašević, Dimitra Giannakopoulou, Falk Howar, Malte Isberner, Temesghen Kahsai, Zvonimir Rakamarić, Vishwanath Raman

Concurrency

Frontmatter
Diagnostic Information for Control-Flow Analysis of Workflow Graphs (a.k.a. Free-Choice Workflow Nets)

A workflow graph is a classical flow graph extended by concurrent fork and join. Workflow graphs can be used to represent the main control-flow of e.g. business process models modeled in languages such as BPMN or UML activity diagrams. They can also be seen as compact representations of free-choice Petri nets with a unique start and a unique end. A workflow graph is said to be sound if it is free of deadlocks and exhibits no lack of synchronization, which correspond to liveness and safeness of a slightly modified version of the corresponding Petri net. We present a new characterization of unsoundness of workflow graphs in terms of three structural, i.e., graphical error patterns. We also present a polynomial-time algorithm that decides unsoundness and returns for each unsound workflow graph, one of the three structural error patterns as diagnostic information. An experimental evaluation on over 1350 workflow graphs derived from industrial business process models suggests that our technique performs well in practice.

Cédric Favre, Hagen Völzer, Peter Müller
Approaching the Coverability Problem Continuously

The coverability problem for Petri nets plays a central role in the verification of concurrent shared-memory programs. However, its high EXPSPACE-complete complexity poses a challenge when encountered in real-world instances. In this paper, we develop a new approach to this problem which is primarily based on applying forward coverability in continuous Petri nets as a pruning criterion inside a backward-coverability framework. A cornerstone of our approach is the efficient encoding of a recently developed polynomial-time algorithm for reachability in continuous Petri nets into SMT. We demonstrate the effectiveness of our approach on standard benchmarks from the literature, which shows that our approach decides significantly more instances than any existing tool and is in addition often much faster, in particular on large instances.

Michael Blondin, Alain Finkel, Christoph Haase, Serge Haddad
On Atomicity in Presence of Non-atomic Writes

The inherently nondeterministic semantics of concurrent programs is the root of many programming errors. Atomicity (more precisely conflict serializability) has been used to reduce the magnitude of this nondeterminism and therefore make it easier to understand the behaviour of the concurrent program. Serializability, however, has not been studied well for programs executed under memory models weaker than sequential consistency (SC), where writes are not atomic, i.e., they may be committed to the main memory later than issued. In this paper, we define the notion of conflict serializability for the Total Store Ordering (TSO) memory model, and study the relation between TSO-serializability and the well-known notions of SC-serializability and robustness. We investigate the algorithmic problem of monitoring program executions for violations of serializability, and provide lower bound complexity results for the problem, and new algorithms to perform the monitoring efficiently.

Constantin Enea, Azadeh Farzan
Formalizing and Checking Thread Refinement for Data-Race-Free Execution Models

When optimizing a thread in a concurrent program (either done manually or by the compiler), it must be guaranteed that the resulting thread is a refinement of the original thread. Most definitions of refinement are formulated in terms of valid syntactic transformations on the program code, or in terms of valid transformations on thread execution traces. We present a new theory formulated instead in terms of state transitions between synchronization operations. Our new method shows refinement in more cases and leads to more efficient and simpler procedures for refinement checking. We develop the theory for the SC-for-DRF execution model (using locks for synchronization), and show that its application in compiler testing yields speedups of on average more than two orders of magnitude compared to a previous approach.

Daniel Poetzl, Daniel Kroening

Tool Demos

Frontmatter
The xSAP Safety Analysis Platform

This paper describes the xSAP safety analysis platform. xSAP provides several model-based safety analysis features for finite- and infinite-state synchronous transition systems. In particular, it supports library-based definition of fault modes, an automatic model extension facility, generation of safety analysis artifacts such as Dynamic Fault Trees and Failure Mode and Effects Analysis tables. Moreover, it supports probabilistic evaluation of Fault Trees, failure propagation analysis using Timed Failure Propagation Graphs, and Common Cause Analysis. xSAP has been used in several industrial projects as verification back-end, and is currently being evaluated in a joint R&D Project involving FBK and The Boeing Company.

Benjamin Bittner, Marco Bozzano, Roberto Cavada, Alessandro Cimatti, Marco Gario, Alberto Griggio, Cristian Mattarei, Andrea Micheli, Gianni Zampedri
FACT: A Probabilistic Model Checker for Formal Verification with Confidence Intervals

We introduce FACT, a probabilistic model checker that computes confidence intervals for the evaluated properties of Markov chains with unknown transition probabilities when observations of these transitions are available. FACT is unaffected by the unquantified estimation errors generated by the use of point probability estimates, a common practice that limits the applicability of quantitative verification. As such, FACT can prevent invalid decisions in the construction and analysis of systems, and extends the applicability of quantitative verification to domains in which unknown estimation errors are unacceptable.

Radu Calinescu, Kenneth Johnson, Colin Paterson
PrDK: Protocol Programming with Automata

We present PrDK: a development kit for programming protocols. PrDK is based on syntactic separation of process code, presumably written in an existing general-purpose language, and protocol code, written in a domain-specific language with explicit, high-level elements of syntax for programming protocols. PrDK supports two complementary syntaxes (one graphical, one textual) with a common automata-theoretic semantics. As a tool for construction of systems, PrDK consists of syntax editors, a translator, a parser, an interpreter, and a compiler into Java. Performance in the NAS Parallel Benchmarks is promising.

Sung-Shik T. Q. Jongmans, Farhad Arbab
DLC: Compiling a Concurrent System Formal Specification to a Distributed Implementation

Formal methods can verify the correctness of a concurrent system by analyzing its model. However, if the actual implementation is written by hand, subtle and hard to detect bugs may be unintentionally introduced, thus ruining the verification effort. In this paper, we present DLC (Distributed LNT Compiler), a tool that automatically generates distributed implementations of concurrent systems modeled in the LNT language, which can be formally verified using the CADP toolbox.

Hugues Evrard
PRISM-Games 2.0: A Tool for Multi-objective Strategy Synthesis for Stochastic Games

We present a new release of PRISM-games, a tool for verification and strategy synthesis for stochastic games. PRISM-games 2.0 significantly extends its functionality by supporting, for the first time: (i) long-run average (mean-payoff) and ratio reward objectives, e.g., to express energy consumption per time unit; (ii) strategy synthesis and Pareto set computation for multi-objective properties; and (iii) compositional strategy synthesis, where strategies for a stochastic game modelled as a composition of subsystems are synthesised from strategies for individual components using assume-guarantee contracts on component interfaces. We demonstrate the usefulness of the new tool on four case studies from autonomous transport and energy management.

Marta Kwiatkowska, David Parker, Clemens Wiltsche
Cerberus: Automated Synthesis of Enforcement Mechanisms for Security-Sensitive Business Processes

Cerberus is a tool to automatically synthesize run-time enforcement mechanisms for security-sensitive Business Processes (BPs). The tool is capable of guaranteeing that the execution constraints $EC$ on the tasks together with the authorization policy $AP$ and the authorization constraints $AC$ are satisfied while ensuring that the process can successfully terminate. Cerberus can be easily integrated in many workflow management systems, it is transparent to process designers, and does not require any knowledge beyond usual BP modeling. The tool works in two phases. At design-time, the enforcement mechanism M, parametric in the authorization policy $AP$, is generated from $EC$ and $AC$; M can thus be used with any instance of the same BP provided that $EC$ and $AC$ are left unchanged. At run-time, a specific authorization policy is added to M, thereby obtaining an enforcement mechanism $M^*$ dedicated to a particular instance of the security-sensitive business process. To validate our approach, we discuss the implementation and usage of Cerberus in the SAP HANA Operational Intelligence platform.

Luca Compagna, Daniel Ricardo dos Santos, Serena Elisa Ponta, Silvio Ranise
Developing and Debugging Proof Strategies by Tinkering

Previously, we have developed a graphical proof strategy language, called PSGraph [4], to support the development and maintenance of large and complex proof tactics for interactive theorem provers. By using labelled hierarchical graphs this formalisation improves upon tactic composition, analysis and maintenance compared with traditional tactic languages. PSGraph has been implemented as the Tinker system, supporting the Isabelle and ProofPower theorem provers [5]. In this paper we present Tinker2, a new version of Tinker, which provides enhancements in user interaction and experience, together with: novel support for controlled inspection; debugging using breakpoints and a logging mechanism; and advanced recording, exporting and replay.

Yuhui Lin, Pierre Le Bras, Gudmund Grov
v2c – A Verilog to C Translator

We present v2c, a tool for translating Verilog to C. The tool accepts synthesizable Verilog as input and generates a word-level C program as an output, which we call the software netlist. The generated program is cycle-accurate and bit-precise. The translation is based on the synthesis semantics of Verilog. There are several use cases for v2c, ranging from hardware property verification and co-verification to simulation and equivalence checking. This paper gives details of the translation and demonstrates the utility of the tool.

Rajdeep Mukherjee, Michael Tautschnig, Daniel Kroening

Abstraction and Verification II

Frontmatter
Parameterized Compositional Model Checking

The Parameterized Compositional Model Checking Problem (PCMCP) is to decide, using compositional proofs, whether a property holds for every instance of a parameterized family of process networks. Compositional analysis focuses attention on the neighborhood structure of processes in the network family. For the verification of safety properties, the PCMCP is shown to be much more tractable than the more general Parameterized Model Checking Problem (PMCP). For example, the PMCP is undecidable for ring networks while the PCMCP is decidable in polynomial time. This result generalizes to toroidal mesh networks and related networks for describing parallel architectures. Decidable models of the PCMCP are also shown for networks of control and user processes. The results are based on the demonstration of compositional cutoffs; that is, small instances whose compositional proofs generalize to the entire parametric family. There are, however, control-user models where the PCMCP and the PMCP are both undecidable.

Kedar S. Namjoshi, Richard J. Trefler
An Algorithm for Stuttering Equivalence and Branching Bisimulation

We provide a new algorithm to determine stuttering equivalence with time complexity $O(m \log n)$, where n is the number of states and m is the number of transitions of a Kripke structure. This algorithm can also be used to determine branching bisimulation in $O(m(\log |Act| + \log n))$ time. Theoretically, our algorithm substantially improves upon existing algorithms which all have time complexity O(mn) [2, 3, 9]. Moreover, it has better or equal space complexity. Practical results confirm these findings, showing that our algorithm can outperform existing algorithms by orders of magnitude, especially when the sizes of the Kripke structures are large.

Jan Friso Groote, Anton Wijs
Interpolants in Nonlinear Theories Over the Reals

We develop algorithms for computing Craig interpolants for first-order formulas over real numbers with a wide range of nonlinear functions, including transcendental functions and differential equations. We transform proof traces from $\delta$-complete decision procedures into interpolants that consist of Boolean combinations of linear constraints. The algorithms are guaranteed to find the interpolants between two formulas A and B whenever $A \wedge B$ is not $\delta$-satisfiable. At the same time, by exploiting $\delta$-perturbations one can parameterize the algorithm to find interpolants with different positions between A and B. We show applications of the methods in control and robotic design, and hybrid system verification.

Sicun Gao, Damien Zufferey

Abstraction and Verification III

Frontmatter
PTIME Computation of Transitive Closures of Octagonal Relations

Computing transitive closures of integer relations is the key to finding precise invariants of integer programs. In this paper, we study difference bounds and octagonal relations and prove that their transitive closure is a PTIME-computable formula in the existential fragment of Presburger arithmetic. This result marks a significant complexity improvement, as the known algorithms have EXPTIME worst case complexity.

Filip Konečný
Scalable Verification of Linear Controller Software

We consider the problem of verifying software implementations of linear time-invariant controllers against mathematical specifications. Given a controller specification, multiple correct implementations may exist, each of which uses a different representation of controller state (e.g., due to optimizations in a third-party code generator). To accommodate this variation, we first extract a controller’s mathematical model from the implementation via symbolic execution, and then check input-output equivalence between the extracted model and the specification by similarity checking. We show how to automatically verify the correctness of C code controller implementations using a combination of techniques such as symbolic execution, satisfiability solving and convex optimization. Through evaluation using randomly generated controller specifications of realistic size, we demonstrate that the scalability of this approach has significantly improved compared to our own earlier work based on the invariant checking method.

Junkil Park, Miroslav Pajic, Insup Lee, Oleg Sokolsky
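
For intuition, input-output equivalence of two extracted linear controller models can be checked by comparing their Markov parameters; the sketch below (ours, not the paper's implementation; io_equivalent is a hypothetical helper) assumes discrete-time realizations given as 2-D numpy arrays:

    import numpy as np

    def io_equivalent(A1, B1, C1, D1, A2, B2, C2, D2, tol=1e-9):
        """Return True if the two discrete-time LTI realizations
        x' = A x + B u, y = C x + D u define the same input-output map.
        Two realizations are equivalent iff D1 == D2 and all Markov parameters
        C A^k B coincide; by Cayley-Hamilton it suffices to compare
        k = 0 .. n1 + n2 - 1."""
        if not np.allclose(D1, D2, atol=tol):
            return False
        X1, X2 = B1, B2
        for _ in range(A1.shape[0] + A2.shape[0]):
            if not np.allclose(C1 @ X1, C2 @ X2, atol=tol):
                return False
            X1, X2 = A1 @ X1, A2 @ X2
        return True

A realization obtained from $(A_1, B_1, C_1, D_1)$ by an invertible state transformation $T$, i.e. $(T A_1 T^{-1}, T B_1, C_1 T^{-1}, D_1)$, passes this check even though its internal state representation differs, which is precisely the kind of variation across correct implementations mentioned above.
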
Partial Order Reduction for Event-Driven Multi-threaded Programs

Event-driven multi-threaded programming is fast becoming a preferred style of developing efficient and responsive applications. In this concurrency model, multiple threads execute concurrently, communicating through shared objects as well as by posting asynchronous events. In this work, we consider partial order reduction (POR) for this concurrency model. Existing POR techniques treat event queues associated with threads as shared objects and reorder every pair of events handled on the same thread even if reordering them does not lead to different states. We do not treat event queues as shared objects and propose a new POR technique based on a backtracking set called the dependence-covering set. Our POR technique reorders events handled by the same thread only if necessary. We prove that exploring dependence-covering sets suffices to detect all deadlock cycles and assertion violations defined over local variables. To evaluate effectiveness of our POR scheme, we have implemented a dynamic algorithm to compute dependence-covering sets. On execution traces of some Android applications, we demonstrate that our technique explores many fewer transitions, often orders of magnitude fewer, compared to exploration based on persistent sets, in which event queues are considered as shared objects.

Pallavi Maiya, Rahul Gupta, Aditya Kanade, Rupak Majumdar
Acceleration in Multi-PushDown Systems

Multi-pushdown systems (MPDS) are formal models of multi-threaded recursive programs. They are Turing-powerful, and hence one considers under-approximation techniques in their analysis. We study the use of loop accelerations in conjunction with bounded-context analysis.

Mohamed Faouzi Atig, K. Narayan Kumar , Prakash Saivasan

Languages and Automata

Frontmatter
Reduction of Nondeterministic Tree Automata

We present an efficient algorithm to reduce the size of nondeterministic tree automata, while retaining their language. It is based on new transition pruning techniques, and quotienting of the state space w.r.t. suitable equivalences. It uses criteria based on combinations of downward and upward simulation preorder on trees, and the more general downward and upward language inclusions. Since tree-language inclusion is EXPTIME-complete, we describe methods to compute good approximations in polynomial time. We implemented our algorithm as a module of the well-known libvata tree automata library, and tested its performance on a given collection of tree automata from various applications of libvata in regular model checking and shape analysis, as well as on various classes of randomly generated tree automata. Our algorithm yields substantially smaller and sparser automata than all previously known reduction techniques, and it is still fast enough to handle large instances.

Ricardo Almeida, Lukáš Holík, Richard Mayr
Online Timed Pattern Matching Using Derivatives

Timed pattern matching consists in finding all segments of a dense-time Boolean signal that match a pattern defined by a timed regular expression. This problem has been formulated and solved in [17] via an offline algorithm that takes the signal and expression as inputs and produces the set of all matches, represented as a finite union of two-dimensional zones. In this work we develop an online version of this approach where the input signal is presented incrementally and the matching is computed incrementally as well. Naturally, the concept of derivatives of regular expressions due to Brzozowski [6] can play a role in defining what remains to match after having read a prefix of the signal. However, the adaptation of this concept is not straightforward, for two reasons: the dense infinite-state nature of timed behaviors and the fact that we are interested in matching, not only in prefix acceptance. To resolve these issues we develop an alternative theory of signals and expressions based on absolute time and show how derivatives are defined and computed in this setting. We then implement an online timed pattern matching algorithm based on these results.

Dogan Ulus, Thomas Ferrère, Eugene Asarin, Oded Maler
Hybridization Based CEGAR for Hybrid Automata with Affine Dynamics

We consider the problem of safety verification for hybrid systems, whose continuous dynamics in each mode is affine, $\dot{X} = AX + b$, and invariants and guards are specified using rectangular constraints. We present a counterexample guided abstraction refinement (CEGAR) framework, which abstracts these hybrid automata into simpler ones with rectangular inclusion dynamics, $\dot{x} \in \mathcal{I}$, where x is a variable and $\mathcal{I}$ is an interval in $\mathbb{R}$. In contrast to existing CEGAR frameworks which consider discrete abstractions, our method provides highly efficient abstraction construction, though model-checking the abstract system is more expensive. Our CEGAR algorithm has been implemented in a prototype tool called HARE (Hybrid Abstraction-Refinement Engine), which makes calls to SpaceEx to validate abstract counterexamples. We analyze the performance of our tool against standard benchmark examples, and show that its performance is promising when compared to state-of-the-art safety verification tools, SpaceEx, PHAVer, SpaceEx AGAR, and HSolver.

Nima Roohi, Pavithra Prabhakar, Mahesh Viswanathan
Complementing Semi-deterministic Büchi Automata

We introduce an efficient complementation technique for semi-deterministic Büchi automata, which are Büchi automata that are deterministic in the limit: from every accepting state onward, their behaviour is deterministic. It is interesting to study semi-deterministic automata, because they play a role in practical applications of automata theory, such as the analysis of Markov decision processes. Our motivation to study their complementation comes from the termination analysis implemented in Ultimate Büchi Automizer, where these automata represent checked runs and have to be complemented to identify runs to be checked. We show that semi-determinism leads to a simpler complementation procedure: an extended breakpoint construction that allows for symbolic implementation. It also leads to significantly improved bounds, as the complement of a semi-deterministic automaton with n states has fewer than $4^n$ states. Moreover, the resulting automaton is unambiguous, which again offers new applications, like the analysis of Markov chains. We have evaluated our construction against the semi-deterministic automata produced by the Ultimate Büchi Automizer. The evaluation confirms that our algorithm outperforms the known complementation techniques for general nondeterministic Büchi automata.

František Blahoudek, Matthias Heizmann, Sven Schewe, Jan Strejček, Ming-Hsien Tsai

Security

Frontmatter
Reasoning About Information Flow Security of Separation Kernels with Channel-Based Communication

Assurance of information flow security by formal methods is mandated in security certification of separation kernels. As an industrial standard for separation kernels, ARINC 653 has been complied with by mainstream separation kernels. Security of functionalities defined in ARINC 653 is thus very important for the development and certification of separation kernels. This paper presents the first effort to formally specify and verify separation kernels with ARINC 653 channel-based communication. We provide a reusable formal specification and security proofs for separation kernels in Isabelle/HOL. During reasoning about information flow security, we find some security flaws in the ARINC 653 standard, which can cause information leakage, and fix them in our specification. We also validate the existence of the security flaws in two open-source ARINC 653 compliant separation kernels.

Yongwang Zhao, David Sanán, Fuyuan Zhang, Yang Liu
Some Complexity Results for Stateful Network Verification

In modern networks, forwarding of packets often depends on the history of previously transmitted traffic. Such networks contain stateful middleboxes, whose forwarding behavior depends on a mutable internal state. Firewalls and load balancers are typical examples of stateful middleboxes. This paper addresses the complexity of verifying safety properties, such as isolation, in networks with finite-state middleboxes. Unfortunately, we show that even in the absence of forwarding loops, reasoning about such networks is undecidable due to interactions between middleboxes connected by unbounded ordered channels. We therefore abstract away channel ordering. This abstraction is sound for safety, and makes the problem decidable. Specifically, we show that safety checking is EXPSPACE-complete in the number of hosts and middleboxes in the network. We further identify two useful subclasses of finite-state middleboxes which admit better complexities. The simplest class includes, e.g., firewalls and permits polynomial-time verification. The second class includes, e.g., cache servers and learning switches, and makes the safety problem coNP-complete. Finally, we implement a tool for verifying the correctness of stateful networks.

Yaron Velner, Kalev Alpernas, Aurojit Panda, Alexander Rabinovich, Mooly Sagiv, Scott Shenker, Sharon Shoham

Optimization

Frontmatter
Characteristic Formulae for Session Types

Subtyping is a crucial ingredient of session type theory and its applications, notably to programming language implementations. In this paper, we study effective ways to check whether a session type is a subtype of another by applying a characteristic formulae approach to the problem. Our core contribution is an algorithm to generate a modal $\mu$-calculus formula that characterises all the supertypes (or subtypes) of a given type. Subtyping checks can then be off-loaded to model checkers, thus incidentally yielding an efficient algorithm to check safety of session types, soundly and completely. We have implemented our theory and compared its cost with other classical subtyping algorithms.

Julien Lange, Nobuko Yoshida
Bit-Vector Optimization

A variety of applications of Satisfiability Modulo Theories (SMT) require finding a satisfying assignment which optimizes some user-given function. Optimization in the context of SMT is referred to as Optimization Modulo Theories (OMT). Current OMT research is mostly dedicated to optimization in arithmetic domains. This paper is about Optimization modulo Bit-Vectors (OBV). We introduce two OBV algorithms which can easily be implemented in an eager bit-vector solver. We show that an industrial problem of fixing cell placement during the physical design stage of the CAD process can be reduced to optimization modulo either Bit-Vectors (BV) or Linear Integer Arithmetic (LIA). We demonstrate that our resulting OBV tool can solve industrial instances which are out of reach of existing BV and LIA OMT solvers.

Alexander Nadel, Vadim Ryvchin
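
To see what optimization modulo bit-vectors involves, here is a small sketch (ours, using the z3 Python API; maximize_unsigned is a hypothetical helper implementing a generic MSB-first bit-fixing scheme for an unsigned objective, not necessarily either of the two algorithms the paper introduces):

    from z3 import BitVec, BitVecVal, Extract, Solver, ULT, sat

    def maximize_unsigned(constraints, obj, width):
        """Maximize the bit-vector term obj (interpreted as unsigned, of the
        given width) subject to constraints, using an ordinary SMT solver:
        greedily fix the bits of obj from the most significant bit down."""
        s = Solver()
        s.add(constraints)
        if s.check() != sat:
            return None                          # constraints are unsatisfiable
        for i in reversed(range(width)):         # MSB first
            s.push()
            s.add(Extract(i, i, obj) == 1)       # try to force bit i to 1
            feasible = s.check() == sat
            s.pop()
            s.add(Extract(i, i, obj) == BitVecVal(1 if feasible else 0, 1))
        assert s.check() == sat
        return s.model().eval(obj)

    # Example: maximize x subject to x < 100 over an 8-bit unsigned x
    # x = BitVec('x', 8)
    # print(maximize_unsigned([ULT(x, BitVecVal(100, 8))], x, 8))   # -> 99
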
Runtime Monitoring with Union-Find Structures

This paper is concerned with runtime verification of object-oriented software systems. We propose a novel algorithm for monitoring the individual behaviour and interaction of an unbounded number of runtime objects. This allows for evaluating complex correctness properties that take runtime data in terms of object identities into account. In particular, the underlying formal model can express hierarchical interdependencies of individual objects. Currently, the most efficient monitoring approaches for such properties are based on lookup tables. In contrast, the proposed algorithm uses union-find data structures to manage individual instances and thereby accomplishes a significant performance improvement. The time complexity bounds of the very efficient operations on union-find structures transfer to our monitoring algorithm: the execution time of a single monitoring step is guaranteed logarithmic in the number of observed objects. The amortised time is bounded by an inverse of Ackermann’s function. We have implemented the algorithm in our monitoring tool Mufin. Benchmarks show that the targeted class of properties can be monitored extremely efficiently and that runtime overhead is reduced substantially compared to other tools.

Normann Decker, Jannis Harder, Torben Scheffel, Malte Schmitz, Daniel Thoma
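
The sketch below shows a generic union-find structure with union by rank and path compression, the data structure the monitoring algorithm builds on; it is a textbook implementation, independent of Mufin's actual code, with object identities as hypothetical elements.

```python
# Disjoint-set (union-find) structure with union by rank and
# path halving (a form of path compression).

class UnionFind:
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        if x not in self.parent:
            self.parent[x] = x
            self.rank[x] = 0

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return rx
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx                                # union by rank
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return rx

uf = UnionFind()
for obj in ("a", "b", "c"):
    uf.make_set(obj)
uf.union("a", "b")
assert uf.find("a") == uf.find("b") != uf.find("c")
```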

Competition on Software Verification: SV-COMP

Frontmatter
Reliable and Reproducible Competition Results with BenchExec and Witnesses (Report on SV-COMP 2016)

The 5th Competition on Software Verification (SV-COMP 2016) continues the tradition of a thorough comparative evaluation of fully-automatic software verifiers. This report presents the results of the competition and includes a special section that describes how SV-COMP ensures that the experiments are reliably executed, precisely measured, and organized such that the results can be reproduced later. SV-COMP uses BenchExec for controlling and measuring the verification runs, and requires violation witnesses in an exchangeable format whenever a verifier reports that a property is violated. Each witness was validated by two independent and publicly-available witness validators. The tables report the state of the art in software verification in terms of effectiveness and efficiency. The competition used 6 661 verification tasks, each consisting of a C program and a property (reachability, memory safety, termination). SV-COMP 2016 had 35 participating verification systems (22 in 2015) from 16 countries.

Dirk Beyer
2LS for Program Analysis
(Competition Contribution)

2LS is a program analysis tool for C programs built upon the CPROVER infrastructure. 2LS is bit-precise and can verify and refute program assertions. It implements invariant generation techniques, incremental bounded model checking, and incremental k-induction. The competition submission uses an algorithm combining all three techniques, called kIkI (k-invariants and k-induction). As a back end, the competition submission of 2LS uses Glucose 4.0.

Peter Schrammel, Daniel Kroening
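
A minimal sketch of plain k-induction (again via the z3 Python API; the two-bit toy transition system and property are invented for illustration, and 2LS's kIkI additionally synthesises invariants and increases k incrementally). In this toy system the property holds, 1-induction is too weak, and 2-induction succeeds.

```python
# k-induction on a toy system: a and b start true, a' = b, b' = a & b.
# Safety property: a always holds.

from z3 import Bool, Solver, And, Not, sat, unsat

def init(s):
    a, b = s
    return And(a, b)

def trans(s, sn):
    (a, b), (an, bn) = s, sn
    return And(an == b, bn == And(a, b))

def prop(s):
    return s[0]                       # property: a holds

def state(i):
    return (Bool(f"a{i}"), Bool(f"b{i}"))

def k_induction(k):
    ss = [state(i) for i in range(k + 1)]
    path = And(*[trans(ss[i], ss[i + 1]) for i in range(k)])

    base = Solver()                   # any violation within the first k+1 states?
    base.add(init(ss[0]), path, Not(And(*[prop(s) for s in ss])))
    if base.check() == sat:
        return "property violated"

    step = Solver()                   # do k safe consecutive states imply a safe successor?
    step.add(path, *[prop(s) for s in ss[:k]], Not(prop(ss[k])))
    return "proved" if step.check() == unsat else "unknown (increase k)"

print(k_induction(1))                 # unknown: plain induction is too weak here
print(k_induction(2))                 # proved
```
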
CIVL: Applying a General Concurrency Verification Framework to C/Pthreads Programs (Competition Contribution)

CIVL is a framework for the analysis and verification of concurrent programs. The front-end translates C programs that use (subsets of) Pthreads, MPI, OpenMP, or CUDA, alone or in combination, into the intermediate verification language CIVL-C. The back-end uses symbolic execution and model checking techniques to verify a number of safety properties of a CIVL-C program, such as the absence of assertion violations, deadlocks, and out-of-bounds indexing. We submit CIVL for verifying Pthreads programs in the concurrency category.

Manchun Zheng, John G. Edenhofner, Ziqing Luo, Mitchell J. Gerrard, Michael S. Rogers, Matthew B. Dwyer, Stephen F. Siegel
CPA-BAM: Block-Abstraction Memoization with Value Analysis and Predicate Analysis
(Competition Contribution)

The software verification framework CPAchecker is built on basic approaches like CPA and CEGAR. The configuration for SV-COMP'16 uses the concept of block-abstraction memoization and combines it with the parallel execution of value analysis and predicate analysis. The CEGAR loop uses a refinement strategy that prefers to refine the precision of the lightweight value analysis, so that the precision of the predicate analysis remains abstract and concise as long as possible. Using mature analyses like value analysis and predicate analysis allows us to combine the potential of lazy abstraction and interpolation with the benefits of block-abstraction memoization.

Karlheinz Friedberger
CPA-RefSel: CPAchecker with Refinement Selection
(Competition Contribution)

Our submission to SV-COMP'16 is based on the software verification framework CPAchecker. We suggest combining the value analysis and the predicate analysis of the framework, both performing CEGAR based on interpolation. The novelty of our approach is that both analyses perform intra-analysis refinement selection, while a top-level refinement component additionally employs inter-analysis refinement selection. All in all, this allows for an efficient verification process: intra-analysis refinement selection picks a suitable refinement for an analysis, and inter-analysis refinement selection picks the analysis that is most suitable to refine.

Stefan Löwe
DIVINE: Explicit-State LTL Model Checker
(Competition Contribution)

DIVINE is an LLVM-based LTL model checker that follows the standard automata-based approach to explicit-state model checking. It aims at the verification of unmodified parallel C and C++ programs without inputs. To achieve this, DIVINE employs several reduction techniques combined with high-performance parallel and distributed computing.

Vladimír Štill, Petr Ročkai, Jiří Barnat
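
The textbook sequential core of this approach is a nested depth-first search over the product of the program and a Büchi automaton; the sketch below (run on a toy graph, not DIVINE's parallel engine) reports whether a reachable accepting cycle exists.

```python
# Classic nested DFS (CVWY-style): the outer (blue) search visits states
# in post-order; from each accepting state, the inner (red) search looks
# for a cycle back to that state.  Red marks persist across seeds.

def nested_dfs(initial, successors, accepting):
    blue, red = set(), set()

    def dfs_red(s, seed):                 # inner search: cycle through seed?
        red.add(s)
        for t in successors(s):
            if t == seed:
                return True
            if t not in red and dfs_red(t, seed):
                return True
        return False

    def dfs_blue(s):                      # outer search, post-order seeding
        blue.add(s)
        for t in successors(s):
            if t not in blue and dfs_blue(t):
                return True
        return accepting(s) and dfs_red(s, s)

    return dfs_blue(initial)              # True iff an accepting cycle exists

# Toy product automaton: state 2 is accepting and lies on the cycle 2 -> 3 -> 2.
graph = {0: [1], 1: [2], 2: [3], 3: [2]}
print(nested_dfs(0, lambda s: graph[s], lambda s: s == 2))   # True
```
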
Run Forester, Run Backwards!
(Competition Contribution)

This paper briefly describes the Forester tree automata-based shape analyser and its participation in the SV-COMP'16 competition on software verification. In particular, it summarizes the verification approach used by Forester, its architecture and setup for the competition, as well as its strengths and weaknesses observed in the competition run. The paper highlights the newly added counterexample validation and the use of refinable predicate language abstraction.

Lukáš Holík, Martin Hruška, Ondřej Lengál, Adam Rogalewicz, Jiří Šimáček, Tomáš Vojnar
LCTD: Tests-Guided Proofs for C Programs on LLVM
(Competition Contribution)

LCTD is an open source verification tool for C programs. It uses the LLVM compiler framework to instrument programs for verification with the DASH algorithm. LCTD has been submitted to the BitVectorsReach category of SV-COMP 2016.

Olli Saarikivi, Keijo Heljanko
LPI: Software Verification with Local Policy Iteration
(Competition Contribution)

LPI is a module for invariant generation embedded inside the CPAchecker framework. It uses a local policy iteration approach, which allows it to obtain precise numerical invariants. The approach performs computations in the template constraints domain using maximization modulo SMT, and terminates with a potentially over-approximating inductive invariant. Local policy iteration is a sound but incomplete technique which obtains numerical, conjunctive inductive invariants for the analyzed programs. It can prove programs safe by finding a separating inductive invariant, but it cannot find counterexamples to safety. We supply the generated inductive invariant to the k-induction procedure, which terminates with either a counterexample or a proof of safety.

Egor George Karpenkov
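
As a small illustration of a maximization-modulo-SMT query over a template (using z3's Optimize engine; the abstract state and the template "x + y" are invented for this example, and LPI's actual policy-iteration loop is more involved):

```python
# Compute the least bound d such that the template "x + y <= d" holds
# for all states satisfying the current constraints, by maximising the
# template expression under those constraints.

from z3 import Ints, Optimize, And, sat

x, y = Ints("x y")
opt = Optimize()
opt.add(And(0 <= x, x <= 10, 0 <= y, y <= x))    # hypothetical abstract state
handle = opt.maximize(x + y)                     # template expression
assert opt.check() == sat
print("inferred bound: x + y <=", opt.upper(handle))   # 20
```
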
Hunting Memory Bugs in C Programs with Map2Check
(Competition Contribution)

Map2Check is a tool for automatically generating and checking unit tests for C programs. The generation of unit tests is based on assertions extracted from (memory) safety properties, which are generated by the ESBMC tool. In particular, Map2Check checks for SV-COMP invalid-free, invalid-dereference, and memory-leak properties in C programs.

Herbert O. Rocha, Raimundo S. Barreto, Lucas C. Cordeiro
MU-CSeq 0.4: Individual Memory Location Unwindings
(Competition Contribution)

We present the MU-CSeq tool for the verification of multi-threaded C programs with dynamic thread creation, dynamic memory allocation, and pointer arithmetic. It is based on sequentializing the programs over the new notion of individual memory location unwinding (IMU). IMU is derived from the notion of memory unwinding that has been implemented in the previous versions of MU-CSeq. The main concepts of IMU are: (1) the use of multiple write sequences, one for each individual shared memory location that is effectively used in the executions and (2) the use of memory addresses rather than variable names in the operations on the shared memory, which requires a separate table to map write sequences but supports pointer arithmetic.

Ermenegildo Tomasco, Truc L. Nguyen, Omar Inverso, Bernd Fischer, Salvatore La Torre, Gennaro Parlato
Optimized PredatorHP and the SV-COMP Heap and Memory Safety Benchmark
(Competition Contribution)

This paper briefly describes the PredatorHP (Predator Hunting Party) analyzer and its participation in the SV-COMP'16 software verification competition. The paper starts with a brief sketch of the Predator shape analyzer, on which PredatorHP is built by running multiple specialised instances of Predator concurrently. It then explains why this particular mix of Predator instances was chosen, based on characteristics of the SV-COMP benchmark.

Michal Kotoun, Petr Peringer, Veronika Šoková, Tomáš Vojnar
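
A generic sketch of the hunting-party idea: run several differently configured analyses concurrently and report the first definitive verdict, treating "unknown" answers as non-binding. The stand-in analysers and timings below are placeholders, not Predator itself.

```python
# Portfolio-style driver: submit all analysers, accept the first
# definitive verdict (TRUE or FALSE), otherwise report UNKNOWN.

import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def precise_analyser(task):      # stand-in: sound but slow
    time.sleep(0.2)
    return "TRUE"

def bounded_analyser(task):      # stand-in: fast, may give up
    time.sleep(0.05)
    return "UNKNOWN"

def run_party(task, analysers):
    with ThreadPoolExecutor(max_workers=len(analysers)) as pool:
        futures = [pool.submit(a, task) for a in analysers]
        for done in as_completed(futures):
            verdict = done.result()
            if verdict in ("TRUE", "FALSE"):     # definitive answer wins
                return verdict
    return "UNKNOWN"

print(run_party("example.c", [precise_analyser, bounded_analyser]))  # TRUE
```
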
Symbiotic 3: New Slicer and Error-Witness Generation
(Competition Contribution)

Symbiotic 3 is a new generation of the bug-detection tool for C programs. The tool keeps the combination of program instrumentation, slicing, and symbolic execution. Large parts of the tool have been rewritten, in particular the management and instrumentation scripts and the slicer (including the points-to analysis). Further, the symbolic executor Klee has been modified to produce error witnesses. The changes are described along the tool's workflow.

Marek Chalupa, Martin Jonáš, Jiri Slaby, Jan Strejček, Martina Vitovská
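
A minimal sketch of the slicing step: given a dependence relation that maps each statement to the statements it depends on, the backward slice of the slicing criterion is everything reachable from it. The dependence graph below is a hypothetical example, not output of Symbiotic's slicer.

```python
# Backward slicing on a dependence graph: keep exactly the statements
# the criterion transitively depends on.

def backward_slice(deps, criterion):
    slice_, stack = set(), [criterion]
    while stack:
        node = stack.pop()
        if node in slice_:
            continue
        slice_.add(node)
        stack.extend(deps.get(node, []))
    return slice_

deps = {
    "assert(x > 0)": ["x = y + 1"],
    "x = y + 1": ["y = input()"],
    "print(z)": ["z = 42"],          # irrelevant to the assertion
}
print(backward_slice(deps, "assert(x > 0)"))
# contains: assert(x > 0), x = y + 1, y = input()
```
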
Ultimate Automizer with Two-track Proofs
(Competition Contribution)

Ultimate Automizer is a software verification tool that implements an automata-based approach for the analysis of safety and liveness problems. The version that participates in this year's competition is able to analyze non-reachability, memory safety, termination, and overflow problems. In this paper we present the new features of our tool as well as instructions on how to install and use it.

Matthias Heizmann, Daniel Dietsch, Marius Greitschus, Jan Leike, Betim Musa, Claus Schätzle, Andreas Podelski
Vienna Verification Tool: IC3 for Parallel Software
(Competition Contribution)

Recently proposed extensions of the IC3 model checking algorithm offer a powerful new way to symbolically verify software. The Vienna Verification Tool (VVT) implements these techniques with the aim of tackling the problem of parallel software verification. Its SMT-based abstraction mechanisms allow VVT to deal with infinite-state systems. In addition, VVT utilizes a coarse-grained large-block encoding and a variant of Lipton's reduction to reduce the number of interleavings. This paper introduces VVT, its underlying architecture, and its use.

Henning Günther, Alfons Laarman, Georg Weissenbacher
Backmatter
Metadata
Title
Tools and Algorithms for the Construction and Analysis of Systems
Edited by
Marsha Chechik
Jean-François Raskin
Copyright year
2016
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-49674-9
Print ISBN
978-3-662-49673-2
DOI
https://doi.org/10.1007/978-3-662-49674-9