
2014 | Book

Tools and Algorithms for the Construction and Analysis of Systems

20th International Conference, TACAS 2014, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2014, Grenoble, France, April 5-13, 2014. Proceedings

Edited by: Erika Ábrahám, Klaus Havelund

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 20th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2014, which took place in Grenoble, France, in April 2014, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2014. The 42 papers included in this volume (26 research papers, 3 case study papers, 6 regular tool papers, and 7 tool demonstration papers) were carefully reviewed and selected from 161 submissions. In addition, the book contains one invited contribution. The papers are organized in topical sections named: decision procedures and their application in analysis; complexity and termination analysis; modeling and model checking discrete systems; timed and hybrid systems; monitoring, fault detection and identification; competition on software verification; specifying and checking linear time properties; synthesis and learning; quantum and probabilistic systems; as well as tool demonstrations and case studies.

Table of Contents

Frontmatter

Invited Contribution

Variations on Safety

Of special interest in formal verification are safety properties, which assert that the system always stays within some allowed region, in which nothing "bad" happens. Equivalently, a property is a safety property if every violation of it occurs after a finite execution of the system. Thus, a computation violates the property if it has a "bad prefix", all of whose extensions violate the property. The theoretical properties of safety properties, as well as their practical advantages with respect to general properties, have been widely studied. The paper surveys several extensions and variations of safety. We start with bounded and checkable properties, fragments of safety properties that enable even simpler reasoning. We proceed to a reactive setting, where safety properties require the system to stay in a region of states that is both allowed and from which the environment cannot force it out. Finally, we describe a probability-based approach for defining different levels of safety.
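The "bad prefix" view of safety can be made concrete with a small, hypothetical monitor (illustrative only, not from the paper): a safety property is refuted as soon as a finite prefix leaves the allowed region, because no extension of that prefix can repair the violation.

```python
def is_bad_prefix(trace, allowed):
    """A finite trace is a bad prefix once it leaves the allowed region."""
    return any(not allowed(state) for state in trace)

# Toy safety property: the observed value never exceeds 10.
allowed = lambda s: s <= 10

assert not is_bad_prefix([1, 5, 9], allowed)    # prefix stays in the allowed region
assert is_bad_prefix([1, 5, 11, 3], allowed)    # 11 is "bad"; no extension helps
```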

Orna Kupferman

Decision Procedures and Their Application in Analysis

Decision Procedures for Flat Array Properties

We present new decidability results for quantified fragments of theories of arrays. Our decision procedures are fully declarative, parametric in the theories of indexes and elements and orthogonal with respect to known results. We also discuss applications to the analysis of programs handling arrays.

Francesco Alberti, Silvio Ghilardi, Natasha Sharygina
SATMC: A SAT-Based Model Checker for Security-Critical Systems

We present SATMC 3.0, a SAT-based bounded model checker for security-critical systems that stems from a successful combination of encoding techniques originally developed for planning with techniques developed for the analysis of reactive systems. SATMC has been successfully applied in a variety of application domains (security protocols, security-sensitive business processes, and cryptographic APIs) and for different purposes (design-time security analysis and security testing). SATMC strikes a balance between general purpose model checkers and security protocol analyzers as witnessed by a number of important success stories including the discovery of a serious man-in-the-middle attack on the SAML-based Single Sign-On (SSO) for Google Apps, an authentication flaw in the SAML 2.0 Web Browser SSO Profile, and a number of attacks on PKCS#11 Security Tokens. SATMC is integrated and used as back-end in a number of research prototypes (e.g., the AVISPA Tool, Tookan, the SPaCIoS Tool) and industrial-strength tools (e.g., the Security Validator plugin for SAP NetWeaver BPM).

Alessandro Armando, Roberto Carbone, Luca Compagna
IC3 Modulo Theories via Implicit Predicate Abstraction

We present a novel approach for generalizing the IC3 algorithm for invariant checking from finite-state to infinite-state transition systems, expressed over some background theories. The procedure is based on a tight integration of IC3 with Implicit (predicate) Abstraction, a technique that expresses abstract transitions without explicitly computing the abstract system and is incremental with respect to the addition of predicates. In this scenario, IC3 operates only at the Boolean level of the abstract state space, discovering inductive clauses over the abstraction predicates. Theory reasoning is confined within the underlying SMT solver, and applied transparently when performing satisfiability checks. When the current abstraction allows for a spurious counterexample, it is refined by discovering and adding a sufficient set of new predicates. Importantly, this can be done in a completely incremental manner, without discarding the clauses found in the previous search.

The proposed approach has two key advantages. First, unlike current SMT generalizations of IC3, it can handle a wide range of background theories without relying on ad-hoc extensions, such as quantifier elimination or theory-specific clause generalization procedures, which might not always be available and can moreover be inefficient. Second, compared to a direct exploration of the concrete transition system, the use of abstraction gives a significant performance improvement, as our experiments demonstrate.

Alessandro Cimatti, Alberto Griggio, Sergio Mover, Stefano Tonetta
SMT-Based Verification of Software Countermeasures against Side-Channel Attacks

A common strategy for designing countermeasures against side-channel attacks is using randomization techniques to remove the statistical dependency between sensitive data and side-channel emissions. However, this process is both labor intensive and error prone, and currently, there is a lack of automated tools to formally assess how secure a countermeasure really is. We propose the first SMT-solver-based method for formally verifying the security of a countermeasure against such attacks. In addition to checking whether the sensitive data are masked, we also check whether they are perfectly masked, i.e., whether the joint distribution of any d intermediate computation results is independent of the secret key. We encode this verification problem into a series of quantifier-free first-order logic formulas, whose satisfiability can be decided by an off-the-shelf SMT solver. We have implemented the new method in a tool based on the LLVM compiler and the Yices SMT solver. Our experiments on recently proposed countermeasures show that the method is both effective and efficient for practical use.
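A minimal sketch of the perfect-masking condition for d = 1, using brute-force enumeration over 1-bit values instead of the paper's SMT encoding (the function names and the toy computation are illustrative only):

```python
from itertools import product
from collections import Counter

def distribution(intermediate, key):
    """Distribution of an intermediate value over random mask r and input x."""
    counts = Counter()
    for x, r in product([0, 1], repeat=2):
        counts[intermediate(x, key, r)] += 1
    return counts

# Masked computation: the sensitive value x & k is never exposed directly;
# the intermediate (x & k) ^ r is uniformly distributed for every key.
masked = lambda x, k, r: (x & k) ^ r
assert distribution(masked, 0) == distribution(masked, 1)   # perfectly masked

# Unmasked leak: the distribution of x & k depends on the key.
leaky = lambda x, k, r: x & k
assert distribution(leaky, 0) != distribution(leaky, 1)     # key-dependent
```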

Hassan Eldib, Chao Wang, Patrick Schaumont
Detecting Unrealizable Specifications of Distributed Systems

Writing formal specifications for distributed systems is difficult. Even simple consistency requirements often turn out to be unrealizable because of the complicated information flow in the distributed system: not all information is available in every component, and information transmitted from other components may arrive with a delay or not at all, especially in the presence of faults. The problem of checking the distributed realizability of a temporal specification is, in general, undecidable. Semi-algorithms for synthesis, such as bounded synthesis, are only useful in the positive case, where they construct an implementation for a realizable specification, but not in the negative case: if the specification is unrealizable, the search for the implementation never terminates. In this paper, we introduce counterexamples to distributed realizability and present a method for the detection of such counterexamples for specifications given in linear-time temporal logic (LTL). A counterexample consists of a set of paths, each representing a different sequence of inputs from the environment, such that, no matter how the components are implemented, the specification is violated on at least one of these paths. We present a method for finding such counterexamples both for the classic distributed realizability problem and for the distributed realizability problem with faulty nodes. Our method considers, incrementally, larger and larger sets of paths until a counterexample is found. While counterexamples for full LTL may consist of infinitely many paths, we give a semantic characterization such that the required number of paths can be bounded. For this fragment, we thus obtain a decision procedure. Experimental results, obtained with a QBF-based prototype implementation, show that our method finds simple errors very quickly, and that even problems with high combinatorial complexity, like the Byzantine Generals' Problem, are tractable.

Bernd Finkbeiner, Leander Tentrup
Synthesizing Safe Bit-Precise Invariants

Bit-precise software verification is an important and difficult problem. While there has been amazing progress in SAT solving, Satisfiability Modulo Theory of Bit Vectors, and bit-precise Bounded Model Checking, proving bit-precise safety, i.e., synthesizing a safe inductive invariant, remains a challenge. Although the problem is decidable and is reducible to propositional safety by bit-blasting, the approach does not scale in practice. The alternative approach of lifting propositional algorithms to bit-vectors is difficult. In this paper, we propose a novel technique that uses unsound approximations (i.e., neither over- nor under-approximations) for synthesizing sound bit-precise invariants. We prototyped the technique using the Z3/PDR engine and applied it to bit-precise verification of benchmarks from SVCOMP'13. Even with our preliminary implementation, we were able to demonstrate significant (orders of magnitude) performance improvements with respect to bit-precise verification using Z3/PDR directly.

Arie Gurfinkel, Anton Belov, Joao Marques-Silva
PEALT: An Automated Reasoning Tool for Numerical Aggregation of Trust Evidence

We present PEALT, a tool that supports the understanding and validation of mechanisms that numerically aggregate trust evidence from potentially heterogeneous sources. Such mechanisms are expressed in the policy composition language Peal and subjected to vacuity checking, sensitivity analysis of thresholds, and policy refinement. Verification code is generated either by compiling away numerical references prior to constraint solving or by delegating numerical reasoning to Z3, the common back-end constraint solver of PEALT. The former gives compact diagnostics but restricts value ranges and may be space intensive. The latter generates compact verification code, but gives verbose diagnostics and may struggle with multiplicative reasoning. We experimentally compare code generation and verification running times of these methods on randomly generated analyses and on a non-random benchmark modeling majority voting. Our findings suggest both methods have complementary value and may scale up well for the analysis of most realistic case studies.

Michael Huth, Jim Huan-Pu Kuo
GRASShopper: Complete Heap Verification with Mixed Specifications

We present GRASShopper, a tool for compositional verification of heap-manipulating programs against user-provided specifications. What makes our tool unique is its decidable specification language, which supports mixing of assertions expressed in separation logic and first-order logic. The user of the tool can thus take advantage of the succinctness of separation logic specifications and the discipline of local reasoning. Yet, at the same time, she can revert to classical logic in the cases where decidable separation logic fragments are less suited, such as reasoning about constraints on data and heap structures with complex sharing. We achieve this combination of specification languages through a translation to programs whose specifications are expressed in a decidable fragment of first-order logic called GRASS. This logic is well-suited for automation using satisfiability modulo theories solvers. Unlike other tools that provide similar features, our decidability guarantees enable GRASShopper to produce detailed counterexamples for incorrect or underspecified programs. We have found this feature to be invaluable when debugging specifications. We present the underlying philosophy of the tool, describe the major technical challenges, and discuss implementation details. We conclude with an evaluation that considers challenging benchmarks such as sorting algorithms and a union/find data structure.

Ruzica Piskac, Thomas Wies, Damien Zufferey

Complexity and Termination Analysis

Alternating Runtime and Size Complexity Analysis of Integer Programs

We present a modular approach to automatic complexity analysis. Based on a novel alternation between finding symbolic time bounds for program parts and using these to infer size bounds on program variables, we can restrict each analysis step to a small part of the program while maintaining a high level of precision. Extensive experiments with the implementation of our method demonstrate its performance and power in comparison with other tools.

Marc Brockschmidt, Fabian Emmes, Stephan Falke, Carsten Fuhs, Jürgen Giesl
Proving Nontermination via Safety

We show how the problem of nontermination proving can be reduced to a question of underapproximation search guided by a safety prover. This reduction leads to new nontermination proving implementation strategies based on existing tools for safety proving. Our preliminary implementation beats existing tools. Furthermore, our approach leads to easy support for programs with unbounded nondeterminism.
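The reduction can be illustrated with a toy recurrence-set check (not the paper's implementation): a program is nonterminating if some reachable set of states R is closed under the transition relation, i.e., every state in R has a successor back in R, and a safety prover can be used to search for such a set.

```python
def is_recurrence_set(R, successors):
    """Every state in R must have at least one successor back in R."""
    return all(any(s in R for s in successors(state)) for state in R)

# Toy loop: while x != 0: x = -x.  Starting from 1, the states alternate
# between 1 and -1 forever, so {1, -1} witnesses nontermination.
succ = lambda x: [-x] if x != 0 else []

assert is_recurrence_set({1, -1}, succ)       # closed: 1 -> -1 -> 1 -> ...
assert not is_recurrence_set({0, 1}, succ)    # 0 has no successor at all
```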

Hong-Yi Chen, Byron Cook, Carsten Fuhs, Kaustubh Nimkar, Peter O’Hearn
Ranking Templates for Linear Loops

We present a new method for the constraint-based synthesis of termination arguments for linear loop programs based on linear ranking templates. Linear ranking templates are parametrized, well-founded relations such that an assignment to the parameters gives rise to a ranking function. This approach generalizes existing methods and enables us to use templates for many different ranking functions with affine-linear components. We discuss templates for multiphase, piecewise, and lexicographic ranking functions. Because these ranking templates require both strict and non-strict inequalities, we use Motzkin's Transposition Theorem instead of Farkas' Lemma to transform the generated ∃∀-constraint into an ∃-constraint.
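Once template parameters have been instantiated, the resulting ranking argument is easy to validate. The following sketch is illustrative only: it checks sampled states rather than discharging the universally quantified constraint, which is what the paper's constraint-based synthesis handles symbolically.

```python
def check_linear_ranking(f, step, guard, samples):
    """On sampled states satisfying the guard, f must be non-negative
    and strictly decrease across one loop iteration."""
    return all(f(x) >= 0 and f(step(x)) < f(x) for x in samples if guard(x))

# Toy loop: while x > 0: x = x - 3.  Candidate ranking function f(x) = x.
guard = lambda x: x > 0
step = lambda x: x - 3
f = lambda x: x

assert check_linear_ranking(f, step, guard, range(1, 100))
# A non-decreasing candidate is rejected on the same samples.
assert not check_linear_ranking(lambda x: 0, step, guard, range(1, 100))
```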

Jan Leike, Matthias Heizmann

Modeling and Model Checking Discrete Systems

FDR3 — A Modern Refinement Checker for CSP

FDR3 is a complete rewrite of the CSP refinement checker FDR2, incorporating a significant number of enhancements. In this paper we describe the operation of FDR3 at a high level and then give a detailed description of several of its more important innovations. This includes the new multi-core refinement-checking algorithm that is able to achieve a near-linear speed-up as the number of cores increases. Further, we describe the new algorithm that FDR3 uses to construct its internal representation of CSP processes; this algorithm is more efficient than FDR2's, and is able to compile a large class of CSP processes to more efficient internal representations. We also present experimental results that compare FDR3 to related tools, which show it is unique (as far as we know) in being able to scale beyond the bounds of main memory.

Thomas Gibson-Robinson, Philip Armstrong, Alexandre Boulgakov, Andrew W. Roscoe
Concurrent Depth-First Search Algorithms

We present concurrent algorithms, based on depth-first search, for three problems relevant to model checking: given a state graph, to find its strongly connected components, which states are in loops, and which states are in “lassos”. Our algorithms typically exhibit about a four-fold speed-up over the corresponding sequential algorithms on an eight-core machine.
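As a point of reference, the sequential baseline for the first of these problems, computing strongly connected components via depth-first search, can be sketched with Tarjan's classic algorithm; the paper's concurrent algorithms are considerably more involved.

```python
def tarjan_scc(graph):
    """Tarjan's sequential SCC algorithm over an adjacency-list dict."""
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []

    def visit(v):
        index[v] = low[v] = len(index)
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:              # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop(); on_stack.discard(w); scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# 1 -> 2 -> 3 -> 1 forms a cycle; 4 is its own trivial SCC.
g = {1: [2], 2: [3], 3: [1, 4], 4: []}
assert {1, 2, 3} in tarjan_scc(g)
```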

Gavin Lowe
Basic Problems in Multi-View Modeling

Modeling all aspects of a complex system within a single model is a difficult, if not impossible, task. Multi-view modeling is a methodology where different aspects of the system are captured by different models, or views. A key question then is consistency: if different views of a system have some degree of overlap, how can we guarantee that they are consistent, i.e., that they do not contradict each other? In this paper we formulate this and other basic problems in multi-view modeling within an abstract formal framework. We then instantiate this framework in a discrete, finite-state system setting, and study how some key verification and synthesis problems can be solved in that setting.
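In a toy finite-state instantiation, consistency of two views reduces to asking whether some behavior is admitted by both. The following sketch is illustrative only, with views modeled simply as finite sets of behaviors rather than the paper's formal framework.

```python
def consistent(view_a, view_b):
    """Two views are consistent here iff some behavior satisfies both."""
    return bool(view_a & view_b)

# "ac" is a behavior admitted by both views, witnessing consistency.
assert consistent({"ab", "ac"}, {"ac", "ad"})
# Disjoint views contradict each other: no common behavior exists.
assert not consistent({"ab"}, {"cd"})
```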

Jan Reineke, Stavros Tripakis
GPUexplore: Many-Core On-the-Fly State Space Exploration Using GPUs

In recent years, General-Purpose Graphics Processing Units (GPUs) have been successfully applied in multiple application domains to drastically speed up computations. Model checking is an automatic method to formally verify the correctness of a system specification. Such specifications can be viewed as implicit descriptions of a large directed graph or state space, and for most model checking operations, this graph must be analysed. Constructing it, or exploring it on the fly, however, is computationally intensive, so it makes sense to try to implement this for GPUs. In this paper, we explain the limitations involved and how to overcome them. We discuss the possible approaches involving related work, and propose an alternative using a new hash table approach for GPUs. Experimental results with our prototype implementations show significant speed-ups compared to the established sequential counterparts.

Anton Wijs, Dragan Bošnački

Timed and Hybrid Systems

Forward Reachability Computation for Autonomous Max-Plus-Linear Systems

This work discusses the computation of forward reachability for autonomous (that is, deterministic) Max-Plus-Linear (MPL) systems, a class of continuous-space discrete-event models that are relevant for applications dealing with synchronization and scheduling. Given an MPL model and a set of initial states, we characterize and compute its "reach tube," namely the sequential collection of the sets of reachable states (these sets are regarded step-wise as "reach sets"). We show that the exact computation of the reach sets can be quickly and compactly performed by manipulations of difference-bound matrices, and derive explicit worst-case bounds for the complexity of these operations. The concepts and techniques are implemented within the toolbox VeriSiMPL, and are practically elucidated by a running example. We further display the computational performance of the approach by two concluding numerical benchmarks: the technique comfortably handles reachability computations over twenty-dimensional MPL models (i.e., models with twenty continuous variables), and it clearly outperforms an alternative state-of-the-art approach in the literature.
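For readers unfamiliar with MPL systems, the dynamics take the form x(k+1) = A ⊗ x(k), where (A ⊗ x)_i = max_j (A_ij + x_j). A minimal sketch of one step of these dynamics follows; the matrix entries are illustrative, not taken from the paper.

```python
def maxplus_step(A, x):
    """One step of max-plus-linear dynamics: (A ⊗ x)_i = max_j (A_ij + x_j)."""
    return [max(a + xi for a, xi in zip(row, x)) for row in A]

# Toy two-component synchronization model (entries are travel/waiting times).
A = [[2, 5],
     [3, 3]]
x0 = [0, 0]

x1 = maxplus_step(A, x0)    # [max(2+0, 5+0), max(3+0, 3+0)]
assert x1 == [5, 3]
x2 = maxplus_step(A, x1)    # [max(2+5, 5+3), max(3+5, 3+3)]
assert x2 == [8, 8]
```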

Dieky Adzkiya, Bart De Schutter, Alessandro Abate
Compositional Invariant Generation for Timed Systems

In this paper we address the state space explosion problem inherent to model-checking timed systems with a large number of components. The main challenge is to obtain pertinent global timing constraints from the timings in the components alone. To this end, we make use of auxiliary clocks to automatically generate new invariants which capture the constraints induced by the synchronisations between components. The method has been implemented as an extension of the D-Finder tool and successfully experimented on several benchmarks.

Lacramioara Aştefănoaei, Souha Ben Rayana, Saddek Bensalem, Marius Bozga, Jacques Combaz
Characterizing Algebraic Invariants by Differential Radical Invariants

We prove that any invariant algebraic set of a given polynomial vector field can be algebraically represented by one polynomial and a finite set of its successive Lie derivatives. This so-called differential radical characterization relies on a sound abstraction of the reachable set of solutions by the smallest variety that contains it. The characterization leads to a differential radical invariant proof rule that is sound and complete, which implies that invariance of algebraic equations over real-closed fields is decidable. Furthermore, the problem of generating invariant varieties is shown to be as hard as minimizing the rank of a symbolic matrix, and is therefore NP-hard. We investigate symbolic linear algebra tools based on Gaussian elimination to efficiently automate the generation. The approach can, e.g., generate nontrivial algebraic invariant equations capturing the airplane behavior during take-off or landing in longitudinal motion.
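A textbook-style illustration of the underlying Lie-derivative machinery (not an example from the paper): for a rotational vector field, a circle is an invariant algebraic set because the first Lie derivative of its defining polynomial already vanishes.

```latex
% Illustrative example, not taken from the paper.
For the vector field $\dot{x} = -y$, $\dot{y} = x$ and the polynomial
$p = x^2 + y^2 - c$, the Lie derivative along the field is
\[
  \mathcal{L}_f p
  \;=\; \frac{\partial p}{\partial x}\,(-y) + \frac{\partial p}{\partial y}\,x
  \;=\; 2x(-y) + 2y\,x
  \;=\; 0 ,
\]
so $p = 0$ (a circle) is an invariant algebraic set: the very first Lie
derivative already lies in the ideal generated by $p$, and the
characterization stabilizes immediately.
```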

Khalil Ghorbal, André Platzer
Quasi-Equal Clock Reduction: More Networks, More Queries

Quasi-equal clock reduction for networks of timed automata replaces equivalence classes of clocks which are equal except for unstable phases, i.e., points in time where these clocks differ in their valuation, by a single representative clock. An existing approach yields significant reductions of the overall verification time but is limited to so-called well-formed networks and local queries, i.e., queries which refer to a single timed automaton only. In this work we present two new transformations. The first, for networks of timed automata, summarises unstable phases without losing information under weaker well-formedness assumptions than needed by the existing approach. The second, for queries, now supports the full query language of Uppaal. We demonstrate that the cost of verifying non-local properties is much lower in transformed networks than in their original counterparts with quasi-equal clocks.

Christian Herrera, Bernd Westphal, Andreas Podelski
Are Timed Automata Bad for a Specification Language? Language Inclusion Checking for Timed Automata

Given a timed automaton $\cal P$ modeling an implementation and a timed automaton $\cal S$ as a specification, language inclusion checking is to decide whether the language of $\cal P$ is a subset of that of $\cal S$. It is known that this problem is undecidable and "this result is an obstacle in using timed automata as a specification language" [2]. This undecidability result, however, does not imply that all timed automata are bad for specification. In this work, we propose a zone-based semi-algorithm for language inclusion checking, which implements simulation reduction based on anti-chains and LU-simulation. Though it is not guaranteed to terminate, we show that it does in many cases through both theoretical and empirical analysis. The semi-algorithm has been incorporated into the PAT model checker, and applied to multiple systems to show its usefulness and scalability.

Ting Wang, Jun Sun, Yang Liu, Xinyu Wang, Shanping Li

Monitoring, Fault Detection and Identification

Formal Design of Fault Detection and Identification Components Using Temporal Epistemic Logic

Automated detection of faults and timely recovery are fundamental features for autonomous critical systems. Fault Detection and Identification (FDI) components are designed to detect faults on-board, by reading data from sensors and triggering predefined alarms.

The design of effective FDI components is an extremely hard problem, due in part to the lack of a complete theoretical foundation and of precise specification and validation techniques.

In this paper, we present the first formal framework for the design of FDI for discrete event systems. We propose a logical language for the specification of FDI requirements that accounts for a wide class of practical requirements, including novel aspects such as maximality and nondiagnosability. The language is equipped with a clear semantics based on temporal epistemic logic. We discuss how to validate the requirements and how to verify that a given FDI component satisfies them. Finally, we develop an algorithm for the synthesis of correct-by-construction FDI components, and report on the applicability of the framework on an industrial case-study coming from aerospace.

Marco Bozzano, Alessandro Cimatti, Marco Gario, Stefano Tonetta
Monitoring Modulo Theories

This paper considers a generic approach to enhancing traditional runtime verification techniques towards first-order theories in order to reason about data. This enables, in particular, the verification of multi-threaded, object-oriented systems. It presents a general framework lifting monitor synthesis for propositional temporal logics to a temporal logic over structures within some first-order theory. To evaluate such temporal properties, SMT solving and classical monitoring of propositional temporal properties are combined. The monitoring procedure was implemented for linear-time temporal logic (LTL) based on the Z3 SMT solver and evaluated with regard to runtime performance.
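The flavor of monitoring with data can be conveyed by a toy first-order monitor; this sketch is illustrative only, since the paper's approach delegates such data reasoning to an SMT solver rather than hard-coding the check. The property monitored here is "for every key, each newly observed value strictly exceeds the previous one".

```python
def monitor(trace):
    """Incrementally check a finite trace of (key, value) observations."""
    last = {}
    for key, value in trace:
        if key in last and value <= last[key]:
            return "violation"          # a bad prefix: the property is refuted
        last[key] = value
    return "no violation so far"        # verdict on a finite prefix

assert monitor([("a", 1), ("b", 5), ("a", 2)]) == "no violation so far"
assert monitor([("a", 1), ("a", 1)]) == "violation"
```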

Normann Decker, Martin Leucker, Daniel Thoma
Temporal-Logic Based Runtime Observer Pairs for System Health Management of Real-Time Systems

We propose a real-time, Realizable, Responsive, Unobtrusive Unit (rt-R2U2) to meet the emerging needs for System Health Management (SHM) of new safety-critical embedded systems like automated vehicles, Unmanned Aerial Systems (UAS), or small satellites. SHM for these systems must be able to handle unexpected situations and adapt specifications quickly during flight testing between closely-timed consecutive missions, not mid-mission, necessitating fast reconfiguration. They must enable more advanced probabilistic reasoning for diagnostics and prognostics while running aboard limited hardware without affecting the certified on-board software. We define and prove correct translations of two real-time projections of Linear Temporal Logic to two types of efficient observer algorithms to continuously assess the status of the system. A synchronous observer yields an instant abstraction of the satisfaction check, whereas an asynchronous observer concretizes this abstraction at a later, a priori known, time. By feeding the system's real-time status into a statistical reasoning unit, e.g., based on Bayesian networks, we enable advanced health estimation and diagnosis. We experimentally demonstrate our novel framework on real flight data from NASA's Swift UAS. By deploying rt-R2U2 on an existing FPGA already built into the standard UAS design and seamlessly intercepting sensor values through read-only observations of the system bus, we avoid the system integration problems of software instrumentation or added hardware. The flexibility of our approach with regard to changes in the monitored specification is not due to the reconfigurability offered by FPGAs; it is a benefit of the modularity of our observers and would also be available on non-reconfigurable hardware platforms such as ASICs.

Thomas Reinbacher, Kristin Yvonne Rozier, Johann Schumann

Competition on Software Verification

Status Report on Software Verification
(Competition Summary SV-COMP 2014)

This report describes the 3rd International Competition on Software Verification (SV-COMP 2014), which is the third edition of a thorough comparative evaluation of fully automatic software verifiers. The reported results represent the state of the art in automatic software verification, in terms of effectiveness and efficiency. The verification tasks of the competition consist of nine categories containing a total of 2 868 C programs, covering bit-vector operations, concurrent execution, control-flow and integer data-flow, device-drivers, heap data structures, memory manipulation via pointers, recursive functions, and sequentialized concurrency. The specifications include reachability of program labels and memory safety. The competition is organized as a satellite event at TACAS 2014 in Grenoble, France.

Dirk Beyer
CBMC – C Bounded Model Checker
(Competition Contribution)

CBMC implements bit-precise bounded model checking for C programs and has been developed and maintained for more than ten years. CBMC verifies the absence of violated assertions under a given loop unwinding bound. Other properties, such as SV-COMP's ERROR labels or memory safety properties, are reduced to assertions via automated instrumentation. Only recently has support for efficiently checking concurrent programs, including support for weak memory models, been added. Thus, CBMC is now capable of finding counterexamples in all of SV-COMP's categories. As its back end, the competition submission of CBMC uses MiniSat 2.2.0.

Daniel Kroening, Michael Tautschnig
CPAchecker with Sequential Combination of Explicit-Value Analyses and Predicate Analyses
(Competition Contribution)

CPAchecker is a framework for software verification, built on the foundations of Configurable Program Analysis (CPA). For SV-COMP'14, we submit a CPAchecker configuration that runs up to five analyses in sequence. The first two analyses of our approach utilize the explicit-value domain for modeling the state space, while the remaining analyses are based on predicate abstraction. In addition, a bit-precise counterexample checker comes into action whenever an analysis finds a counterexample. The combination of conceptually different analyses is key to the success of our verification approach, as it takes the diversity of verification tasks into account.
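The sequential combination can be sketched abstractly as follows; this is a hypothetical driver, not CPAchecker's actual code, and the analyses and counterexample checker below are stand-in stubs.

```python
def sequential_verify(task, analyses, confirm_cex):
    """Run analyses in order; trust an 'unsafe' verdict only if the
    counterexample is confirmed by a bit-precise re-check."""
    for analysis in analyses:
        verdict, cex = analysis(task)
        if verdict == "safe":
            return "safe"
        if verdict == "unsafe" and confirm_cex(task, cex):
            return "unsafe"
        # otherwise: inconclusive, or a spurious counterexample; fall through
    return "unknown"

# Stubs: an explicit-value analysis that gives up, then a predicate
# analysis that reports a counterexample confirmed by the checker.
explicit_value = lambda task: ("unknown", None)
predicate = lambda task: ("unsafe", "error-trace")
bit_precise_check = lambda task, cex: True

assert sequential_verify("task", [explicit_value, predicate],
                         bit_precise_check) == "unsafe"
```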

Stefan Löwe, Mikhail Mandrykin, Philipp Wendler
CPAlien: Shape Analyzer for CPAChecker
(Competition Contribution)

CPAlien is a configurable program analysis framework instance. It uses an extension of the symbolic memory graphs (SMGs) abstract domain for shape analysis of programs manipulating the heap. In particular, CPAlien extends SMGs with a simple integer value analysis in order to handle programs with both pointers and integer data. The current version of CPAlien is an early prototype intended as a basis for future research in the given area. The version submitted for SV-COMP'14 does not contain any shape abstraction, but it is still powerful enough to participate in several categories.

Petr Muller, Tomáš Vojnar
Lazy-CSeq: A Lazy Sequentialization Tool for C
(Competition Contribution)

We describe a version of the lazy sequentialization schema by La Torre, Madhusudan, and Parlato that is optimized for bounded programs, and avoids the re-computation of the local state of each process at each context switch. Lazy-CSeq implements this sequentialization schema for sequentially consistent C programs using POSIX threads. Experiments show that it is very competitive.

Omar Inverso, Ermenegildo Tomasco, Bernd Fischer, Salvatore La Torre, Gennaro Parlato
MU-CSeq: Sequentialization of C Programs by Shared Memory Unwindings
(Competition Contribution)

We implement a new sequentialization algorithm for multi-threaded C programs with dynamic thread creation as a new CSeq module. The basic idea of this novel algorithm is to fix (by a nondeterministic guess) the sequence of write operations in the shared memory and then simulate the behavior of the program according to any scheduling that respects this choice. Simulation is done thread-by-thread, and the thread creation mechanism is replaced by function calls.

Ermenegildo Tomasco, Omar Inverso, Bernd Fischer, Salvatore La Torre, Gennaro Parlato
ESBMC 1.22
(Competition Contribution)

We have implemented an improved memory model for ESBMC which better takes into account C’s memory alignment rules and optimizes the generated SMT formulae. This simultaneously improves ESBMC’s precision and performance.

Jeremy Morse, Mikhail Ramalho, Lucas Cordeiro, Denis Nicole, Bernd Fischer
FrankenBit: Bit-Precise Verification with Many Bits
(Competition Contribution)

Bit-precise software verification is an important and difficult problem. While there has been amazing progress in SAT solving, Satisfiability Modulo the Theory of Bit Vectors, and bit-precise Bounded Model Checking, proving bit-precise safety, i.e., synthesizing a safe inductive invariant, remains a challenge. In this paper, we present FrankenBit — a tool that combines bit-precise invariant synthesis with BMC counterexample search. As the name suggests, FrankenBit combines a large variety of existing verification tools and techniques, including LLBMC, UFO, Z3, Boolector, MiniSAT, and STP.

Arie Gurfinkel, Anton Belov
Predator: A Shape Analyzer Based on Symbolic Memory Graphs
(Competition Contribution)

Predator is a shape analyzer that uses the abstract domain of symbolic memory graphs in order to support various forms of low-level memory manipulation commonly used in optimized C code. This paper briefly describes the verification approach taken by Predator and its strengths and weaknesses revealed during its participation in the Software Verification Competition (SV-COMP’14).

Kamil Dudka, Petr Peringer, Tomáš Vojnar
Symbiotic 2: More Precise Slicing
(Competition Contribution)

Symbiotic 2 keeps the concept and the structure of the original bug-finding tool Symbiotic, but it uses more precise slicing based on a field-sensitive pointer analysis instead of the field-insensitive analysis of the original tool. The paper discusses this improvement and its consequences. We also briefly recall the basic principles of the tool, its strong and weak points, and installation and running instructions. Finally, we comment on the results achieved by Symbiotic 2 in the competition.

Jiri Slaby, Jan Strejček
Ultimate Automizer with Unsatisfiable Cores
(Competition Contribution)

Ultimate Automizer is an automatic software verification tool for C programs. This tool is a prototype implementation of an automata-theoretic approach that allows modular verification of programs. Furthermore, this is the first implementation of a novel interpolation technique where interpolants are obtained not from an interpolating theorem prover but from a combination of a live variable analysis, interprocedural predicate transformers, and unsatisfiable cores.

Matthias Heizmann, Jürgen Christ, Daniel Dietsch, Jochen Hoenicke, Markus Lindenmann, Betim Musa, Christian Schilling, Stefan Wissert, Andreas Podelski
Ultimate Kojak
(Competition Contribution)

Ultimate Kojak is a symbolic software model checker for C programs. It is based on CEGAR and Craig interpolation. The basic algorithm, described in earlier work [1], was extended to deal with recursive programs using nested word automata and nested (tree) interpolants.

Evren Ermis, Alexander Nutz, Daniel Dietsch, Jochen Hoenicke, Andreas Podelski

Specifying and Checking Linear Time Properties

Discounting in LTL

In recent years, there has been a growing need and interest in formalizing and reasoning about the quality of software and hardware systems. As opposed to traditional verification, where one asks whether or not a system satisfies a given specification, reasoning about quality addresses the question of how well the system satisfies the specification. One direction in this effort is to refine the “eventually” operators of temporal logic to discounting operators: the satisfaction value of a specification is a value in [0,1], and the longer it takes to fulfill eventuality requirements, the smaller the satisfaction value is.

In this paper we introduce an augmentation by discounting of Linear Temporal Logic (LTL), and study it, as well as its combination with propositional quality operators. We show that one can augment LTL with an arbitrary set of discounting functions, while preserving the decidability of the model-checking problem. Further augmenting the logic with unary propositional quality operators preserves decidability, whereas adding an average-operator makes the model-checking problem undecidable. We also discuss the complexity of the problem, as well as various extensions.
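As an illustration of the idea of discounted eventualities — our own sketch of one standard formulation, not necessarily the exact semantics defined in the paper — an "eventually" operator parameterized by a discounting function can be given quantitative semantics as follows:

```latex
% Sketch: a discounted "eventually" operator F_eta over an infinite
% computation \pi, where \pi^i denotes the suffix of \pi from position i
% and \eta : \mathbb{N} \to [0,1] is a strictly decreasing discounting
% function, e.g. \eta(i) = \lambda^i for some fixed 0 < \lambda < 1.
[\![ F_\eta\, \varphi ]\!](\pi) \;=\; \sup_{i \ge 0}\; \eta(i) \cdot [\![ \varphi ]\!](\pi^i)
```

Under such a semantics, the later the position $i$ at which $\varphi$ is fulfilled, the more the discount $\eta(i)$ reduces the overall satisfaction value, matching the intuition described in the abstract.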

Shaull Almagor, Udi Boker, Orna Kupferman
Symbolic Model Checking of Stutter-Invariant Properties Using Generalized Testing Automata

In previous work, we showed that a kind of ω-automata known as Transition-based Generalized Testing Automata (TGTA) can outperform the Büchi automata traditionally used for explicit model checking when verifying stutter-invariant properties.

In this work, we investigate the use of these generalized testing automata to improve symbolic model checking of stutter-invariant LTL properties. We propose an efficient symbolic encoding of stuttering transitions in the product between a model and a TGTA. Saturation techniques available for decision diagrams then benefit from the presence of stuttering self-loops on all states of a TGTA. Experimentation with this approach confirms that it outperforms the symbolic approach based on (transition-based) Generalized Büchi Automata.

Ala Eddine Ben Salem, Alexandre Duret-Lutz, Fabrice Kordon, Yann Thierry-Mieg

Synthesis and Learning

Symbolic Synthesis for Epistemic Specifications with Observational Semantics

The paper describes a framework for the synthesis of protocols for distributed and multi-agent systems from specifications that give a program structure that may include variables in place of conditional expressions, together with specifications in a temporal epistemic logic that constrain the values of these variables. The epistemic operators are interpreted with respect to an observational semantics. The framework generalizes the notion of knowledge-based program proposed by Fagin et al. (Dist. Comp. 1997). An algorithmic approach to the synthesis problem is developed that computes all solutions, using a reduction to epistemic model checking; it has been implemented using symbolic techniques. An application of the approach to synthesizing mutual exclusion protocols is presented.

Xiaowei Huang, Ron van der Meyden
Synthesis for Human-in-the-Loop Control Systems

Several control systems in safety-critical applications involve the interaction of an autonomous controller with one or more human operators. Examples include pilots interacting with an autopilot system in an aircraft, and a driver interacting with automated driver-assistance features in an automobile. The correctness of such systems depends not only on the autonomous controller, but also on the actions of the human controller. In this paper, we present a formalism for human-in-the-loop (HuIL) control systems. Particularly, we focus on the problem of synthesizing a semi-autonomous controller from high-level temporal specifications that expect occasional human intervention for correct operation. We present an algorithm for this problem, and demonstrate its operation on problems related to driver assistance in automobiles.

Wenchao Li, Dorsa Sadigh, S. Shankar Sastry, Sanjit A. Seshia
Learning Regular Languages over Large Alphabets

This work is concerned with regular languages defined over large alphabets, either infinite or just too large to be expressed enumeratively. We define a generic model where transitions are labeled by elements of a finite partition of the alphabet. We then extend Angluin’s L* algorithm for learning regular languages from examples to such automata. We have implemented this algorithm, and we demonstrate its behavior where the alphabet is the set of natural numbers.
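To make the underlying model concrete — this is our own minimal illustration of an automaton whose transitions are labeled by a finite partition of a large alphabet, not the paper's implementation; the states, intervals, and helper names below are hypothetical:

```python
# Sketch of a symbolic automaton over the natural numbers: instead of
# one transition per letter, each transition is labeled by an interval
# [lo, hi) from a finite partition of N (illustrative example only).

def make_transition(partition_edges):
    """partition_edges maps (state, (lo, hi)) -> next_state, where the
    [lo, hi) intervals partition N for each state."""
    def step(state, letter):
        for (src, (lo, hi)), target in partition_edges.items():
            if src == state and lo <= letter < hi:
                return target
        raise ValueError("intervals do not cover this letter")
    return step

# Two-state example: accept exactly the words whose letters are all < 100.
edges = {
    ("q0", (0, 100)): "q0",              # small letters keep us accepting
    ("q0", (100, float("inf"))): "sink", # any large letter is fatal
    ("sink", (0, float("inf"))): "sink",
}
step = make_transition(edges)

def accepts(word, start="q0", accepting={"q0"}):
    state = start
    for letter in word:
        state = step(state, letter)
    return state in accepting

print(accepts([3, 57, 99]))   # True
print(accepts([3, 1000, 5]))  # False
```

The point of the symbolic representation is that the learner manipulates finitely many interval labels rather than enumerating an unbounded alphabet, which is what makes an L*-style algorithm feasible over the natural numbers.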

Oded Maler, Irini-Eleftheria Mens

Quantum and Probabilistic Systems

Verification of Concurrent Quantum Protocols by Equivalence Checking

We present a tool which uses a concurrent language for describing quantum systems and performs verification by checking equivalence between specification and implementation. In general, simulation of quantum systems using current computing technology is infeasible. We restrict ourselves to the stabilizer formalism, for which there are efficient simulation algorithms. In particular, we consider concurrent quantum protocols that behave functionally in the sense of computing a deterministic input-output relation for all interleavings of the concurrent system. Crucially, these input-output relations can be abstracted by superoperators, enabling us to take advantage of linearity. This allows us to analyse the behaviour of protocols with arbitrary input by simulating their operation on a finite basis set consisting of stabilizer states. Despite the limitations of the stabilizer formalism, and hence of the range of protocols that can be analysed using this approach, we have applied our equivalence checking tool to specify and verify interesting and practical quantum protocols from teleportation to secret sharing.

Ebrahim Ardeshir-Larijani, Simon J. Gay, Rajagopal Nagarajan
Computing Conditional Probabilities in Markovian Models Efficiently

The fundamentals of probabilistic model checking for Markovian models and temporal properties have been studied extensively in the past 20 years. Research on methods for computing conditional probabilities for temporal properties under temporal conditions is, however, comparably rare. For computing conditional probabilities or expected values under ω-regular conditions in Markov chains, we introduce a new transformation of Markov chains that incorporates the effect of the condition into the model. For Markov decision processes, we show that the task of computing maximal reachability probabilities under reachability conditions is solvable in polynomial time, while it was conjectured to be computationally hard. Using adaptations of known automata-based methods, our algorithm can be generalized to compute the maximal conditional probabilities for ω-regular events under ω-regular conditions. The feasibility of our algorithms is studied in two benchmark examples.
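For context, the quantity being computed is the standard conditional probability of a path event $A$ given a condition $B$ in a Markov chain $\mathcal{M}$ with $\Pr_{\mathcal{M}}(B) > 0$:

```latex
% Standard definition of conditional probability over path events:
\Pr_{\mathcal{M}}(A \mid B) \;=\; \frac{\Pr_{\mathcal{M}}(A \cap B)}{\Pr_{\mathcal{M}}(B)}
```

Per the abstract, the paper's transformation avoids computing this quotient directly: it builds a modified chain that incorporates the condition, so that ordinary (unconditional) probabilities in the transformed model correspond to the conditional probabilities in the original one.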

Christel Baier, Joachim Klein, Sascha Klüppelholz, Steffen Märcker
Permissive Controller Synthesis for Probabilistic Systems

We propose novel controller synthesis techniques for probabilistic systems modelled using stochastic two-player games: one player acts as a controller, the second represents its environment, and probability is used to capture uncertainty arising due to, for example, unreliable sensors or faulty system components. Our aim is to generate robust controllers that are resilient to unexpected system changes at runtime, and flexible enough to be adapted if additional constraints need to be imposed. We develop a permissive controller synthesis framework, which generates multi-strategies for the controller, offering a choice of control actions to take at each time step. We formalise the notion of permissiveness using penalties, which are incurred each time a possible control action is blocked by a multi-strategy. Permissive controller synthesis aims to generate a multi-strategy that minimises these penalties, whilst guaranteeing the satisfaction of a specified system property. We establish several key results about the optimality of multi-strategies and the complexity of synthesising them. Then, we develop methods to perform permissive controller synthesis using mixed integer linear programming and illustrate their effectiveness on a selection of case studies.

Klaus Dräger, Vojtěch Forejt, Marta Kwiatkowska, David Parker, Mateusz Ujma
Precise Approximations of the Probability Distribution of a Markov Process in Time: An Application to Probabilistic Invariance

The goal of this work is to formally abstract a Markov process evolving over a general state space as a finite-state Markov chain, with the objective of precisely approximating the state probability distribution of the Markov process in time. The approach uses a partition of the state space and is based on the computation of the average transition probability between partition sets. In the case of unbounded state spaces, a procedure for precisely truncating the state space within a compact set is provided, together with an error bound that depends on the asymptotic properties of the transition kernel of the Markov process. In the case of compact state spaces, the work provides error bounds that depend on the diameters of the partitions, and as such the errors can be tuned. The method is applied to the problem of computing probabilistic invariance of the model under study, and the result is compared to an alternative approach in the literature.

Sadegh Esmaeil Zadeh Soudjani, Alessandro Abate

Tool Demonstrations

SACO: Static Analyzer for Concurrent Objects

We present the main concepts, usage and implementation of SACO, a static analyzer for concurrent objects. Interestingly, SACO is able to infer both liveness (namely termination and resource boundedness) and safety properties (namely deadlock freedom) of programs based on concurrent objects. The system integrates auxiliary analyses such as points-to and may-happen-in-parallel, which are essential for increasing the accuracy of the aforementioned more complex properties. SACO provides accurate information about the dependencies which may introduce deadlocks, loops whose termination is not guaranteed, and upper bounds on the resource consumption of methods.

Elvira Albert, Puri Arenas, Antonio Flores-Montoya, Samir Genaim, Miguel Gómez-Zamalloa, Enrique Martin-Martin, German Puebla, Guillermo Román-Díez
VeriMAP: A Tool for Verifying Programs through Transformations

We present VeriMAP, a tool for the verification of C programs based on the transformation of constraint logic programs, also called constrained Horn clauses. VeriMAP makes use of Constraint Logic Programming (CLP) as a metalanguage for representing: (i) the operational semantics of the C language, (ii) the program, and (iii) the property to be verified. Satisfiability preserving transformations of the CLP representations are then applied for generating verification conditions and checking their satisfiability. VeriMAP has an interface with various solvers for reasoning about constraints that express the properties of the data (in particular, integers and arrays). Experimental results show that VeriMAP is competitive with respect to state-of-the-art tools for program verification.

Emanuele De Angelis, Fabio Fioravanti, Alberto Pettorossi, Maurizio Proietti
CIF 3: Model-Based Engineering of Supervisory Controllers

The engineering of supervisory controllers for large and complex cyber-physical systems requires dedicated engineering support. The Compositional Interchange Format language and toolset have been developed for this purpose. We highlight a model-based engineering framework for the engineering of supervisory controllers and explain how the CIF language and accompanying tools can be used for typical activities in that framework, such as modeling, supervisory control synthesis, simulation-based validation, verification, visualization, real-time testing, and code generation. We mention a number of case studies for which this approach was used in the recent past. We discuss future developments on the level of language and tools as well as research results that may be integrated in the longer term.

D. A. van Beek, W. J. Fokkink, D. Hendriks, A. Hofkamp, J. Markovski, J. M. van de Mortel-Fronczak, M. A. Reniers
EDD: A Declarative Debugger for Sequential Erlang Programs

Declarative debuggers are semi-automatic debugging tools that abstract the execution details to focus on the program semantics. This paper presents a tool implementing this approach for the sequential subset of Erlang, a functional language with dynamic typing and strict evaluation. Given an erroneous computation, it first detects an erroneous function (either a “named” function or a lambda-abstraction), and then continues the process to identify the fragment of the function responsible for the error. Among its features it includes support for exceptions, predefined and built-in functions, higher-order functions, and trusting and undo commands.

Rafael Caballero, Enrique Martin-Martin, Adrian Riesco, Salvador Tamarit
APTE: An Algorithm for Proving Trace Equivalence

This paper presents APTE, a new tool for automatically proving the security of cryptographic protocols. It focuses on proving trace equivalence between processes, which is crucial for specifying privacy-type properties such as anonymity and unlinkability.

The tool can handle protocols expressed in a calculus similar to the applied pi-calculus, which allows us to capture most existing protocols that rely on classical cryptographic primitives. In particular, APTE handles private channels and else branches in protocols with a bounded number of sessions. Unlike most equivalence verifier tools, APTE is guaranteed to terminate. Moreover, APTE is the only tool that extends the usual notion of trace equivalence by considering “side-channel” information leaked to the attacker, such as the length of messages and execution times. We illustrate APTE on different case studies which allowed us to automatically (re)discover attacks on protocols such as the Private Authentication protocol or the protocols of electronic passports.

Vincent Cheval
The Modest Toolset: An Integrated Environment for Quantitative Modelling and Verification

Probabilities, real-time behaviour and continuous dynamics are the key ingredients of quantitative models enabling formal studies of non-functional properties such as dependability and performance. The Modest Toolset is based on networks of stochastic hybrid automata (SHA) as an overarching semantic foundation. Many existing automata-based formalisms are special cases of SHA. The toolset aims to facilitate reuse of modelling expertise via Modest, a high-level compositional modelling language; to allow reuse of existing models by providing import and export facilities for existing languages; and to permit reuse of existing tools by integrating them in a unified modelling and analysis environment.

Arnd Hartmanns, Holger Hermanns
Bounds2: A Tool for Compositional Multi-parametrised Verification

Bounds2 is a two-part tool for parametrised verification. The instance generator inputs a parametrised system implementation and specification, computes cut-offs for the values of the parameters, and outputs the specification and implementation instances up to the cut-offs. The output instances are then verified using an instance checker. Bounds2 is unique in that it lends support to compositional reasoning through three refinement-based notions of correctness and allows for parametrising not only the number of processes but also the size of data types as well as the structure of a system. Bounds2 provides a sound and complete approach to parametrised verification under explicit assumptions checked automatically by the tool. The decidable fragment covers, e.g., mutual exclusion properties of systems with shared resources.

Antti Siirtola

Case Studies

On the Correctness of a Branch Displacement Algorithm

The branch displacement problem is a well-known problem in assembler design. It revolves around the feature, present in several processor families, of having different instructions, of different sizes, for jumps of different displacements. The problem, which is provably NP-hard, is then to select the instructions such that one ends up with the smallest possible program.

During our research with the CerCo project on formally verifying a C compiler, we have implemented and proven correct an algorithm for this problem. In this paper, we discuss the problem, possible solutions, our specific solutions and the proofs.

Jaap Boender, Claudio Sacerdoti Coen
Analyzing the Next Generation Airborne Collision Avoidance System

The next generation airborne collision avoidance system, ACAS X, departs from the traditional deterministic model on which the current system, TCAS, is based. To increase robustness, ACAS X relies on probabilistic models to represent the various sources of uncertainty. The work reported in this paper identifies verification challenges for ACAS X, and studies the applicability of probabilistic verification and synthesis techniques in addressing these challenges. Due to shortcomings of off-the-shelf probabilistic analysis tools, we developed a framework that is designed to handle systems with similar characteristics as ACAS X. We describe the application of our framework to ACAS X.

Christian von Essen, Dimitra Giannakopoulou
Environment-Model Based Testing of Control Systems: Case Studies

A reactive system reacts to an environment it tries to control. Lurette is a black-box testing tool for such closed-loop systems. It focuses on environment modeling using Lutin, a language designed to perform guided random exploration of the System Under Test (SUT) environment, taking into account the feedback. The test decision is automated using Lustre oracles resulting from the formalisation of functional requirements.

In this article, we report on experiments conducted with Lurette on two industrial case studies. One deals with a dynamic system which simulates the behavior of the temperature and the pressure of a fluid in a pipe. The other reports on how Lurette can be used to automate the processing of an existing test booklet of a Supervisory Control and Data Acquisition (SCADA) library module.

Erwan Jahier, Simplice Djoko-Djoko, Chaouki Maiza, Eric Lafont
Backmatter
Metadata
Title
Tools and Algorithms for the Construction and Analysis of Systems
Edited by
Erika Ábrahám
Klaus Havelund
Copyright year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-54862-8
Print ISBN
978-3-642-54861-1
DOI
https://doi.org/10.1007/978-3-642-54862-8