
2017 | Book

Tools and Algorithms for the Construction and Analysis of Systems

23rd International Conference, TACAS 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings, Part I

Edited by: Axel Legay, Tiziana Margaria

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this Book

The two-book set LNCS 10205 + 10206 constitutes the proceedings of the 23rd International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2017, which took place in Uppsala, Sweden in April 2017, held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017.

The 48 full papers, 4 tool demonstration papers, and 12 software competition papers presented in these volumes were carefully reviewed and selected from 181 submissions to TACAS and 32 submissions to the software competition. They were organized in topical sections named: verification techniques; learning; synthesis; automata; concurrency and bisimulation; hybrid systems; security; run-time verification and logic; quantitative systems; SAT and SMT; and SV COMP.

Table of Contents

Frontmatter

Invited Talk

Frontmatter
Validation, Synthesis and Optimization for Cyber-Physical Systems

The growing complexity of Cyber-Physical Systems increasingly challenges existing methods and techniques. What is needed is a new generation of scalable tools for model-based learning, analysis, synthesis and optimization based on a mathematically sound foundation that enables trade-offs between functional safety and quantitative performance. In this paper we illustrate how recent branches of the Uppaal tool suite are making an effort in this direction.

Kim Guldstrand Larsen

Verification Techniques I

Frontmatter
An Abstraction Technique for Parameterized Model Checking of Leader Election Protocols: Application to FTSP

We consider distributed timed systems that implement leader election protocols which are at the heart of clock synchronization protocols. We develop abstraction techniques for parameterized model checking of such protocols under arbitrary network topologies, where nodes have independently evolving clocks. We apply our technique to model checking the root election part of the flooding time synchronization protocol (FTSP), and obtain improved results compared to previous work. We model check the protocol for all topologies in which the distance to the node to be elected leader is bounded by a given parameter.

Ocan Sankur, Jean-Pierre Talpin
Combining String Abstract Domains for JavaScript Analysis: An Evaluation

Strings play a central role in JavaScript and similar scripting languages. Owing to dynamic features such as the eval function and dynamic property access, precise string analysis is a prerequisite for automated reasoning about practically any kind of runtime property. Although the literature presents a considerable number of abstract domains for capturing and representing specific aspects of strings, we are not aware of tools that allow flexible combination of string abstract domains. Indeed, support for string analysis is often confined to a single, dedicated string domain. In this paper we describe a framework that allows us to combine multiple string abstract domains for the analysis of JavaScript programs. It is implemented as an extension of SAFE, an open-source static analysis tool. We investigate different combinations of abstract domains that capture various aspects of strings. Our evaluation suggests that a combination of a few simple abstract domains suffices to outperform the precision of state-of-the-art static analysis tools for JavaScript.

Roberto Amadini, Alexander Jordan, Graeme Gange, François Gauthier, Peter Schachte, Harald Søndergaard, Peter J. Stuckey, Chenyi Zhang
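The idea of combining domains can be illustrated with a direct product of two toy string domains. The domains, names, and join operators below are invented for illustration and are much simpler than the SAFE extension described in the paper.

```python
# Hypothetical sketch of a direct product of two simple string abstract
# domains: a constant-string domain and a string-length interval domain.
# Names and structure are illustrative, not the SAFE implementation.

TOP = object()  # "any string" in the constant-string domain

def const_join(a, b):
    """Constant-string domain: either one known string or TOP."""
    return a if a == b else TOP

def length_join(a, b):
    """Length-interval domain: (lo, hi) bounds on the string length."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def product_join(x, y):
    """Join in the direct product: join component-wise."""
    (c1, l1), (c2, l2) = x, y
    return (const_join(c1, c2), length_join(l1, l2))

def alpha(s):
    """Abstraction of a concrete string into the product domain."""
    return (s, (len(s), len(s)))

# Abstracting the result of a branch assigning either "on" or "off":
v = product_join(alpha("on"), alpha("off"))
# The constant component loses precision (TOP), but the length component
# still knows the result is 2 to 3 characters long.
```

The point of the product is exactly this complementarity: when one component goes to top, another may still retain useful information.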
Invariant Checking of NRA Transition Systems via Incremental Reduction to LRA with EUF

Model checking invariant properties of designs represented as transition systems over non-linear real arithmetic (NRA) is an important though very hard problem. On the one hand, NRA is a hard-to-solve theory; on the other hand, most of the powerful model checking techniques lack support for NRA. In this paper, we present a counterexample-guided abstraction refinement (CEGAR) approach that leverages linearization techniques from differential calculus to enable the use of mature and efficient model checking algorithms for transition systems over linear real arithmetic (LRA) with uninterpreted functions (EUF). The results of an empirical evaluation confirm the validity and potential of this approach.

Alessandro Cimatti, Alberto Griggio, Ahmed Irfan, Marco Roveri, Roberto Sebastiani
Bounded Quantifier Instantiation for Checking Inductive Invariants

We consider the problem of checking whether a proposed invariant $$\varphi $$ expressed in first-order logic with quantifier alternation is inductive, i.e., preserved by a piece of code. While the problem is undecidable, modern SMT solvers can sometimes solve it automatically. However, they employ powerful quantifier instantiation methods that may diverge, especially when $$\varphi $$ is not preserved. A notable difficulty arises due to counterexamples of infinite size.

This paper studies Bounded-Horizon instantiation, a natural method for guaranteeing the termination of SMT solvers. The method bounds the depth of terms used in the quantifier instantiation process. We show that this method is surprisingly powerful for checking quantified invariants in uninterpreted domains. Furthermore, by producing partial models it can help the user diagnose the case when $$\varphi $$ is not inductive, especially when the underlying reason is the existence of infinite counterexamples.

Our main technical result is that Bounded-Horizon is at least as powerful as instrumentation, a manual method that guarantees convergence of the solver by modifying the program so that it admits a purely universal invariant. We show that with a bound of 1 we can simulate a natural class of instrumentations, without the need to modify the code and in a fully automatic way. We also report on a prototype implementation on top of Z3, which we used to verify several examples by Bounded-Horizon of bound 1.

Yotam M. Y. Feldman, Oded Padon, Neil Immerman, Mooly Sagiv, Sharon Shoham
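The depth bound that Bounded-Horizon places on instantiation terms can be pictured with a toy ground-term enumerator. This is a sketch under invented names; the actual method constrains the instantiation engine inside the SMT solver rather than enumerating terms explicitly.

```python
# Toy illustration of a depth bound on ground terms, the quantity that
# Bounded-Horizon instantiation restricts. Terms are nested tuples,
# e.g. ("f", ("c",)) stands for f(c).
from itertools import product

def terms_up_to_depth(consts, funcs, bound):
    """Enumerate all ground terms of depth <= bound, built from the
    given constants and function symbols (funcs maps symbol -> arity)."""
    terms = {(c,) for c in consts}            # depth-0 terms: the constants
    for _ in range(bound):
        new = set(terms)
        for f, arity in funcs.items():
            for args in product(terms, repeat=arity):
                new.add((f,) + args)
        terms = new
    return terms

# Bound 2 over one constant c and one unary symbol f yields the finite
# set {c, f(c), f(f(c))}; an unbounded engine could diverge on f(f(f(...))).
ts = terms_up_to_depth(["c"], {"f": 1}, 2)
```

The finiteness of this set is what makes the bounded instantiation process terminate.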

Verification Techniques II

Frontmatter
Proving Termination Through Conditional Termination

We present a constraint-based method for proving conditional termination of integer programs. Building on this, we construct a framework to prove (unconditional) program termination using a powerful mechanism to combine conditional termination proofs. Our key insight is that a conditional termination proof shows termination for a subset of program execution states which do not need to be considered in the remaining analysis. This facilitates more effective termination as well as non-termination analyses, and allows handling loops with different execution phases naturally. Moreover, our method can deal with sequences of loops compositionally. In an empirical evaluation, we show that our implementation VeryMax outperforms state-of-the-art tools on a range of standard benchmarks.

Cristina Borralleras, Marc Brockschmidt, Daniel Larraz, Albert Oliveras, Enric Rodríguez-Carbonell, Albert Rubio
Efficient Certified Resolution Proof Checking

We present a novel propositional proof tracing format that eliminates complex processing, thus enabling efficient (formal) proof checking. The benefits of this format are demonstrated by implementing a proof checker in C, which outperforms a state-of-the-art checker by two orders of magnitude. We then formalize the theory underlying propositional proof checking in Coq, and extract a correct-by-construction proof checker for our format from the formalization. An empirical evaluation using 280 unsatisfiable instances from the 2015 and 2016 SAT competitions shows that this certified checker usually performs comparably to a state-of-the-art non-certified proof checker. Using this format, we formally verify the recent 200 TB proof of the Boolean Pythagorean Triples conjecture.

Luís Cruz-Filipe, Joao Marques-Silva, Peter Schneider-Kamp
Precise Widening Operators for Proving Termination by Abstract Interpretation

FuncTion is a static analyzer designed for proving conditional termination of C programs by means of abstract interpretation. Its underlying abstract domain is based on piecewise-defined functions, which provide an upper bound on the number of program execution steps until termination as a function of the program variables.

In this paper, we fully parameterize various aspects of the abstract domain, gaining a flexible balance between the precision and the cost of the analysis. We propose heuristics to improve the fixpoint extrapolation strategy (i.e., the widening operator) of the abstract domain. In particular we identify new widening operators, which combine these heuristics to dramatically increase the precision of the analysis while offering good cost compromises. We also introduce a more precise, albeit costly, variable assignment operator and the support for choosing between integer and rational values for the piecewise-defined functions.

We combined these improvements to obtain an implementation of the abstract domain which subsumes the previous implementation. We provide experimental evidence in comparison with state-of-the-art tools showing a considerable improvement in precision at a minor cost in performance.

Nathanaël Courant, Caterina Urban
Automatic Verification of Finite Precision Implementations of Linear Controllers

We consider the problem of verifying finite precision implementations of linear time-invariant controllers against mathematical specifications. A specification may have multiple correct implementations which differ in controller state representation but are equivalent from the perspective of input-output behavior (e.g., due to optimization in a code generator). The implementations may use finite precision computations (e.g., floating-point arithmetic) which cause quantization (i.e., roundoff) errors. To address these challenges, we first extract a controller’s mathematical model from the implementation via symbolic execution and floating-point error analysis, and then check approximate input-output equivalence between the extracted model and the specification by similarity checking. We show how to automatically verify the correctness of floating-point controller implementations in C using a combination of techniques including symbolic execution and convex optimization. We demonstrate the scalability of our approach through evaluation with randomly generated controller specifications of realistic size.

Junkil Park, Miroslav Pajic, Oleg Sokolsky, Insup Lee

Learning

Frontmatter
Learning Symbolic Automata

Symbolic automata allow transitions to carry predicates over rich alphabet theories, such as linear arithmetic, and therefore extend classic automata to operate over infinite alphabets, such as the set of rational numbers. In this paper, we study the foundational problem of learning symbolic automata. We first present $$\mathrm {\Lambda }^*$$, a symbolic-automata extension of Angluin’s L$$^*$$ algorithm for learning regular languages. Then, we define notions of learnability that are parametric in the alphabet theories of the symbolic automata and show how these notions compose nicely. Specifically, we show that if two alphabet theories are learnable, then the theory accepting the Cartesian product or disjoint union of their alphabets is also learnable. Using these properties, we show how existing algorithms for learning automata over large alphabets fall naturally into our framework. Finally, we implement our algorithm in an open-source library and evaluate it on existing automata learning benchmarks.

Samuel Drews, Loris D’Antoni
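One building block of such learners, turning concrete evidence on a transition into a predicate over an infinite alphabet, can be sketched for an interval algebra. The function below is an invented toy; the paper's algorithm is parametric in the alphabet theory and learns one predicate per transition of the hypothesis automaton.

```python
# Hypothetical sketch of per-transition predicate learning over an
# infinite alphabet (here: integers), one ingredient that a symbolic
# automata learner needs on top of classic L*-style learning.

def learn_interval(positives, negatives):
    """Generalize labeled characters into an interval predicate:
    accept everything between the extreme positive witnesses, provided
    no negative witness falls inside that range."""
    lo, hi = min(positives), max(positives)
    if any(lo <= n <= hi for n in negatives):
        raise ValueError("no consistent interval predicate")
    return lambda x: lo <= x <= hi

# Evidence observed for one transition during learning: the characters
# 3, 7, 5 took this transition; 0 and 42 did not.
pred = learn_interval(positives=[3, 7, 5], negatives=[0, 42])
```

The learned predicate then labels a single transition edge, so finitely many witnesses stand in for infinitely many concrete characters.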
ML for ML: Learning Cost Semantics by Experiment

It is an open problem in static resource bound analysis to connect high-level resource bounds with the actual execution time and memory usage of compiled machine code. This paper proposes to use machine learning to derive a cost model for a high-level source language that approximates the execution cost of compiled programs on a specific hardware platform. The proposed technique starts by fixing a cost semantics for the source language in which certain constants are unknown. To learn the constants for a specific hardware platform, a machine learning algorithm measures the resource cost of a set of training programs and compares the cost with the prediction of the cost semantics. The quality of the learned cost model is evaluated by comparing the model with the measured cost on a set of independent control programs. The technique has been implemented for a subset of OCaml using Inria’s OCaml compiler on Intel x86-64 and ARM 64-bit v8-A platforms. The resources considered in the implementation are heap allocations and execution time. The training programs are deliberately simple, handwritten micro benchmarks, and the control programs are retrieved from the standard library, an OCaml online tutorial, and local OCaml projects. Different machine learning techniques are applied, including (weighted) linear regression and (weighted) robust regression. To model the execution time of programs with garbage collection (GC), the system combines models for memory allocations and executions without GC, which are derived first. Experiments indicate that the derived cost semantics for the number of heap allocations on both hardware platforms is accurate. The error of the cost semantics on the control programs for the x86-64 architecture for execution time with and without GC is about 19.80% and 13.04%, respectively. The derived cost semantics are combined with RAML, a state-of-the-art system for automatically deriving resource bounds for OCaml programs. Using these semantics, RAML is for the first time able to make predictions about the actual worst-case execution time.

Ankush Das, Jan Hoffmann
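The constant-fitting step can be sketched with ordinary least squares over operation counts. The cost model, counts, and measurements below are invented for illustration; the paper fits real hardware measurements and also uses weighted and robust regression variants.

```python
# Illustrative sketch (names and numbers are made up): fix a cost
# semantics  cost = k_call * n_calls + k_alloc * n_allocs  with unknown
# constants, measure training programs, and recover the constants by
# ordinary least squares via the 2x2 normal equations.

def fit_two_constants(samples):
    """samples: list of ((n_calls, n_allocs), measured_cost)."""
    # Accumulate the entries of X^T X and X^T y.
    a = b = d = e = f = 0.0
    for (x1, x2), y in samples:
        a += x1 * x1; b += x1 * x2; d += x2 * x2
        e += x1 * y;  f += x2 * y
    det = a * d - b * b
    # Solve the normal equations by Cramer's rule.
    k_call = (e * d - b * f) / det
    k_alloc = (a * f - b * e) / det
    return k_call, k_alloc

# Synthetic "measurements" generated from k_call = 2, k_alloc = 5:
train = [((10, 3), 35.0), ((4, 8), 48.0), ((7, 1), 19.0)]
k_call, k_alloc = fit_two_constants(train)
```

On noise-free data the fit recovers the generating constants exactly; with real measurements the residual on held-out control programs plays the role of the error rates quoted in the abstract.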
A Novel Learning Algorithm for Büchi Automata Based on Family of DFAs and Classification Trees

In this paper, we propose a novel algorithm to learn a Büchi automaton from a teacher who knows an $$\omega $$-regular language. The algorithm is based on learning a formalism named family of DFAs (FDFAs), recently proposed by Angluin and Fisman [10]. The main catch is that we use a classification tree structure instead of the standard observation table structure. The worst-case storage space required by our algorithm is quadratically better than that of the table-based algorithm proposed in [10]. We implement the first publicly available library ROLL (Regular Omega Language Learning), which consists of all $$\omega $$-regular learning algorithms available in the literature and the new algorithms proposed in this paper. Experimental results show that our tree-based algorithms perform best among the tested algorithms with regard to the number of solved learning tasks.

Yong Li, Yu-Fang Chen, Lijun Zhang, Depeng Liu

Synthesis I

Frontmatter
Hierarchical Network Formation Games

Classical network-formation games (NFGs) are played on directed graphs, and are used in network design and analysis. Edges in the network are associated with costs and players have reachability objectives, which they try to fulfill at a minimal cost. When several players use the same edge, they share its cost. The theoretical and practical aspects of NFGs have been extensively studied and are well understood. All studies of NFGs, however, consider an explicit representation of the network. In practice, networks are often built in a hierarchical manner. Technically, some of the vertices in the network are boxes, associated with nested sub-networks, where a sub-network may be “called” by several boxes in the network. This makes hierarchical networks exponentially more succinct than traditional “flat” networks.

We introduce hierarchical network formation games (HNFGs) and study theoretical and practical aspects of the hierarchical setting. Different applications call for different cost-sharing mechanisms, which define how edge-formation costs are shared by their users. Indeed, in some applications, cost sharing should refer to the flat expansion of the network and in some it should take into account the hierarchical structure of the network. We study properties of HNFGs like stability and equilibrium inefficiency in the different mechanisms. We also study computational aspects of HNFGs, where the principal question is whether their exponential succinctness with respect to NFGs leads to an exponential increase in the complexity of reasoning about them. This question is analogous to research done in the formal-verification community about the ability to model-check hierarchical systems in their succinct presentation. We show that the picture is diverse and depends on the mechanism applied.

Orna Kupferman, Tami Tamir
Synthesis of Recursive ADT Transformations from Reusable Templates

Recent work has proposed a promising approach to improving scalability of program synthesis by allowing the user to supply a syntactic template that constrains the space of potential programs. Unfortunately, creating templates often requires nontrivial effort from the user, which impedes the usability of the synthesizer. We present a solution to this problem in the context of recursive transformations on algebraic data types. Our approach relies on polymorphic synthesis constructs: a small but powerful extension to the language of syntactic templates, which makes it possible to define a program space in a concise and highly reusable manner, while at the same time retaining the scalability benefits of conventional templates. This approach enables end-users to reuse predefined templates from a library for a wide variety of problems with little effort. The paper also describes a novel optimization that further improves the performance and the scalability of the system. We evaluated the approach on a set of benchmarks that most notably includes desugaring functions for lambda calculus, which force the synthesizer to discover Church encodings for pairs and boolean operations.

Jeevana Priya Inala, Nadia Polikarpova, Xiaokang Qiu, Benjamin S. Lerner, Armando Solar-Lezama
Counterexample-Guided Model Synthesis

In this paper we present a new approach for solving quantified formulas in Satisfiability Modulo Theories (SMT), with a particular focus on the theory of fixed-size bit-vectors. We combine counterexample-guided quantifier instantiation with a syntax-guided synthesis approach, which allows us to synthesize both Skolem functions and terms for quantifier instantiations. Our approach employs two ground theory solvers to reason about quantified formulas. It neither relies on quantifier specific simplifications nor heuristic quantifier instantiation techniques, which makes it a simple yet effective approach for solving quantified formulas. We implemented our approach in our SMT solver Boolector and show in our experiments that our techniques are competitive compared to the state-of-the-art in solving quantified bit-vectors.

Mathias Preiner, Aina Niemetz, Armin Biere
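The counterexample-guided loop can be sketched over a tiny bit-vector domain. Everything below (the formula, the candidate term grammar, the brute-force verifier) is an invented stand-in for the paper's architecture of two ground theory solvers.

```python
# Toy CEGIS-style loop, much simplified from the paper's approach:
# synthesize a Skolem term for y in  forall x. (x ^ y) == 0  over
# 4-bit values, guided by accumulated counterexamples.

WIDTH = 16                                   # 4-bit domain: values 0..15
CANDIDATES = [lambda x: 0,                   # tiny stand-in term grammar
              lambda x: ~x & 15,
              lambda x: x]

def verify(skolem):
    """Ground 'verification solver': search for a counterexample input."""
    for x in range(WIDTH):
        if (x ^ skolem(x)) != 0:
            return x
    return None

def cegis():
    cex = []                                 # accumulated counterexamples
    for cand in CANDIDATES:                  # 'synthesis solver' stand-in
        if all((x ^ cand(x)) == 0 for x in cex):   # consistent so far?
            bad = verify(cand)
            if bad is None:
                return cand                  # valid Skolem function found
            cex.append(bad)
    return None

skolem = cegis()                             # ends up with y = x
```

The counterexamples let the synthesis side discard the constant-0 and bitwise-not candidates without re-running full verification on them.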
Interpolation-Based GR(1) Assumptions Refinement

This paper considers the problem of assumptions refinement in the context of unrealizable specifications for reactive systems. We propose a new counterstrategy-guided synthesis approach for GR(1) specifications based on Craig interpolants. Our interpolation-based method identifies causes of unrealizability and computes assumptions that directly target unrealizable cores, without the need for user input. We discuss how this property reduces the maximum number of steps needed to converge to realizability compared with other techniques. We describe properties of interpolants that yield helpful GR(1) assumptions and prove the soundness of the results. Finally, we demonstrate that our approach yields weaker assumptions than baseline techniques.

Davide G. Cavezza, Dalal Alrajeh

Synthesis II

Frontmatter
Connecting Program Synthesis and Reachability: Automatic Program Repair Using Test-Input Generation

We prove that certain formulations of program synthesis and reachability are equivalent. Specifically, our constructive proof shows the reductions between the template-based synthesis problem, which generates a program in a pre-specified form, and the reachability problem, which decides the reachability of a program location. This establishes a link between the two research fields and allows for the transfer of techniques and results between them.

To demonstrate the equivalence, we develop a program repair prototype using reachability tools. We transform a buggy program and its required specification into a specific program containing a location reachable only when the original program can be repaired, and then apply an off-the-shelf test-input generation tool on the transformed program to find test values to reach the desired location. Those test values correspond to repairs for the original program. Preliminary results suggest that our approach compares favorably to other repair methods.

ThanhVu Nguyen, Westley Weimer, Deepak Kapur, Stephanie Forrest
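The reduction from repair to reachability can be pictured with a toy example: the unknown constant of a repair template becomes an extra program input, and a designated location is reachable exactly when some value of that input passes all tests. The buggy program, template, and tests below are invented.

```python
# Toy sketch of the synthesis-to-reachability reduction used for repair.
# Original buggy statement: y = x + 1. Repair template: y = hole * x.

TESTS = [(0, 0), (1, 2), (2, 4)]    # (input, expected output) pairs

def transformed_program(hole):
    """The template hole is now an input; returning True plays the role
    of the distinguished 'reachable only if repairable' location."""
    for x, expected in TESTS:
        y = hole * x                 # candidate repaired statement
        if y != expected:
            return False
    return True                      # <-- the designated location

def find_repair(candidates):
    """Stand-in for an off-the-shelf test-input generation tool:
    search for an input value that reaches the designated location."""
    for h in candidates:
        if transformed_program(h):
            return h
    return None

repair = find_repair(range(10))      # the input reaching the location
```

The test value that reaches the location, here the constant 2, is read back as the repair y = 2 * x.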
Scaling Enumerative Program Synthesis via Divide and Conquer

Given a semantic constraint specified by a logical formula, and a syntactic constraint specified by a context-free grammar, the Syntax-Guided Synthesis (SyGuS) problem is to find an expression that satisfies both the syntactic and semantic constraints. An enumerative approach to solve this problem is to systematically generate all expressions from the syntactic space with some pruning, and has proved to be surprisingly competitive in the newly started competition of SyGuS solvers. It performs well on small to medium sized benchmarks, produces succinct expressions, and has the ability to generalize from input-output examples. However, its performance degrades drastically with the size of the smallest solution. To overcome this limitation, in this paper we propose an alternative approach to solve SyGuS instances.

The key idea is to employ a divide-and-conquer approach by separately enumerating (a) smaller expressions that are correct on subsets of inputs, and (b) predicates that distinguish these subsets. These expressions and predicates are then combined using decision trees to obtain an expression that is correct on all inputs. We view the problem of combining expressions and predicates as a multi-label decision tree learning problem. We propose a novel technique of associating a probability distribution over the set of labels that a sample can be labeled with. This enables us to use standard information-gain based heuristics to learn compact decision trees.

We report a prototype implementation eusolver. Our tool is able to match the running times and the succinctness of solutions of both standard enumerative solver and the latest white-box solvers on most benchmarks from the SyGuS competition. In the 2016 edition of the SyGuS competition, eusolver placed first in the general track and the programming-by-examples track, and placed second in the linear integer arithmetic track.

Rajeev Alur, Arjun Radhakrishna, Abhishek Udupa
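The divide-and-conquer combination step can be sketched on the classic max example. The point set, enumerated expressions, and predicate below are invented; the real solver enumerates these and learns the combining decision tree from them.

```python
# Toy sketch of divide-and-conquer SyGuS: small expressions each correct
# on a subset of sample points, combined by a distinguishing predicate
# into a conditional (a one-node "decision tree").

POINTS = [(1, 5), (7, 2), (3, 3)]                     # sample inputs (x, y)
SPEC = lambda x, y, out: out >= x and out >= y and out in (x, y)  # max spec

exprs = [lambda x, y: x, lambda x, y: y]              # enumerated expressions
preds = [lambda x, y: x >= y]                         # enumerated predicates

# Each expression is correct on a subset of the points:
covers = [{p for p in POINTS if SPEC(*p, e(*p))} for e in exprs]

# A predicate separating the subsets turns the pieces into a solution.
sep = preds[0]
solution = lambda x, y: exprs[0](x, y) if sep(x, y) else exprs[1](x, y)
```

Because each enumerated piece only has to be correct on part of the input space, the pieces stay small even when the full solution would be large, which is exactly the scalability argument of the abstract.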
Towards Parallel Boolean Functional Synthesis

Given a relational specification $$\varphi (X, Y)$$, where X and Y are sequences of input and output variables, we wish to synthesize each output as a function of the inputs such that the specification holds. This is called the Boolean functional synthesis problem and has applications in several areas. In this paper, we present the first parallel approach for solving this problem, using compositional and CEGAR-style reasoning as key building blocks. We show by means of extensive experiments that our approach outperforms existing tools on a large class of benchmarks.

S. Akshay, Supratik Chakraborty, Ajith K. John, Shetal Shah
Encodings of Bounded Synthesis

The reactive synthesis problem is to compute a system satisfying a given specification in temporal logic. Bounded synthesis bounds the maximum size of the system that we accept as a solution to the reactive synthesis problem. As a result, bounded synthesis is decidable whenever the corresponding verification problem is decidable, and can be applied in settings where classic synthesis fails, such as in the synthesis of distributed systems. In this paper, we study the constraint solving problem behind bounded synthesis. We consider different reductions of the bounded synthesis problem of linear-time temporal logic (LTL) to constraint systems given as boolean formulas (SAT), quantified boolean formulas (QBF), and dependency quantified boolean formulas (DQBF). The reductions represent different trade-offs between conciseness and algorithmic efficiency. In the SAT encoding, both inputs and states of the system are represented explicitly; in QBF, inputs are symbolic and states are explicit; in DQBF, both inputs and states are symbolic. We evaluate the encodings systematically using benchmarks from the reactive synthesis competition (SYNTCOMP) and state-of-the-art solvers. Our key, and perhaps surprising, empirical finding is that QBF clearly dominates both SAT and DQBF.

Peter Faymonville, Bernd Finkbeiner, Markus N. Rabe, Leander Tentrup

Tools

Frontmatter
HQSpre – An Effective Preprocessor for QBF and DQBF

We present our new preprocessor HQSpre, a state-of-the-art tool for simplifying quantified Boolean formulas (QBFs) and the first available preprocessor for dependency quantified Boolean formulas (DQBFs). The latter are a generalization of QBFs, resulting from adding so-called Henkin quantifiers to QBFs. HQSpre applies most of the preprocessing techniques that have been proposed in the literature. It can be used both as a standalone tool and as a library. It is possible to tailor it towards different solver back-ends, e.g., to preserve the circuit structure of the formula when a non-CNF solver back-end is used. Extensive experiments show that HQSpre allows QBF solvers to solve more benchmark instances and is able to decide more instances on its own than state-of-the-art tools. The same impact can be observed in the DQBF domain as well.

Ralf Wimmer, Sven Reimer, Paolo Marin, Bernd Becker
RPP: Automatic Proof of Relational Properties by Self-composition

Self-composition provides a powerful theoretical approach to proving relational properties, i.e., properties relating several program executions, and has been applied to compare two runs of the same or similar programs (in secure dataflow properties, code transformations, etc.). This tool demo paper presents RPP, an original implementation of self-composition for the specification and verification of relational properties of C programs in the Frama-C platform. We consider a very general notion of relational properties invoking any finite number of function calls of possibly dissimilar functions with possible nested calls. The new tool allows the user to specify a relational property, to prove it in a completely automatic way using classic deductive verification, and to use it as a hypothesis in the proof of other properties that may rely on it.

Lionel Blatter, Nikolai Kosmatov, Pascale Le Gall, Virgile Prevosto
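Self-composition itself can be pictured in a few lines: inline two runs of the function with disjoint variables and turn the relational property into a plain assertion about the composed program. The monotonicity property and function below are invented examples; RPP itself works on C programs with ACSL annotations inside Frama-C.

```python
# Toy sketch of self-composition: to check a relational property linking
# two calls, run both calls in one program with disjoint state and state
# the property as an ordinary assertion about that single program.

def f(x):
    return 2 * x + 1

def self_composed(x1, x2):
    """Two runs of f with renamed (disjoint) variables, returning the
    relational assertion 'f is monotone' for this input pair."""
    y1 = f(x1)                         # first run
    y2 = f(x2)                         # second run, separate state
    # Implication encoded via the ordering on booleans: False <= anything.
    return (x1 <= x2) <= (y1 <= y2)

# A deductive verifier would discharge this for all inputs; here we just
# evaluate the composed assertion on a grid of input pairs.
ok = all(self_composed(a, b) for a in range(-5, 5) for b in range(-5, 5))
```

The payoff is that an ordinary single-program verifier can now prove a property that intrinsically talks about two executions.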
autoCode4: Structural Controller Synthesis

autoCode4 synthesizes structured reactive controllers from realizable specifications in the GXW subset of linear temporal logic (LTL). Generated reactive controllers are expressed in terms of an intermediate synchronous dataflow (SDF) format, which is further translated, using an open interface, into SCADE/Lustre and Ptolemy II. Moreover, autoCode4 generates and maintains a traceability relation between individual requirements and generated code blocks, as mandated by current standards for certifying safety-critical control code.

Chih-Hong Cheng, Edward A. Lee, Harald Ruess

Automata

Frontmatter
Lazy Automata Techniques for WS1S

We present a new decision procedure for the logic WS1S. It originates from the classical approach, which first builds an automaton accepting all models of a formula and then tests whether its language is empty. The main novelty is to test the emptiness on the fly, while constructing a symbolic, term-based representation of the automaton, and prune the constructed state space from parts irrelevant to the test. The pruning is done by a generalization of two techniques used in antichain-based language inclusion and universality checking of finite automata: subsumption and early termination. The richer structure of the WS1S decision problem allows us, however, to elaborate on these techniques in novel ways. Our experiments show that the proposed approach can in many cases significantly outperform the classical decision procedure (implemented in the Mona tool) as well as recently proposed alternatives.

Tomáš Fiedor, Lukáš Holík, Petr Janků, Ondřej Lengál, Tomáš Vojnar
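The antichain-with-subsumption ingredient can be sketched independently of WS1S: store only maximal sets, and prune any newly generated macro-state that a stored one subsumes. The sets below are invented, and keeping maximal elements is one common choice; the direction of subsumption depends on the check being performed.

```python
# Toy sketch of an antichain with subsumption pruning, one ingredient of
# antichain-based universality/inclusion checking (here keeping maximal
# sets and pruning subsumed new states).

def insert(antichain, s):
    """Insert frozenset s unless it is subsumed by a stored set;
    drop stored sets that s itself subsumes."""
    if any(s <= t for t in antichain):
        return antichain, False          # pruned: already covered
    kept = [t for t in antichain if not t <= s]
    return kept + [s], True

chain = []
for state in [frozenset({1}), frozenset({1, 2}), frozenset({2})]:
    chain, added = insert(chain, state)
# {1} is later absorbed by {1, 2}, and {2} is pruned on arrival.
```

Pruning subsumed macro-states is what keeps the explored state space small enough for the on-the-fly emptiness test to pay off.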
From LTL and Limit-Deterministic Büchi Automata to Deterministic Parity Automata

Controller synthesis for general linear temporal logic (LTL) objectives is a challenging task. The standard approach involves translating the LTL objective into a deterministic parity automaton (DPA) by means of the Safra-Piterman construction. One of the challenges is the size of the DPA, which often grows very fast in practice, and can reach double exponential size in the length of the LTL formula. In this paper we describe a single exponential translation from limit-deterministic Büchi automata (LDBA) to DPA, and show that it can be concatenated with a recent efficient translation from LTL to LDBA to yield a double exponential, “Safraless” LTL-to-DPA construction. We also report on an implementation, a comparison with the SPOT library, and performance on several sets of formulas, including instances from the 2016 SyntComp competition.

Javier Esparza, Jan Křetínský, Jean-François Raskin, Salomon Sickert
Index Appearance Record for Transforming Rabin Automata into Parity Automata

Transforming deterministic $$\omega $$-automata into deterministic parity automata is traditionally done using variants of appearance records. We present a more efficient variant of this approach, tailored to Rabin automata, and several optimizations applicable to all appearance records. We compare the methods experimentally and find that our method produces smaller automata than previous approaches. Moreover, the experiments demonstrate the potential of our method for LTL synthesis, using LTL-to-Rabin translators. It leads to significantly smaller parity automata when compared to state-of-the-art approaches on complex formulae.

Jan Křetínský, Tobias Meggendorfer, Clara Waldmann, Maximilian Weininger
Minimization of Visibly Pushdown Automata Using Partial Max-SAT

We consider the problem of state-space reduction for nondeterministic weakly-hierarchical visibly pushdown automata (Vpa). Vpa recognize a robust and algorithmically tractable fragment of context-free languages that is natural for modeling programs.

We define an equivalence relation that is sufficient for language-preserving quotienting of Vpa. Our definition allows merging states that have different behavior, as long as they show the same behavior for reachable equivalent stacks. We encode the existence of such a relation as a Boolean partial maximum satisfiability (PMax-Sat) problem and present an algorithm that quickly finds satisfying assignments. These assignments are sub-optimal solutions to the PMax-Sat problem but can still lead to a significant reduction of states.

We integrated our method in the automata-based software verifier Ultimate Automizer and show performance improvements on benchmarks from the software verification competition SV-COMP.

Matthias Heizmann, Christian Schilling, Daniel Tischner

Concurrency and Bisimulation

Frontmatter
CSimpl: A Rely-Guarantee-Based Framework for Verifying Concurrent Programs

In concurrent program verification it is essential to deal with the interference of the environment on the program being verified. This has led to the development of concurrent reasoning techniques such as rely-guarantee. However, the source code of the programs to be verified often involves language features such as exceptions and procedures that are not supported by existing mechanizations of those techniques. Schirmer et al. solved a similar problem for sequential programs by developing a verification framework in the Isabelle/HOL theorem prover called Simpl, which provides a rich sequential language that can encode most features of real-world programming languages. However, Simpl aims only at verifying sequential programs; it supports neither the specification nor the verification of concurrent programs. In this paper we introduce CSimpl, an extension of Simpl with concurrency-oriented language features and verification techniques. We prove compositionality of the CSimpl semantics and provide inference rules for the language constructs to reason about CSimpl programs using rely-guarantee, showing that the inference rules are sound w.r.t. the language semantics. Finally, we present a case study in which we use CSimpl to specify and prove functional correctness of an abstract communication model of the XtratuM partitioning separation micro-kernel.

David Sanán, Yongwang Zhao, Zhe Hou, Fuyuan Zhang, Alwen Tiu, Yang Liu
Fair Termination for Parameterized Probabilistic Concurrent Systems

We consider the problem of automatically verifying that a parameterized family of probabilistic concurrent systems terminates with probability one for all instances against adversarial schedulers. A parameterized family defines an infinite-state system: for each number n, the family contains an instance with n finite-state processes. In contrast to safety, the parameterized verification of liveness is still considered extremely challenging, especially in the presence of probabilities in the model. One major challenge is to provide a sufficiently powerful symbolic framework. One well-known symbolic framework for the parameterized verification of non-probabilistic concurrent systems is regular model checking. Although the framework was recently extended to probabilistic systems, incorporating fairness (often crucial for verifying termination) has been especially difficult due to the presence of an infinite number of fairness constraints, one for each process. Our main contribution is a systematic, regularity-preserving encoding of finitary fairness (a realistic notion of fairness proposed by Alur and Henzinger) in the framework of regular model checking for probabilistic parameterized systems. Our encoding reduces termination under finitary fairness to parameterized termination without fairness over probabilistic systems in regular model checking, for which a verification framework already exists. We show that our algorithm can verify termination for many interesting examples from distributed algorithms (Herman's protocol) and evolutionary biology (Moran process, cell cycle switch) for which termination does not hold under the standard notion of fairness. To the best of our knowledge, our algorithm is the first fully automatic method that can prove termination for these examples.

Ondřej Lengál, Anthony W. Lin, Rupak Majumdar, Philipp Rümmer
Forward Bisimulations for Nondeterministic Symbolic Finite Automata

Symbolic automata allow transitions to carry predicates over rich alphabet theories, such as linear arithmetic, and therefore extend classic automata to operate over infinite alphabets, such as the set of rational numbers. Existing automata algorithms rely on the alphabet being finite, and generalizing them to the symbolic setting is not a trivial task. In our earlier work, we proposed new techniques for minimizing deterministic symbolic automata and, in this paper, we generalize these techniques and study the foundational problem of computing forward bisimulations of nondeterministic symbolic finite automata. We propose three algorithms. Our first algorithm generalizes Moore’s algorithm for minimizing deterministic automata. Our second algorithm generalizes Hopcroft’s algorithm for minimizing deterministic automata. Since the first two algorithms have quadratic complexity in the number of states and transitions in the automaton, we propose a third algorithm that only requires a number of iterations that is linearithmic in the number of states and transitions at the cost of an exponential worst-case complexity in the number of distinct predicates appearing in the automaton. We implement our algorithms and evaluate them on 3,625 nondeterministic symbolic automata from real-world applications.
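The finite-alphabet baseline that the paper's first algorithm generalizes is Moore's partition refinement: start from the accepting/non-accepting split and repeatedly refine by successor blocks until stable. A textbook sketch for a complete DFA (not the symbolic version, and the names are our own):

```python
def moore_partition(states, alphabet, delta, accepting):
    """Return the coarsest partition of `states` in which equivalent
    states agree on acceptance and on the block of every successor.
    `delta` maps (state, symbol) to the successor state."""
    # Initial split: accepting vs. non-accepting.
    block = {q: (q in accepting) for q in states}
    while True:
        # A state's refined signature: its block plus its successors' blocks.
        sig = {q: (block[q], tuple(block[delta[q, a]] for a in alphabet))
               for q in states}
        # Stable once refinement no longer increases the number of blocks.
        if len(set(sig.values())) == len(set(block.values())):
            return sig
        block = sig

# DFA over {a}: states 0 and 1 both step to accepting sink 2, so they merge.
delta = {(0, 'a'): 2, (1, 'a'): 2, (2, 'a'): 2}
part = moore_partition([0, 1, 2], ['a'], delta, {2})
```

In the symbolic setting, the tuple of successor blocks per symbol is replaced by reasoning over the transition predicates, which is exactly where the generalization effort lies.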

Loris D’Antoni, Margus Veanes
Up-To Techniques for Weighted Systems

We show how up-to techniques for (bi-)similarity can be used in the setting of weighted systems. The problems we consider are language equivalence, language inclusion, and the threshold problem (also known as the universality problem) for weighted automata. We build a bisimulation relation on the fly, working up to congruence and up to similarity. This requires determining whether a pair of vectors (over a semiring) is in the congruence closure of a given relation on vectors. We consider this problem for rings and l-monoids; for the latter we provide a rewriting algorithm and show its confluence and termination. We then explain how to apply these up-to techniques to weighted automata and provide runtime results.
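Over a field such as the rationals, the congruence-closure membership test reduces to linear algebra: a pair (u, v) is in the congruence closure of a relation R iff u − v lies in the linear span of the differences x − y for (x, y) in R. A stdlib-only sketch of that span check with exact arithmetic (our own illustration of the ring/field case; the paper's rewriting algorithm for l-monoids is different):

```python
from fractions import Fraction

def in_span(vec, basis):
    """Decide whether `vec` is a linear combination of the `basis`
    vectors, by incremental Gaussian elimination over the rationals."""
    def reduce(v, echelon):
        # Eliminate v's coordinate at each recorded pivot position.
        v = list(v)
        for piv, row in echelon:
            if v[piv] != 0:
                c = v[piv] / row[piv]
                v = [a - c * b for a, b in zip(v, row)]
        return v

    echelon = []  # pairs (pivot_index, reduced_row), pivots all distinct
    for b in basis:
        r = reduce([Fraction(x) for x in b], echelon)
        piv = next((j for j, x in enumerate(r) if x != 0), None)
        if piv is not None:
            echelon.append((piv, r))
    # `vec` is in the span iff it reduces to the zero vector.
    return all(x == 0 for x in reduce([Fraction(x) for x in vec], echelon))

# (1,1,2) = (1,0,1) + (0,1,1) is in the span; (1,1,1) is not.
```

For up-to-congruence, `basis` would hold the differences of already-related vector pairs, and a candidate pair is skipped whenever its difference is already in the span.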

Filippo Bonchi, Barbara König, Sebastian Küpper

Hybrid Systems

Frontmatter
Rigorous Simulation-Based Analysis of Linear Hybrid Systems

Design analysis of Cyber-Physical Systems (CPS) with complex continuous and discrete behaviors in practice relies heavily on numerical simulations. While useful for evaluation and debugging, such analysis is often incomplete owing to the nondeterminism of the discrete transitions and the uncountability of the continuous space. In this paper, we present a precise notion of simulation for CPS called simulation-equivalent reachability, which includes all the states that can be reached by any simulation. While this notion is weaker than traditional reachability, we present a technique that computes simulation-equivalent reachability in an efficient, scalable, and theoretically sound and complete manner. To achieve this, we describe two improvements, namely invariant constraint propagation for handling invariants and on-demand successor deaggregation for handling discrete transitions. We use our tool implementation of the approach, HyLAA (Hybrid Linear Automata Analyzer), to evaluate the improvements, and demonstrate computing the simulation-equivalent reachable set for a replicated helicopter system with over 1000 dimensions in about 10 minutes.
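The "simulations" being captured are fixed-step numerical simulations of the continuous dynamics. A minimal sketch of one such simulation for linear dynamics x' = Ax, using a forward-Euler discretization on a hypothetical two-dimensional system (HyLAA itself works differently, e.g. with exact matrix exponentials; this only illustrates the kind of trace the reachability notion is defined over):

```python
def simulate(A, x0, step, n_steps):
    """Forward-Euler simulation of x' = A x from initial state x0,
    returning the list of visited states (the simulation trace)."""
    n = len(x0)
    trace = [list(x0)]
    x = list(x0)
    for _ in range(n_steps):
        # dx = A x, componentwise.
        dx = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [xi + step * di for xi, di in zip(x, dx)]
        trace.append(list(x))
    return trace

# Harmonic oscillator x' = y, y' = -x, started at (1, 0).
trace = simulate([[0, 1], [-1, 0]], [1.0, 0.0], 0.01, 100)
```

Simulation-equivalent reachability then asks for exactly the states visited by some trace of this form, rather than all states reachable under arbitrary dense-time evolution.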

Stanley Bak, Parasara Sridhar Duggirala
HARE: A Hybrid Abstraction Refinement Engine for Verifying Non-linear Hybrid Automata

HARE (Hybrid Abstraction-Refinement Engine) is a counterexample-guided abstraction-refinement (CEGAR) based tool to verify safety properties of hybrid automata whose continuous dynamics in each mode are non-linear, but whose initial values, invariants, and transition relations are specified using polyhedral constraints. HARE works by abstracting non-linear hybrid automata into hybrid automata with polyhedral inclusion dynamics, and uses an existing reachability tool to validate counterexamples. We show that the CEGAR framework forming the theoretical basis of HARE makes provable progress in each iteration of the abstraction-refinement loop. The current tool is a significant advance on previous versions of HARE: it considers a richer class of abstract models (polyhedral flows as opposed to rectangular flows) and can be applied to a larger class of concrete models (non-linear hybrid automata as opposed to affine hybrid automata). These advances have led to better performance results for a wider class of examples. We report an experimental comparison of HARE against other state-of-the-art tools for affine models and for non-linear models.

Nima Roohi, Pavithra Prabhakar, Mahesh Viswanathan
Counterexample-Guided Refinement of Template Polyhedra

Template polyhedra generalize intervals and octagons to polyhedra whose facets are orthogonal to a given set of arbitrary directions. They have been employed in the abstract interpretation of programs and, with particular success, in the reachability analysis of hybrid automata. Whereas previously the choice of directions was left to the user or a heuristic, we present a method for the automatic discovery of directions that generalize and eliminate spurious counterexamples. We show that for the class of convex hybrid automata, i.e., hybrid automata with (possibly nonlinear) convex constraints on derivatives, such directions always exist and can be found using convex optimization. We embed our method inside a CEGAR loop, thereby enabling time-unbounded reachability analysis for a richer class of hybrid automata than was previously possible. We evaluate our method on several benchmarks, also demonstrating its superior efficiency for the special case of linear hybrid automata.
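A template polyhedron is fixed by its direction set D: for each d in D one stores the support value, the maximum of ⟨d, x⟩ over the set being abstracted, and the polyhedron is the intersection of the half-spaces ⟨d, x⟩ ≤ bound. A minimal sketch for a set given by its vertices, using the classic octagon directions (illustrative only; the paper's contribution is discovering the directions automatically, which this sketch does not do):

```python
def template_polyhedron(vertices, directions):
    """For each template direction d, compute the support value
    max over the vertices of <d, v>. The resulting template polyhedron
    is the intersection of the half-spaces <d, x> <= bound."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [max(dot(d, v) for v in vertices) for d in directions]

# The eight octagon directions in 2D, bounding the triangle
# with vertices (0,0), (2,0), (0,2).
dirs = [(1, 0), (-1, 0), (0, 1), (0, -1),
        (1, 1), (1, -1), (-1, 1), (-1, -1)]
bounds = template_polyhedron([(0, 0), (2, 0), (0, 2)], dirs)
```

Refining the abstraction then amounts to adding new directions to `dirs`, which is where the counterexample-guided discovery of the paper comes in.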

Sergiy Bogomolov, Goran Frehse, Mirco Giacobbe, Thomas A. Henzinger
Backmatter
Metadata
Title
Tools and Algorithms for the Construction and Analysis of Systems
Edited by
Axel Legay
Tiziana Margaria
Copyright year
2017
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-54577-5
Print ISBN
978-3-662-54576-8
DOI
https://doi.org/10.1007/978-3-662-54577-5