
2016 | Book

Verification, Model Checking, and Abstract Interpretation

17th International Conference, VMCAI 2016, St. Petersburg, FL, USA, January 17-19, 2016. Proceedings

Edited by: Barbara Jobstmann, K. Rustan M. Leino

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 17th International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2016, held in St. Petersburg, FL, USA, in January 2016. The 24 full papers presented, together with 2 invited talks and 1 abstract, were carefully reviewed and selected from 67 submissions. VMCAI covers topics including: program verification, model checking, abstract interpretation and abstract domains, program synthesis, static analysis, type systems, deductive methods, program certification, debugging techniques, program transformation, optimization, and hybrid and cyber-physical systems.

Table of Contents

Frontmatter

Invited Talks

Frontmatter
Automating Abstract Interpretation
Abstract
Abstract interpretation has a reputation of being a kind of “black art,” and consequently difficult to work with. This paper describes a twenty-year quest by the first author to address this issue by raising the level of automation in abstract interpretation. The most recent leg of this journey is the subject of the second author’s 2014 Ph.D. dissertation. The paper discusses several different approaches to creating correct-by-construction analyzers. Our research has allowed us to establish connections between this problem and several other areas of computer science, including automated reasoning/decision procedures, concept learning, and constraint programming.
Thomas Reps, Aditya Thakur
Viper: A Verification Infrastructure for Permission-Based Reasoning
Abstract
The automation of verification techniques based on first-order logic specifications has benefitted greatly from verification infrastructures such as Boogie and Why. These offer an intermediate language that can express diverse language features and verification techniques, as well as back-end tools: in particular, verification condition generators.
However, these infrastructures are not well suited to verification techniques based on separation logic and other permission logics, because they do not provide direct support for permissions and because existing tools for these logics often favour symbolic execution over verification condition generation. Consequently, tool support for these logics (where available) is typically developed independently for each technique, dramatically increasing the burden of developing automatic tools for permission-based verification.
In this paper, we present a verification infrastructure whose intermediate language supports an expressive permission model natively. We provide tool support including two back-end verifiers: one based on symbolic execution, and one on verification condition generation; an inference tool based on abstract interpretation is currently under development. A wide range of existing verification techniques can be implemented via this infrastructure, alleviating much of the burden of building permission-based verifiers, and allowing the developers of higher-level reasoning techniques to focus their efforts at an appropriate level of abstraction.
Peter Müller, Malte Schwerhoff, Alexander J. Summers

Abstract Interpretation

Frontmatter
Predicate Abstraction for Linked Data Structures
Abstract
We present Alias Refinement Types (Art), a new approach that uses predicate abstraction to automate the verification of correctness properties of linked data structures. While there are many techniques for checking that a heap-manipulating program adheres to its specification, they often require that the programmer annotate the behavior of each procedure, for example, in the form of loop invariants and pre- and post-conditions. We introduce a technique that lifts predicate abstraction to the heap by factoring the analysis of data structures into two orthogonal components: (1) Alias Types, which reason about the physical shape of heap structures, and (2) Refinement Types, which use simple predicates from an SMT-decidable theory to capture the logical or semantic properties of the structures. We evaluate Art by implementing a tool that performs type inference for an imperative language, and empirically show, using a suite of data-structure benchmarks, that Art requires only 21% of the annotations needed by other state-of-the-art verification techniques.
Alexander Bakst, Ranjit Jhala
An Abstract Domain of Uninterpreted Functions
Abstract
We revisit relational static analysis of numeric variables. Such analyses face two difficulties. First, even inexpensive relational domains scale too poorly to be practical for large code-bases. Second, to remain tractable they have extremely coarse handling of non-linear relations. In this paper, we introduce the subterm domain, a weakly relational abstract domain for inferring equivalences amongst sub-expressions, based on the theory of uninterpreted functions. This provides an extremely cheap approach for enriching non-relational domains with relational information, and enhances precision of both relational and non-relational domains in the presence of non-linear operations. We evaluate the idea in the context of the software verification tool SeaHorn.
Graeme Gange, Jorge A. Navas, Peter Schachte, Harald Søndergaard, Peter J. Stuckey
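The subterm domain treats non-linear operations as uninterpreted function applications, so equal subterms can be identified syntactically. A minimal sketch of this idea (illustrative value numbering, not the paper's implementation; all names here are hypothetical) is:

```python
# Sketch: variables bound to structurally identical terms are inferred
# equal, even when the operator (e.g. '*') is non-linear, because terms
# are treated as uninterpreted function applications.

class SubtermDomain:
    def __init__(self):
        self._numbers = {}   # term -> value number (hash-consing table)
        self._env = {}       # variable -> value number

    def _number(self, term):
        # identical terms share one value number
        if term not in self._numbers:
            self._numbers[term] = len(self._numbers)
        return self._numbers[term]

    def _arg(self, a):
        # arguments are variables (str) or constants
        if isinstance(a, str):
            if a not in self._env:
                self._env[a] = self._number(('var', a))
            return self._env[a]
        return self._number(('const', a))

    def assign(self, var, op, *args):
        nums = tuple(self._arg(a) for a in args)
        self._env[var] = self._number((op, nums))

    def equal(self, x, y):
        return x in self._env and self._env.get(x) == self._env.get(y)

d = SubtermDomain()
d.assign('x', '*', 'a', 'b')   # x = a * b
d.assign('y', '*', 'a', 'b')   # y = a * b  -> same term as x
d.assign('z', '*', 'a', 'c')   # z = a * c  -> different term
print(d.equal('x', 'y'))  # True
print(d.equal('x', 'z'))  # False
```

A non-relational base domain can then propagate, say, an interval known for `x` to `y`, which it could not do on its own across the non-linear multiplication.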
Property Directed Abstract Interpretation
Abstract
Recently, Bradley proposed the PDR/IC3 model checking algorithm for verifying safety properties, where forward and backward reachability analyses are intertwined, and guide each other. Many variants of Bradley’s original algorithm have been developed and successfully applied to both hardware and software verification. However, these algorithms have been presented in an operational manner, in disconnect with the rich literature concerning the theoretical foundation of static analysis formulated by abstract interpretation.
Inspired by PDR, we develop a nonstandard semantics which computes, for every \(N \ge 0\), an over-approximation of the set of traces of length N leading to a safety violation. The over-approximation is precise, in the sense that it only includes traces that do not start at an initial state, unless the program is unsafe, and in this case the semantics aborts at a special error state. In a way, the semantics computes multiple over-approximations of bounded unsafe program behaviors using a sequence of abstractions whose precision grows automatically with N.
We show that existing PDR algorithms can be described as a specific implementation of our semantics, performing an abstract interpretation of the program, but instead of aiming for a fixpoint, they stop early when either the backward analysis finds a counterexample or the states comprising one of the bounded traces provides sufficient evidence that the program is safe. This places PDR within the solid framework of abstract interpretation, and thus provides a unified explanation of the different PDR algorithms as well as a new proof of their soundness.
Noam Rinetzky, Sharon Shoham

Abstraction

Frontmatter
Program Analysis with Local Policy Iteration
Abstract
We present local policy iteration (LPI), a new algorithm for deriving numerical invariants that combines the precision of max-policy iteration with the flexibility and scalability of conventional Kleene iterations. It is defined in the Configurable Program Analysis (CPA) framework, thus allowing inter-analysis communication.
LPI uses adjustable-block encoding in order to traverse loop-free program sections, possibly containing branching, without introducing extra abstraction. Our technique operates over any template linear constraint domain, including the interval and octagon domains; templates can also be derived from the program source.
The implementation is evaluated on a set of benchmarks from the International Competition on Software Verification (SV-COMP). It competes favorably with state-of-the-art analyzers.
Egor George Karpenkov, David Monniaux, Philipp Wendler
Lazy Constrained Monotonic Abstraction
Abstract
We introduce Lazy Constrained Monotonic Abstraction (lazy CMA for short) for lazily and soundly exploring well structured abstractions of infinite state non-monotonic systems. CMA makes use of infinite state and well structured abstractions by forcing monotonicity with respect to refinable orderings. The orderings can be refined based on obtained false positives in a CEGAR-like fashion. This allows for the verification of systems that are not monotonic and are hence inherently beyond the reach of classical analysis based on the theory of well structured systems. In this paper, we consistently improve on the existing approach by localizing refinements and by avoiding discarding the explored state space each time a refinement step is required for the ordering. To this end, we adapt ideas from classical lazy predicate abstraction and explain how we address the fact that the number of control points (i.e., minimal elements to be visited) is a priori unbounded. This is unlike the case of plain lazy abstraction, which relies on the fact that the number of control locations is finite. We propose several heuristics and report on our experiments using our open source prototype. We consider both backward and forward explorations on non-monotonic systems automatically derived from concurrent programs. Intuitively, the approach could be regarded as using refinable upward closure operators as localized widening operators for an a priori arbitrary number of control points.
Zeinab Ganjei, Ahmed Rezine, Petru Eles, Zebo Peng
Polyhedral Approximation of Multivariate Polynomials Using Handelman’s Theorem
Abstract
Convex polyhedra are commonly used in the static analysis of programs to represent over-approximations of sets of reachable states of numerical program variables. When the analyzed programs contain nonlinear instructions, they do not directly map to standard polyhedral operations: some kind of linearization is needed. Convex polyhedra are also used in satisfiability modulo theory solvers which combine a propositional satisfiability solver with a fast emptiness check for polyhedra. Existing decision procedures become expensive when nonlinear constraints are involved: a fast procedure to ensure emptiness of systems of nonlinear constraints is needed. We present a new linearization algorithm based on Handelman's representation of positive polynomials. Given a polyhedron and a polynomial (in)equality, we compute a polyhedron enclosing their intersection as the solution of a parametric linear programming problem. To get a scalable algorithm, we provide several heuristics that guide the construction of the Handelman representation. To ensure the correctness of our polyhedral approximation, our OCaml implementation generates certificates verified by a checker certified in Coq.
Alexandre Maréchal, Alexis Fouilhé, Tim King, David Monniaux, Michael Périn
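For reference, Handelman's theorem (stated here in a common textbook form, not necessarily the exact formulation used in the paper) says that a polynomial \(p\) strictly positive on a compact polytope \(\{x : g_1(x) \ge 0, \dots, g_m(x) \ge 0\}\) with linear \(g_i\) admits a representation as a nonnegative combination of products of the constraints:

```latex
p(x) \;=\; \sum_{\alpha \in \mathbb{N}^m} \lambda_\alpha \,
            g_1(x)^{\alpha_1} \cdots g_m(x)^{\alpha_m},
\qquad \lambda_\alpha \ge 0,
```

with only finitely many \(\lambda_\alpha\) nonzero. Linearization then amounts to searching for such coefficients up to a bounded product degree, which yields the parametric linear programming problem mentioned in the abstract.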
\(D^3\): Data-Driven Disjunctive Abstraction
Abstract
We address the problem of computing an abstraction for a set of examples, which is precise enough to separate them from a set of counterexamples. The challenge is to find an over-approximation of the positive examples that does not represent any negative example. Conjunctive abstractions (e.g., convex numerical domains) and limited disjunctive abstractions are often insufficient, as even the best such abstraction might include negative examples. One way to improve precision is to consider a general disjunctive abstraction.
We present \(D^3\), a new algorithm for learning general disjunctive abstractions. Our algorithm is inspired by widely used machine-learning algorithms for obtaining a classifier from positive and negative examples. In contrast to these algorithms which cannot generalize from disjunctions, \(D^3\) obtains a disjunctive abstraction that minimizes the number of disjunctions. The result generalizes the positive examples as much as possible without representing any of the negative examples. We demonstrate the value of our algorithm by applying it to the problem of data-driven differential analysis, computing the abstract semantic difference between two programs. Our evaluation shows that \(D^3\) can be used to effectively learn precise differences between programs even when the difference requires a disjunctive representation.
Hila Peleg, Sharon Shoham, Eran Yahav
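A toy one-dimensional sketch of the underlying idea (cover the positive examples with as few pieces as possible while excluding every negative example) can be written as a greedy interval merge; this is purely illustrative, since \(D^3\) itself works over richer abstract domains and minimizes disjunctions more carefully:

```python
# Toy sketch: cover positive points with a minimal-looking set of
# intervals (disjuncts), extending an interval only while it stays
# free of negative points.

def disjunctive_intervals(pos, neg):
    intervals = []
    for p in sorted(pos):
        if intervals:
            lo, hi = intervals[-1]
            # extend the last disjunct if no negative example falls inside
            if not any(lo <= n <= p for n in neg):
                intervals[-1] = (lo, p)
                continue
        intervals.append((p, p))  # start a new disjunct
    return intervals

print(disjunctive_intervals([1, 2, 8, 9], [5]))  # [(1, 2), (8, 9)]
```

The negative example at 5 forces a split into two disjuncts; a single convex interval covering all positives would have included it.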
Exact Heap Summaries for Symbolic Execution
Abstract
A recent trend in the analysis of object-oriented programs is the modeling of references as sets of guarded values, enabling multiple heap shapes to be represented in a single state. A fundamental problem with using these guarded value sets is the inability to generate test inputs in a manner similar to symbolic execution based analyses. Although several solutions have been proposed, none have been proven to be sound and complete with respect to the heap properties provable by generalized symbolic execution (GSE). This work presents a method for initializing input references in a symbolic input heap using guarded value sets that exactly preserves GSE semantics. A correctness proof for the initialization scheme is provided with a proof-of-concept implementation. Results from an empirical evaluation on a common set of GSE data structure benchmarks show an increase in the size and number of analyzed heaps over existing GSE representations. The initialization technique can be used to ensure that guarded value set based symbolic execution engines operate in a provably correct manner with regard to symbolic references, as well as provide the ability to generate concrete heaps that serve as test inputs to the program.
Benjamin Hillery, Eric Mercer, Neha Rungta, Suzette Person

Hybrid and Timed Systems

Frontmatter
Abstract Interpretation with Infinitesimals
Towards Scalability in Nonstandard Static Analysis
Abstract
We extend abstract interpretation for the purpose of verifying hybrid systems. Abstraction has been playing an important role in many verification methodologies for hybrid systems, but some special care is needed for abstraction of continuous dynamics defined by ODEs. We apply Cousot and Cousot’s framework of abstract interpretation to hybrid systems, almost as it is, by regarding continuous dynamics as an infinite iteration of infinitesimal discrete jumps. This extension follows the recent line of work by Suenaga, Hasuo and Sekine, where deductive verification is extended for hybrid systems by (1) introducing a constant \(\mathtt {dt}\) for an infinitesimal value; and (2) employing Robinson’s nonstandard analysis (NSA) to define mathematically rigorous semantics. Our theoretical results include soundness and termination via uniform widening operators; and our prototype implementation successfully verifies some benchmark examples.
Kengo Kido, Swarat Chaudhuri, Ichiro Hasuo
Lipschitz Robustness of Timed I/O Systems
Abstract
We present the first study of robustness of systems that are both timed as well as reactive (I/O). We study the behavior of such timed I/O systems in the presence of uncertain inputs and formalize their robustness using the analytic notion of Lipschitz continuity: a timed I/O system is K-(Lipschitz) robust if the perturbation in its output is at most K times the perturbation in its input. We quantify input and output perturbation using similarity functions over timed words such as the timed version of the Manhattan distance and the Skorokhod distance. We consider two models of timed I/O systems — timed transducers and asynchronous sequential circuits. We show that K-robustness of timed transducers can be decided in polynomial space under certain conditions. For asynchronous sequential circuits, we reduce K-robustness w.r.t. timed Manhattan distances to K-robustness of discrete letter-to-letter transducers and show PSpace-completeness of the problem.
Thomas A. Henzinger, Jan Otop, Roopsha Samanta
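The robustness notion in the abstract can be written out directly. With \(d_{\mathrm{in}}\) and \(d_{\mathrm{out}}\) the chosen similarity functions over timed words (e.g., the timed Manhattan or Skorokhod distance; the notation here is a sketch, not the paper's), a timed I/O system \(T\) is K-(Lipschitz) robust when

```latex
d_{\mathrm{out}}\big(T(w),\, T(w')\big) \;\le\; K \cdot d_{\mathrm{in}}(w, w')
\qquad \text{for all input timed words } w, w'.
```

This is the analytic Lipschitz-continuity condition instantiated with distances on timed words: bounded input perturbation guarantees proportionally bounded output perturbation.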
A Method for Invariant Generation for Polynomial Continuous Systems
Abstract
This paper presents a method for generating semi-algebraic invariants for systems governed by non-linear polynomial ordinary differential equations under semi-algebraic evolution constraints. Based on the notion of discrete abstraction, our method eliminates unsoundness and unnecessary coarseness found in existing approaches for computing abstractions for non-linear continuous systems and is able to construct invariants with intricate boolean structure, in contrast to invariants typically generated using template-based methods. In order to tackle the state explosion problem associated with discrete abstraction, we present invariant generation algorithms that exploit sound proof rules for safety verification, such as differential cut (\(\mathrm{DC}\)), and a new proof rule that we call differential divide-and-conquer (\(\mathrm{DDC}\)), which splits the verification problem into smaller sub-problems. The resulting invariant generation method is observed to be much more scalable and efficient than the naïve approach, exhibiting orders of magnitude performance improvement on many of the problems.
Andrew Sogokon, Khalil Ghorbal, Paul B. Jackson, André Platzer

Dynamic and Static Verification

Frontmatter
Hybrid Analysis for Partial Order Reduction of Programs with Arrays
Abstract
An important component of efficient approaches to software model checking and systematic concurrency testing is partial order reduction, which eliminates redundant non-deterministic thread scheduling choices during the state space traversal. Thread choices have to be created only at the execution of actions that access the global state visible to multiple threads, so the key challenge is to precisely determine the set of such globally-relevant actions. This includes accesses to object fields and array elements, and thread synchronization.
However, some tools completely disable thread choices at actions that access individual array elements in order to avoid state explosion. We show that they can miss concurrency errors in such a case. Then, as the main contribution, we present a new hybrid analysis that identifies globally-relevant actions that access arrays. Our hybrid analysis combines static analysis with dynamic analysis, usage of information from dynamic program states, and symbolic interpretation of program statements. Results of experiments with two popular approaches to partial order reduction show that usage of the hybrid analysis (1) eliminates many additional redundant thread choices and (2) improves the performance of software model checking on programs that use arrays.
Pavel Parízek
Cloud-Based Verification of Concurrent Software
Abstract
Logic model checkers are unparalleled in their ability to reveal subtle bugs in multi-threaded software systems. The underlying verification procedure is based on a systematic search of potentially faulty system behaviors, which can be computationally expensive for larger problem sizes. In this paper we consider whether it is possible to significantly reduce the runtime requirements of a verification with cloud computing techniques. We explore the use of large numbers of CPU cores that each perform small, fast, independent, and randomly different searches to achieve the same problem coverage as a much slower stand-alone run on a single CPU. We present empirical results to demonstrate what is achievable.
Gerard J. Holzmann
Abstraction-driven Concolic Testing
Abstract
Concolic testing is a promising method for generating test suites for large programs. However, it suffers from the path-explosion problem and often fails to find tests that cover difficult-to-reach parts of programs. In contrast, model checkers based on counterexample-guided abstraction refinement explore programs exhaustively, but fail to scale to large programs with precision. In this paper, we present a novel method that iteratively combines concolic testing and model checking to find a test suite for a given coverage criterion. If concolic testing fails to cover some test goals, then the model checker refines its program abstraction to prove more paths infeasible, which reduces the search space for concolic testing. We have implemented our method on top of the concolic-testing tool Crest and the model checker CpaChecker. We evaluated our tool on a collection of programs and a category of SvComp benchmarks. In our experiments, we observed an improvement in branch coverage compared to Crest from \(48\,\%\) to \(63\,\%\) in the best case, and from \(66\,\%\) to \(71\,\%\) on average.
Przemysław Daca, Ashutosh Gupta, Thomas A. Henzinger

Probabilistic Systems

Frontmatter
Reward-Bounded Reachability Probability for Uncertain Weighted MDPs
Abstract
In this paper we present a decision algorithm for computing maximal/minimal reward-bounded reachability probabilities in weighted MDPs with uncertainties. Even though an uncertain weighted MDP (\(\textit{UwMDP}\)) represents an equivalent weighted MDP which may be exponentially larger, our algorithm does not cause an exponential blow-up and will terminate in polynomial time with respect to the size of \(\textit{UwMDP}\)s. We also define bisimulation relations for \(\textit{UwMDP}\)s, which are compositional and can be decided in polynomial time as well. We develop a prototype tool and apply it to some case studies to show its effectiveness.
Vahid Hashemi, Holger Hermanns, Lei Song
Parameter Synthesis for Parametric Interval Markov Chains
Abstract
Interval Markov Chains (IMCs) are the base of a classic probabilistic specification theory introduced by Larsen and Jonsson in 1991. They are also a popular abstraction for probabilistic systems. In this paper we study parameter synthesis for a parametric extension of Interval Markov Chains in which the endpoints of intervals may be replaced with parameters. In particular, we propose constructions for the synthesis of all parameter values ensuring several properties such as consistency and consistent reachability in both the existential and universal settings with respect to implementations. We also discuss how our constructions can be modified in order to synthesise all parameter values ensuring other typical properties.
Benoît Delahaye, Didier Lime, Laure Petrucci

Concurrent Programs

Frontmatter
Pointer Race Freedom
Abstract
We propose a novel notion of pointer race for concurrent programs manipulating a shared heap. A pointer race is an access to a memory address which was freed, where it is out of the accessor's control whether or not the cell has been re-allocated. We establish two results. (1) Under the assumption of pointer race freedom, it is sound to verify a program running under explicit memory management as if it were running with garbage collection. (2) Even the requirement of pointer race freedom itself can be verified under the garbage-collected semantics. We then prove analogues of the theorems for a stronger notion of pointer race needed to cope with performance-critical code purposely using racy comparisons and even racy dereferences of pointers. As a practical contribution, we apply our results to optimize a thread-modular analysis under explicit memory management. Our experiments confirm a speed-up of up to two orders of magnitude.
Frédéric Haziza, Lukáš Holík, Roland Meyer, Sebastian Wolff
A Program Logic for C11 Memory Fences
Abstract
We describe a simple, but powerful, program logic for reasoning about C11 relaxed accesses used in conjunction with release and acquire memory fences. Our logic, called fenced separation logic (FSL), extends relaxed separation logic with special modalities for describing state that has to be protected by memory fences. Like its precursor, FSL allows ownership transfer over synchronizations and can be used to verify the message-passing idiom and other similar programs. The soundness of FSL has been established in Coq.
Marko Doko, Viktor Vafeiadis
From Low-Level Pointers to High-Level Containers
Abstract
We propose a method that transforms a C program manipulating containers using low-level pointer statements into an equivalent program where the containers are manipulated via calls of standard high-level container operations like push_back or pop_front. The input of our method is a C program annotated by a special form of shape invariants which can be obtained from current automatic shape analysers after a slight modification. The resulting program where the low-level pointer statements are summarized into high-level container operations is more understandable and (among other possible benefits) better suitable for program analysis. We have implemented our approach and successfully tested it through a number of experiments with list-based containers, including experiments with simplification of program analysis by separating shape analysis from analysing data-related properties.
Kamil Dudka, Lukáš Holík, Petr Peringer, Marek Trtík, Tomáš Vojnar

Parameterized and Component-Based Systems

Frontmatter
Regular Symmetry Patterns
Abstract
Symmetry reduction is a well-known approach for alleviating the state explosion problem in model checking. Automatically identifying symmetries in concurrent systems, however, is computationally expensive. We propose a symbolic framework for capturing symmetry patterns in parameterised systems (i.e. an infinite family of finite-state systems): two regular word transducers to represent, respectively, parameterised systems and symmetry patterns. The framework subsumes various types of “symmetry relations” ranging from weaker notions (e.g. simulation preorders) to the strongest notion (i.e. isomorphisms). Our framework enjoys two algorithmic properties: (1) symmetry verification: given a transducer, we can automatically check whether it is a symmetry pattern of a given system, and (2) symmetry synthesis: we can automatically generate a symmetry pattern for a given system in the form of a transducer. Furthermore, our symbolic language allows additional constraints that the symmetry patterns need to satisfy to be easily incorporated in the verification/synthesis. We show how these properties can help identify symmetry patterns in examples like dining philosopher protocols, self-stabilising protocols, and prioritised resource-allocator protocol. In some cases (e.g. Gries’s coffee can problem), our technique automatically synthesises a safety-preserving finite approximant, which can then be verified for safety solely using a finite-state model checker.
Anthony W. Lin, Truong Khanh Nguyen, Philipp Rümmer, Jun Sun
Tight Cutoffs for Guarded Protocols with Fairness
Abstract
Guarded protocols were introduced in a seminal paper by Emerson and Kahlon (2000), and describe systems of processes whose transitions are enabled or disabled depending on the existence of other processes in certain local states. We study parameterized model checking and synthesis of guarded protocols, both aiming at formal correctness arguments for systems with any number of processes. Cutoff results reduce reasoning about systems with an arbitrary number of processes to systems of a determined, fixed size. Our work stems from the observation that existing cutoff results for guarded protocols (i) are restricted to closed systems, and (ii) are of limited use for liveness properties because reductions do not preserve fairness. We close these gaps and obtain new cutoff results for open systems with liveness properties under fairness assumptions. Furthermore, we obtain cutoffs for the detection of global and local deadlocks, which are of paramount importance in synthesis. Finally, we prove tightness or asymptotic tightness for the new cutoffs.
Simon Außerlechner, Swen Jacobs, Ayrat Khalimov
A General Modular Synthesis Problem for Pushdown Systems
Abstract
The modular synthesis from a library of components (Lms) asks to compose a recursive state machine satisfying a given specification, by modularly controlling a finite set of component instances taken from the library. It combines and subsumes two synthesis problems studied in the literature: the synthesis from libraries of recursive components and the modular strategy synthesis. We consider standard specifications such as reachability and safety (expressed by finite automata), as well as visibly pushdown automata specifications, and show that for all these classes of specifications the Lms problem is EXPTIME-complete.
Ilaria De Crescenzo, Salvatore La Torre

Solver Improvements

Frontmatter
Model Checking with Multi-threaded IC3 Portfolios
Abstract
Three variants of multi-threaded ic3 are presented. Each variant has a fixed number of ic3s running in parallel, and communicating by sharing lemmas. They differ in the degree of synchronization between threads, and the aggressiveness with which proofs are checked. The correctness of all three variants is shown. The variants have unpredictable runtime. On the same input, the time to find the solution over different runs varies randomly depending on the thread interleaving. The use of a portfolio of solvers to maximize the likelihood of a quick solution is investigated. Using the Extreme Value theorem, the runtime of each variant, as well as their portfolios is analyzed statistically. A formula for the portfolio size needed to achieve a verification time with high probability is derived, and validated empirically. Using a portfolio of 20 parallel ic3s, speedups over 300 are observed compared to the sequential ic3 on hardware model checking competition examples. The use of parameter sweeping to implement a solver that performs well over a wide range of problems with unknown “hardness” is investigated.
Sagar Chaki, Derrick Karimi
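The portfolio-size analysis can be illustrated with a standard independent-runs argument (a sketch, not necessarily the paper's exact derivation): if a single randomized run solves the instance within time \(t\) with probability \(F(t)\), then a portfolio of \(n\) independent runs succeeds within \(t\) with probability \(1-(1-F(t))^n\), so reaching confidence \(p\) requires

```latex
n \;\ge\; \frac{\ln(1-p)}{\ln\!\big(1-F(t)\big)}.
```

For example, if one run finishes within the target time with probability \(F(t)=0.2\), then \(n \ge \ln(0.01)/\ln(0.8) \approx 21\) runs suffice for \(p = 0.99\), consistent with the portfolio sizes of around 20 reported in the abstract.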
Automatic Generation of Propagation Complete SAT Encodings
Abstract
Almost all applications of SAT solvers generate Boolean formulae from higher level expression graphs by encoding the semantics of each operation or relation into propositional logic. All non-trivial relations have many different possible encodings and the encoding used can have a major effect on the performance of the system. This paper gives an abstract satisfaction based formalisation of one aspect of encoding quality, the propagation strength, and shows that propagation complete SAT encodings can be modelled by our formalism and automatically computed for key operations. This allows a more rigorous approach to designing encodings as well as improved performance.
Martin Brain, Liana Hadarean, Daniel Kroening, Ruben Martins
Backmatter
Metadata
Title
Verification, Model Checking, and Abstract Interpretation
Edited by
Barbara Jobstmann
K. Rustan M. Leino
Copyright year
2016
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-49122-5
Print ISBN
978-3-662-49121-8
DOI
https://doi.org/10.1007/978-3-662-49122-5