
2014 | Book

Computer Aided Verification

26th International Conference, CAV 2014, Held as Part of the Vienna Summer of Logic, VSL 2014, Vienna, Austria, July 18-22, 2014. Proceedings

Edited by: Armin Biere, Roderick Bloem

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 26th International Conference on Computer Aided Verification, CAV 2014, held as part of the Vienna Summer of Logic, VSL 2014, in Vienna, Austria, in July 2014. The 46 regular papers and 11 short papers presented in this volume were carefully reviewed and selected from a total of 175 regular and 54 short paper submissions. The contributions are organized in topical sections named: software verification; automata; model checking and testing; biology and hybrid systems; games and synthesis; concurrency; SMT and theorem proving; bounds and termination; and abstraction.

Table of Contents

Frontmatter

Software Verification

The Spirit of Ghost Code

In the context of deductive program verification, ghost code is part of the program that is added for the purpose of specification. Ghost code must not interfere with regular code, in the sense that it can be erased without observable difference in the program outcome. In particular, ghost data cannot participate in regular computations and ghost code cannot mutate regular data or diverge. The idea has existed in the folklore since the early notion of auxiliary variables and is implemented in many state-of-the-art program verification tools. However, a rigorous definition and treatment of ghost code is surprisingly subtle and few formalizations exist.

In this article, we describe a simple ML-style programming language with mutable state and ghost code. Non-interference is ensured by a type system with effects, which allows, notably, the same data types and functions to be used in both regular and ghost code. We define the procedure of ghost code erasure and we prove its safety using bisimulation. A similar type system, with numerous extensions which we briefly discuss, is implemented in the program verification environment Why3.
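
To make the idea tangible, here is a hedged sketch in Python (not the paper's ML-style language or Why3's WhyML, which enforce non-interference with a type system rather than a naming convention): a ghost variable may observe the computation for the sake of a specification, but erasing it must leave the regular result unchanged.

# Ghost code, illustrated (a sketch; the "ghost_" prefix is only a
# convention for this example).
def sum_list(xs):
    total = 0
    ghost_steps = 0            # ghost: exists only for specification
    for x in xs:
        total += x
        ghost_steps += 1       # ghost code may read regular data...
    # ...and may appear in assertions,
    assert ghost_steps == len(xs)
    # but regular code never reads a ghost variable, so deleting the
    # three ghost lines leaves the observable result unchanged.
    return total

print(sum_list([1, 2, 3]))     # 6, with or without the ghost code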

Jean-Christophe Filliâtre, Léon Gondelman, Andrei Paskevich
SMT-Based Model Checking for Recursive Programs

We present an SMT-based symbolic model checking algorithm for safety verification of recursive programs. The algorithm is modular and analyzes procedures individually. Unlike other SMT-based approaches, it maintains both over- and under-approximations of procedure summaries. Under-approximations are used to analyze procedure calls without inlining. Over-approximations are used to block infeasible counterexamples and detect convergence to a proof. We show that for programs and properties over a decidable theory, the algorithm is guaranteed to find a counterexample, if one exists. However, efficiency depends on an oracle for quantifier elimination (QE). For Boolean Programs, the algorithm is a polynomial decision procedure, matching the worst-case bounds of the best BDD-based algorithms. For Linear Arithmetic (integers and rationals), we give an efficient instantiation of the algorithm by applying QE lazily. We use existing interpolation techniques to over-approximate QE and introduce Model Based Projection to under-approximate QE. Empirical evaluation on SV-COMP benchmarks shows that our algorithm improves significantly on the state-of-the-art.
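
Model Based Projection computes, from a model, a quantifier-free under-approximation of existential quantification. The sketch below (illustrative names, using the z3 Python API; not the authors' implementation) shows the simplest possible such projection: substituting the model's value for the quantified variable.

# Given Exists x. phi(x, y) and a model M of phi, plugging M's value
# for x yields a quantifier-free under-approximation of the projection
# of phi onto y.
from z3 import Ints, And, Or, Solver, substitute, IntVal, sat

x, y = Ints("x y")
phi = And(Or(x == y + 1, x == 2 * y), x >= 0, y <= 10)

s = Solver()
s.add(phi)
assert s.check() == sat
m = s.model()

# Trivial model-based projection: substitute the model value of x.
proj = substitute(phi, (x, IntVal(m[x].as_long())))
print("model:", m)
print("under-approximation of Exists x. phi:", proj)

The paper's projections for linear arithmetic are more general than a single point, which is what makes the under-approximations converge quickly; the point substitution above is just the degenerate case.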

Anvesh Komuravelli, Arie Gurfinkel, Sagar Chaki
Property-Directed Shape Analysis

This paper addresses the problem of automatically generating quantified invariants for programs that manipulate singly and doubly linked-list data structures. Our algorithm is property-directed, i.e., its choices are driven by the properties to be proven. The algorithm is able to establish that a correct program has no memory-safety violations (e.g., null-pointer dereferences, double frees) and that data-structure invariants are preserved. For programs with errors, the algorithm produces concrete counterexamples.

More broadly, the paper describes how to integrate IC3 with full predicate abstraction. The analysis method is complete in the following sense: if an inductive invariant that proves that the program satisfies a given property is expressible as a Boolean combination of a given set of predicates, then the analysis will find such an invariant. To the best of our knowledge, this method represents the first shape-analysis algorithm that is capable of (i) reporting concrete counterexamples, or alternatively (ii) establishing that the predicates in use are not capable of proving the property in question.

Shachar Itzhaky, Nikolaj Bjørner, Thomas Reps, Mooly Sagiv, Aditya Thakur
Shape Analysis via Second-Order Bi-Abduction

We present a new modular shape analysis that can synthesize heap memory specifications on a per-method basis. We rely on a second-order bi-abduction mechanism that can give interpretations to unknown shape predicates. There are several novel features in our shape analysis. Firstly, it is grounded on second-order bi-abduction. Secondly, we distinguish unknown pre-predicates in pre-conditions from unknown post-predicates in post-conditions, since the former may be strengthened while the latter may be weakened. Thirdly, we provide a new heap guard mechanism to support more precise preconditions for heap specification. Lastly, we formalise a set of derivation and normalization rules to give concise definitions for unknown predicates. Our approach has been proven sound and is implemented on top of an existing automated verification system. We show its versatility in synthesizing a wide range of intricate shape specifications.

Quang Loc Le, Cristian Gherghina, Shengchao Qin, Wei-Ngan Chin
ICE: A Robust Framework for Learning Invariants

We introduce ICE, a robust learning paradigm for synthesizing invariants that learns using examples, counter-examples, and implications, and show that it admits honest teachers and strongly convergent mechanisms for invariant synthesis. We observe that existing algorithms for black-box abstract interpretation can be interpreted as ICE-learning algorithms. We develop new strongly convergent ICE-learning algorithms for two domains, one for learning Boolean combinations of numerical invariants for scalar variables and one for quantified invariants for arrays and dynamic lists. We implement these ICE-learning algorithms in a verification tool and show they are robust, practical, and efficient.
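
A hedged sketch of the data an ICE learner consumes: besides examples (states that must be inside the invariant) and counterexamples (states that must be outside), the teacher returns implication pairs (s, s') requiring that if the candidate contains s it also contains s'. The toy learner below (our own illustration with the z3 Python API, not the paper's algorithms) searches one template shape for an invariant consistent with all three kinds of samples.

# Toy ICE learner: find an invariant of the shape  x <= t  consistent
# with examples, counterexamples, and implication pairs.
from z3 import Int, Solver, Implies, Not, sat

positives = [0, 2]          # states that must satisfy the invariant
negatives = [10]            # states that must violate it
implications = [(3, 4)]     # one-step successors: inv(3) => inv(4)

t = Int("t")
s = Solver()
for p in positives:
    s.add(p <= t)
for n in negatives:
    s.add(Not(n <= t))
for a, b in implications:
    s.add(Implies(a <= t, b <= t))

if s.check() == sat:
    print("candidate invariant: x <=", s.model()[t])
else:
    print("no invariant of this shape is consistent with the samples")

The implication samples are what distinguish ICE from plain example/counterexample learning: they let the teacher stay honest when it cannot definitively label a state as reachable or bad.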

Pranav Garg, Christof Löding, P. Madhusudan, Daniel Neider
From Invariant Checking to Invariant Inference Using Randomized Search

We describe a general framework c2i for generating an invariant inference procedure from an invariant checking procedure. Given a checker and a language of possible invariants, c2i generates an inference procedure that iteratively invokes two phases. The search phase uses randomized search to discover candidate invariants and the validate phase uses the checker to either prove or refute that the candidate is an actual invariant. To demonstrate the applicability of c2i, we use it to generate inference procedures that prove safety properties of numerical programs, prove non-termination of numerical programs, prove functional specifications of array manipulating programs, prove safety properties of string manipulating programs, and prove functional specifications of heap manipulating programs that use linked list data structures.
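
A minimal guess-and-check loop in the spirit of the search/validate split (a sketch under assumed names, not the c2i implementation): candidates are drawn at random from a small language of linear invariants, and an SMT check validates initiation, consecution, and the property.

# Program: x := 0; while x < 100: x := x + 1.  Property: always x <= 100.
# Candidate language: invariants of the form  x <= c.
import random
from z3 import Int, Implies, And, Not, Solver, unsat

x, xp = Int("x"), Int("xp")

def valid(formula):
    """Check validity by refuting the negation."""
    s = Solver()
    s.add(Not(formula))
    return s.check() == unsat

def validate(c):
    init = 0 <= c                                               # initiation
    cons = Implies(And(x <= c, x < 100, xp == x + 1), xp <= c)  # consecution
    safe = Implies(x <= c, x <= 100)                            # property
    return init and valid(cons) and valid(safe)

random.seed(0)
for c in random.sample(range(201), 201):   # search phase: random order
    if validate(c):                        # validate phase
        print("inductive invariant: x <=", c)
        break

Here only c = 100 passes all three checks; the randomized search merely decides in which order candidates are tried.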

Rahul Sharma, Alex Aiken
SMACK: Decoupling Source Language Details from Verifier Implementations

A major obstacle to putting software verification research into practice is the high cost of developing the infrastructure enabling the application of verification algorithms to actual production code, in all of its complexity. Handling an entire programming language is a huge endeavor that few researchers are willing to undertake; even fewer could invest the effort to implement a verification algorithm for many source languages. To decouple the implementations of verification algorithms from the details of source languages, and enable rapid prototyping on production code, we have developed SMACK. At its core, SMACK is a translator from the LLVM intermediate representation (IR) into the Boogie intermediate verification language (IVL). Sourcing LLVM exploits an increasing number of compiler front ends, optimizations, and analyses. Targeting Boogie exploits a canonical platform which simplifies the implementation of algorithms for verification, model checking, and abstract interpretation. Our initial experience in verifying C-language programs is encouraging: SMACK is competitive in SV-COMP benchmarks, is able to translate large programs (100 KLOC), and is being used in several verification research prototypes.

Zvonimir Rakamarić, Michael Emmi

Security

Synthesis of Masking Countermeasures against Side Channel Attacks

We propose a new synthesis method for generating countermeasures for cryptographic software code to mitigate power analysis based side channel attacks. Side channel attacks may arise when computers and microchips leak sensitive information about the software code and data that they process, e.g., through power dissipation or electromagnetic radiation. Such information leaks have been exploited in commercial systems in the embedded space. Our new method takes an unprotected C program as input and returns a functionally equivalent but side channel leak free new program as output. The new program is guaranteed to be perfectly masked, in that all intermediate computation results are made statistically independent from the secret data. We have implemented our new method in a tool based on the LLVM compiler and the Yices SMT solver. Our experiments on a set of cryptographic software benchmarks show that the new method is both effective and scalable for applications of realistic size.
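
For intuition, Boolean masking splits a secret into random shares so that each intermediate value, taken alone, is uniformly distributed. A small illustration in Python (our own sketch for intuition; the paper's tool synthesizes such code automatically from unprotected C):

# First-order Boolean masking, illustrated on XOR.
import secrets

def mask(bit):
    """Split a secret bit into two shares whose XOR is the bit."""
    r = secrets.randbits(1)
    return r, bit ^ r          # each share alone is uniform

def masked_xor(a_shares, b_shares):
    """XOR computed share-wise; no intermediate depends on a secret."""
    (a0, a1), (b0, b1) = a_shares, b_shares
    return a0 ^ b0, a1 ^ b1

secret_a, secret_b = 1, 0
c0, c1 = masked_xor(mask(secret_a), mask(secret_b))
assert (c0 ^ c1) == (secret_a ^ secret_b)   # unmasking recovers the result

XOR is linear, so share-wise computation suffices; nonlinear operations such as AND need fresh randomness to remain perfectly masked, which is exactly where automatic synthesis and SMT-based checking earn their keep.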

Hassan Eldib, Chao Wang
Temporal Mode-Checking for Runtime Monitoring of Privacy Policies

Fragments of first-order temporal logic are useful for representing many practical privacy and security policies. Past work has proposed two strategies for checking event trace (audit log) compliance with policies: online monitoring and offline audit. Although online monitoring is space- and time-efficient, existing techniques insist that satisfying instances of all subformulas of the policy be amenable to caching, which limits expressiveness when some subformulas have infinite support. In contrast, offline audit is brute force and can handle more policies but is not as efficient. This paper proposes a new online monitoring algorithm that caches satisfying instances when it can, and falls back to the brute force search when it cannot. Our key technical insight is a new flow- and time-sensitive static check of variable groundedness, called the temporal mode check, which determines subformulas for which such caching is feasible and those for which it is not and, hence, guides our algorithm. We prove the correctness of our algorithm and evaluate its performance over synthetic traces and realistic policies.

Omar Chowdhury, Limin Jia, Deepak Garg, Anupam Datta
String Constraints for Verification

We present a decision procedure for a logic that combines (i) word equations over string variables denoting words of arbitrary lengths, together with (ii) constraints on the length of words, and on (iii) the regular languages to which words belong. Decidability of this general logic is still open. Our procedure is sound for the general logic, and a decision procedure for a particularly rich fragment that restricts the form in which word equations are written. In contrast to many existing procedures, our method does not make assumptions about the maximum length of words. We have developed a prototypical implementation of our decision procedure, and integrated it into a CEGAR-based model checker for the analysis of programs encoded as Horn clauses. Our tool is able to automatically establish the correctness of several programs that are beyond the reach of existing methods.

Parosh Aziz Abdulla, Mohamed Faouzi Atig, Yu-Fang Chen, Lukáš Holík, Ahmed Rezine, Philipp Rümmer, Jari Stenman
A Conference Management System with Verified Document Confidentiality

We present a case study in verified security for realistic systems: the implementation of a conference management system, whose functional kernel is faithfully represented in the Isabelle theorem prover, where we specify and verify confidentiality properties. The various theoretical and practical challenges posed by this development led to a novel security model and verification method generally applicable to systems describable as input–output automata.

Sudeep Kanav, Peter Lammich, Andrei Popescu
Vac - Verifier of Administrative Role-Based Access Control Policies

In this paper we present Vac, an automatic tool for verifying security properties of administrative Role-based Access Control (RBAC). RBAC has become an increasingly popular access control model, particularly suitable for large organizations, and it is implemented in several software systems. Automatic security analysis of administrative RBAC systems is recognized as an important problem, as an analysis tool can help designers check whether their policies meet expected security properties. Vac converts administrative RBAC policies to imperative programs that simulate the policies both precisely and abstractly and supports several automatic verification back-ends to analyze the resulting programs. In this paper, we describe the architecture of Vac and overview the analysis techniques that have been implemented in the tool. We also report on experiments with several benchmarks from the literature.

Anna Lisa Ferrara, P. Madhusudan, Truc L. Nguyen, Gennaro Parlato

Automata

From LTL to Deterministic Automata: A Safraless Compositional Approach

We present a new algorithm to construct a (generalized) deterministic Rabin automaton for an LTL formula φ. The automaton is the product of a master automaton and an array of slave automata, one for each G-subformula of φ. The slave automaton for Gψ is in charge of recognizing whether FGψ holds. As opposed to standard determinization procedures, the states of all our automata have a clear logical structure, which allows for various optimizations. Our construction subsumes former algorithms for fragments of LTL. Experimental results show improvement in the sizes of the resulting automata compared to existing methods.

Javier Esparza, Jan Křetínský
Symbolic Visibly Pushdown Automata

Nested words model data with both linear and hierarchical structure such as XML documents and program traces. A nested word is a sequence of positions together with a matching relation that connects open tags (calls) with the corresponding close tags (returns). Visibly Pushdown Automata are a restricted class of pushdown automata that process nested words, and have many appealing theoretical properties such as closure under Boolean operations and decidable equivalence. However, like classical automata models, they are limited to finite alphabets. This limitation is restrictive for practical applications to both XML processing and program trace analysis, where values for individual symbols are usually drawn from an unbounded domain. With this motivation, we introduce Symbolic Visibly Pushdown Automata (SVPA) as an executable model for nested words over infinite alphabets. In this model, transitions are labeled with predicates over the input alphabet, analogous to symbolic automata processing strings over infinite alphabets. A key novelty of SVPAs is the use of binary predicates to model relations between open and close tags in a nested word. We show how SVPAs still enjoy the decidability and closure properties of Visibly Pushdown Automata. We use SVPAs to model XML validation policies and program properties that are not naturally expressible with previous formalisms and provide experimental results for our implementation.
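
A sketch of the executable flavor of the model (our own toy data structures, not the paper's implementation): internal transitions carry unary predicates, and a return transition checks a binary predicate against the symbol seen at the matching call.

# A toy single-state symbolic visibly pushdown automaton.
def run(word):
    """word: list of ('call'|'ret'|'int', symbol). Accepts well-nested
    words where every close tag matches its open tag."""
    stack = []
    for kind, sym in word:
        if kind == "call":
            stack.append(sym)                 # remember the open tag
        elif kind == "ret":
            if not stack:
                return False
            if not matches(stack.pop(), sym): # binary predicate
                return False
        else:                                 # internal symbol
            if not is_valid_text(sym):        # unary predicate
                return False
    return not stack

matches = lambda open_tag, close_tag: open_tag == close_tag
is_valid_text = lambda s: isinstance(s, str)

# An XML-like nested word: <a><b>text</b></a>
w = [("call", "a"), ("call", "b"), ("int", "text"), ("ret", "b"), ("ret", "a")]
print(run(w))   # True

The binary predicate relating the call symbol to the return symbol is precisely what symbolic finite-state automata cannot express across a call/return pair.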

Loris D’Antoni, Rajeev Alur

Model Checking and Testing

Engineering a Static Verification Tool for GPU Kernels

We report on practical experiences over the last 2.5 years related to the engineering of GPUVerify, a static verification tool for OpenCL and CUDA GPU kernels, plotting the progress of GPUVerify from a prototype to a fully functional and relatively efficient analysis tool. Our hope is that this experience report will serve the verification community by helping to inform future tooling efforts.

Ethel Bardsley, Adam Betts, Nathan Chong, Peter Collingbourne, Pantazis Deligiannis, Alastair F. Donaldson, Jeroen Ketema, Daniel Liew, Shaz Qadeer
Lazy Annotation Revisited

Lazy Annotation is a method of software model checking that performs a backtracking search for a symbolic counterexample. When the search backtracks, the program is annotated with a learned fact that constrains future search. In this sense, the method is closely analogous to conflict-driven clause learning in SAT solvers.

In this paper, we develop several improvements to the basic Lazy Annotation approach. The resulting algorithm is compared both conceptually and experimentally to two approaches based on similar principles but using different learning strategies: unfolding-based Bounded Model Checking and Property-Driven Reachability.

Kenneth L. McMillan
Interpolating Property Directed Reachability

Current SAT-based Model Checking is based on two major approaches: Interpolation-based Model Checking (IMC) (global, with unrollings) and Property Directed Reachability/IC3 (PDR) (local, without unrollings). IMC generates candidate invariants using interpolation over an unrolling of a system, without putting any restrictions on the SAT-solver’s search. PDR generates candidate invariants by a local search over a single instantiation of the transition relation, effectively guiding the SAT solver’s search. The two techniques are considered to be orthogonal and have different strengths and limitations. In this paper, we present a new technique, called Avy, that effectively combines the key insights of the two approaches. Like IMC, it uses unrollings and interpolants to construct an initial candidate invariant, and, like PDR, it uses local inductive generalization to keep the invariants in compact clausal form. On the one hand, Avy is an incremental IMC extended with a local search for CNF interpolants. On the other, it is PDR extended with a global search for bounded counterexamples. We implemented the technique using ABC and have evaluated it on the HWMCC benchmark suites from 2012 and 2013. Our results show that the prototype significantly outperforms PDR and McMillan’s interpolation algorithm (as implemented in ABC) on the industrial sub-category of the benchmark.

Yakir Vizel, Arie Gurfinkel
Verifying Relative Error Bounds Using Symbolic Simulation

In this paper we consider the problem of formally verifying hardware that is specified to compute reciprocal, reciprocal square root, and power-of-two functions on floating point numbers to within a given relative error. Such specifications differ from the common case in which any given input is specified to have exactly one correct output. Our approach is based on symbolic simulation with binary decision diagrams, and involves two distinct steps. First, we prove a lemma that reduces the relative error specification to several inequalities that involve reasoning about natural numbers only. The most complex of these inequalities asserts that the product of several naturals is less-than/greater-than another natural. Second, we invoke one of several customized algorithms that decides the inequality, without performing the expensive symbolic multiplications directly. We demonstrate the effectiveness of our approach on a next-generation Intel® processor design and report encouraging time and space metrics for these proofs.
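
For flavor, here is one way such a reduction can look (our own illustrative derivation, not the paper's lemma): for the reciprocal f(x) = 1/x with x > 0, write the p-fractional-bit fixed-point operands as naturals X = 2^p x and \hat{F} = 2^p \hat{f}(x), and assume k <= 2p:

\left|\frac{\hat{f}(x) - 1/x}{1/x}\right| \le 2^{-k}
\;\Longleftrightarrow\;
\bigl| x\,\hat{f}(x) - 1 \bigr| \le 2^{-k}
\;\Longleftrightarrow\;
\bigl| X \hat{F} - 2^{2p} \bigr| \le 2^{2p-k}.

The final inequality compares a product of naturals against fixed powers of two, which is the shape the customized algorithms decide without building the full symbolic product.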

Jesse Bingham, Joe Leslie-Hurd
Regression Test Selection for Distributed Software Histories

Regression test selection analyzes incremental changes to a codebase and chooses to run only those tests whose behavior may be affected by the latest changes in the code. By focusing on a small subset of all the tests, the testing process runs faster and can be more tightly integrated into the development process. Existing techniques for regression test selection consider two versions of the code at a time, effectively assuming a development process where changes to the code occur in a linear sequence.

Modern development processes that use distributed version-control systems are more complex. Software version histories are generally modeled as directed graphs; in addition to version changes occurring linearly, multiple versions can be related by other commands, e.g., branch, merge, rebase, cherry-pick, revert, etc. This paper describes a regression test-selection technique for software developed using modern distributed version-control systems. By modeling different branch or merge commands directly in our technique, it computes safe test sets that can be substantially smaller than applying previous techniques to a linearization of the software history.

We evaluate our technique on software histories of several large open-source projects. The results are encouraging: our technique obtained an average of 10.89× reduction in the number of tests over an existing technique while still selecting all tests whose behavior may differ.

Milos Gligoric, Rupak Majumdar, Rohan Sharma, Lamyaa Eloussi, Darko Marinov
GPU-Based Graph Decomposition into Strongly Connected and Maximal End Components

This paper presents parallel algorithms for component decomposition of graph structures on General Purpose Graphics Processing Units (GPUs). In particular, we consider the problem of decomposing sparse graphs into strongly connected components, and decomposing stochastic games (such as Markov decision processes) into maximal end components. These problems are key ingredients of many (probabilistic) model-checking algorithms. We explain the main rationales behind our GPU-algorithms, and show a significant speed-up over the sequential counterparts in several case studies.

Anton Wijs, Joost-Pieter Katoen, Dragan Bošnački
Software Verification in the Google App-Engine Cloud

Software verification often requires a large amount of computing resources. In the last years, cloud services emerged as an inexpensive, flexible, and energy-efficient source of computing power. We have investigated if such cloud resources can be used effectively for verification. We chose the platform-as-a-service offer Google App Engine and ported the open-source verification framework CPAchecker to it. We provide our new verification service as a web front-end to users who wish to solve single verification tasks (tutorial usage), and an API for integrating the service into existing verification infrastructures (massively parallel bulk usage). We experimentally evaluate the effectiveness of this service and show that it can be successfully used to offload verification work to the cloud, considerably sparing local verification resources.

Dirk Beyer, Georg Dresler, Philipp Wendler
The nuXmv Symbolic Model Checker

This paper describes the nuXmv symbolic model checker for finite- and infinite-state synchronous transition systems. nuXmv is the evolution of the NuSMV open source model checker. It builds on and extends NuSMV along two main directions. For finite-state systems it complements the basic verification techniques of NuSMV with state-of-the-art verification algorithms. For infinite-state systems, it extends the NuSMV language with new data types, namely Integers and Reals, and it provides advanced SMT-based model checking techniques.

Besides extended functionalities, nuXmv has been optimized in terms of performance to be competitive with the state of the art. nuXmv has been used in several industrial projects as verification back-end, and it is the basis for several extensions to cope with requirements analysis, contract based design, model checking of hybrid systems, safety assessment, and software model checking.

Roberto Cavada, Alessandro Cimatti, Michele Dorigatti, Alberto Griggio, Alessandro Mariotti, Andrea Micheli, Sergio Mover, Marco Roveri, Stefano Tonetta

Biology and Hybrid Systems

Analyzing and Synthesizing Genomic Logic Functions

Deciphering the developmental program of an embryo is a fundamental question in biology. Landmark papers [9,10] have recently shown how computational models of gene regulatory networks provide system-level causal understanding of the developmental processes of the sea urchin, and enable powerful predictive capabilities. A crucial aspect of the work is empirically deriving plausible models that explain all the known experimental data, a task that becomes infeasible in practice due to the inherent complexity of the biological systems. We present a generic Satisfiability Modulo Theories based approach to analyze and synthesize data constrained models. We apply our approach to the sea urchin embryo, and successfully improve the state-of-the-art by synthesizing, for the first time, models that explain all the experimental observations in [10]. A strength of the proposed approach is the combination of accurate synthesis procedures for deriving biologically plausible models with the ability to prove inconsistency results, showing that for a given set of experiments and possible class of models no solution exists, and thus enabling practical refutation of biological models.

Nicola Paoletti, Boyan Yordanov, Youssef Hamadi, Christoph M. Wintersteiger, Hillel Kugler
Finding Instability in Biological Models

The stability of biological models is an important test for establishing their soundness and accuracy. Stability in biological systems represents the ability of a robust system to always return to homeostasis. In recent work, modular approaches for proving stability have been found to be swift and scalable. If stability is however not proved, the currently available techniques apply an exhaustive search through the unstable state space to find loops. This search is frequently prohibitively computationally expensive, limiting its usefulness. Here we present a new modular approach eliminating the need for an exhaustive search for loops. Using models of biological systems we show that the technique finds loops significantly faster than brute force approaches. Furthermore, for a subset of stable systems which are resistant to modular proofs, we observe a speed up of up to 3 orders of magnitude as the exhaustive searches for loops which cause instability are avoided. With our new procedure we are able to prove instability and stability in a number of realistic biological models, including adaptation in bacterial chemotaxis, the lambda phage lysogeny/lysis switch, voltage gated channel opening and cAMP oscillations in the slime mold Dictyostelium discoideum. This new approach will support the development of new tools for biomedicine.

Byron Cook, Jasmin Fisher, Benjamin A. Hall, Samin Ishtiaq, Garvit Juniwal, Nir Piterman
Invariant Verification of Nonlinear Hybrid Automata Networks of Cardiac Cells

Verification algorithms for networks of nonlinear hybrid automata (HA) can help us understand and control biological processes such as cardiac arrhythmia, formation of memory, and genetic regulation. We present an algorithm for over-approximating reach sets of networks of nonlinear HA which can be used for sound and relatively complete invariant checking. First, it uses automatically computed input-to-state discrepancy functions for the individual automata modules in the network $\mathcal{A}$ for constructing a low-dimensional model $\mathcal{M}$. Simulations of both $\mathcal{A}$ and $\mathcal{M}$ are then used to compute the reach tubes for $\mathcal{A}$. These techniques enable us to handle a challenging verification problem involving a network of cardiac cells, where each cell has four continuous variables and 29 locations. Our prototype tool can check bounded-time invariants for networks with 5 cells (20 continuous variables, $29^5$ locations) typically in less than 15 minutes for up to reasonable time horizons. From the computed reach tubes we can infer biologically relevant properties of the network from a set of initial states.

Zhenqi Huang, Chuchu Fan, Alexandru Mereacre, Sayan Mitra, Marta Kwiatkowska
Diamonds Are a Girl’s Best Friend: Partial Order Reduction for Timed Automata with Abstractions

A major obstacle for using partial order reduction in the context of real time verification is that the presence of clocks and clock constraints breaks the usual diamond structure of otherwise independent transitions. This is especially true when information of the relative values of clocks is preserved in the form of diagonal constraints. However, when diagonal constraints are relaxed by a suitable abstraction, some diamond structure is re-introduced in the zone graph. In this article, we introduce a variant of the stubborn set method for reducing an abstracted zone graph. Our method works with all abstractions, but especially targets situations where one abstract execution can simulate several permutations of the corresponding concrete execution, even though it might not be able to simulate the permutations of the abstract execution. We define independence relations that capture this “hidden” diamond structure, and define stubborn sets using these relations. We provide a reference implementation for verifying timed language inclusion, to demonstrate the effectiveness of our method.

Henri Hansen, Shang-Wei Lin, Yang Liu, Truong Khanh Nguyen, Jun Sun
Reachability Analysis of Hybrid Systems Using Symbolic Orthogonal Projections

This paper deals with reachability analysis of hybrid systems with continuous dynamics described by linear differential inclusions and arbitrary linear maps for discrete updates. The invariants, guards, and sets of reachable states are given as convex polyhedra. Our reachability algorithm is based on a novel representation class for convex polyhedra, the symbolic orthogonal projections (sops), on which various geometric operations, including convex hulls, Minkowski sums, linear maps, and intersections, can be performed efficiently and exactly. The capability to represent intersections of convex polyhedra exactly is superior to support function-based approaches like the LGG-algorithm (Le Guernic and Girard [21]).

Accompanied by some simple examples, we address the problem of the monotonic growth of the exact representation and propose a combination of our reachability algorithm with the LGG-algorithm. This results in an efficient method of better accuracy than the LGG-algorithm and its productive implementation in SpaceEx [13].

Willem Hagemann
Verifying LTL Properties of Hybrid Systems with K-Liveness

The verification of liveness properties is an important challenge in the design of real-time and hybrid systems.

In contrast to the verification of safety properties, for which several solutions are available, very few tools support liveness properties such as general LTL formulas for hybrid systems, even in the case of timed automata.

In the context of finite-state model checking, K-Liveness is a recently proposed algorithm that tackles the problem by proving that an accepting condition can be visited at most K times. K-Liveness has proven to be very efficient, thanks also to its tight integration with IC3, a very efficient technique for safety verification. Unfortunately, the approach is neither complete nor effective (even for simple properties) in the case of infinite-state systems with continuous time.

In this paper, we extend K-Liveness to deal with LTL for hybrid systems. On the theoretical side, we show how to extend the reduction from LTL to the reachability of an accepting condition in order to make the algorithm work with continuous time. In particular, we prove that the new reduction is complete for a class of rectangular hybrid automata, in the sense that the LTL property holds if and only if there exists K such that the accepting condition is visited at most K times. On the practical side, we present an efficient integration of K-Liveness in an SMT-version of IC3, and demonstrate its effectiveness on several benchmarks.

Alessandro Cimatti, Alberto Griggio, Sergio Mover, Stefano Tonetta

Games and Synthesis

Safraless Synthesis for Epistemic Temporal Specifications

In this paper we address the synthesis problem for specifications given in linear temporal single-agent epistemic logic, KLTL (or KL1), over single-agent systems having imperfect information of the environment state. It has been shown [17] that this problem is 2ExpTime-complete. However, their procedure relies on complex automata constructions that are notoriously resistant to efficient implementations as they use Safra-like determinization.

We propose a “Safraless” synthesis procedure for a large fragment of KLTL. The construction first transforms the synthesis problem into the problem of checking emptiness for universal co-Büchi tree automata using an information-set construction. Then we build a safety game that can be solved using an antichain-based symbolic technique exploiting the structure of the underlying automata. The technique is implemented and applied to a couple of case studies.

Rodica Bozianu, Cătălin Dima, Emmanuel Filiot
Minimizing Running Costs in Consumption Systems

A standard approach to optimizing long-run running costs of discrete systems is based on minimizing the mean-payoff, i.e., the long-run average amount of resources (“energy”) consumed per transition. However, this approach inherently assumes that the energy source has an unbounded capacity, which is not always realistic. For example, an autonomous robotic device has a battery of finite capacity that has to be recharged periodically, and the total amount of energy consumed between two successive charging cycles is bounded by the capacity. Hence, a controller minimizing the mean-payoff must obey this restriction. In this paper we study the controller synthesis problem for consumption systems with a finite battery capacity, where the task of the controller is to minimize the mean-payoff while preserving the functionality of the system encoded by a given linear-time property. We show that an optimal controller always exists, and it may either need only finite memory or require infinite memory (it is decidable in polynomial time which of the two cases holds). Further, we show how to compute an effective description of an optimal controller in polynomial time. Finally, we consider the limit values achievable by larger and larger battery capacity, show that these values are computable in polynomial time, and we also analyze the corresponding rate of convergence. To the best of our knowledge, these are the first results about optimizing the long-run running costs in systems with bounded energy stores.

Tomáš Brázdil, David Klaška, Antonín Kučera, Petr Novotný
CEGAR for Qualitative Analysis of Probabilistic Systems

We consider Markov decision processes (MDPs), which are a standard model for probabilistic systems. We focus on qualitative properties for MDPs that can express that desired behaviors of the system arise almost-surely (with probability 1) or with positive probability. We introduce a new simulation relation to capture the refinement relation of MDPs with respect to qualitative properties, and present discrete graph theoretic algorithms with quadratic complexity to compute the simulation relation. We present an automated technique for assume-guarantee style reasoning for compositional analysis of MDPs with qualitative properties by giving a counterexample guided abstraction-refinement approach to compute our new simulation relation. We have implemented our algorithms and show that the compositional analysis leads to significant improvements.

Krishnendu Chatterjee, Martin Chmelík, Przemysław Daca
Optimal Guard Synthesis for Memory Safety

This paper presents a new synthesis-based approach for writing low-level memory-safe code. Given a partial program with missing guards, our algorithm synthesizes concrete predicates to plug in for the missing guards such that all buffer accesses in the program are memory safe. Furthermore, guards synthesized by our technique are the simplest and weakest among guards that guarantee memory safety, relative to the inferred loop invariants. Our approach is fully automatic and does not require any hints from the user. We have implemented our algorithm in a prototype synthesis tool for C programs, and we show that the proposed approach is able to successfully synthesize guards that closely match hand-written programmer code in a set of real-world C programs.

Thomas Dillig, Isil Dillig, Swarat Chaudhuri
Don’t Sit on the Fence
A Static Analysis Approach to Automatic Fence Insertion

Modern architectures rely on memory fences to prevent undesired weakenings of memory consistency. As the fences’ semantics may be subtle, the automation of their placement is highly desirable. But precise methods for restoring consistency do not scale to deployed systems code. We choose to trade some precision for genuine scalability: our technique is suitable for large code bases. We implement it in our new musketeer tool, and detail experiments on more than 350 executables of packages found in Debian Linux 7.1, e.g. memcached (about 10000 LoC).

Jade Alglave, Daniel Kroening, Vincent Nimal, Daniel Poetzl
MCMAS-SLK: A Model Checker for the Verification of Strategy Logic Specifications

Model checking has come of age. A number of techniques are increasingly used in industrial settings to verify hardware and software systems, both against models and concrete implementations. While it is generally accepted that obstacles still remain, notably handling infinite state systems efficiently, much of current work involves refining and improving existing techniques such as predicate abstraction.

Petr Čermák, Alessio Lomuscio, Fabio Mogavero, Aniello Murano
Solving Games without Controllable Predecessor

Two-player games are a useful formalism for the synthesis of reactive systems. The traditional approach to solving such games iteratively computes the set of winning states for one of the players. This requires keeping track of all discovered winning states and can lead to space explosion even when using efficient symbolic representations. We propose a new method for solving reachability games. Our method works by exploring a subset of the possible concrete runs of the game and proving that these runs can be generalised into a winning strategy on behalf of one of the players. We use counterexample-guided backtracking search to identify a subset of runs that are sufficient to consider to solve the game. We evaluate our algorithm on several families of benchmarks derived from real-world device driver synthesis problems.

Nina Narodytska, Alexander Legg, Fahiem Bacchus, Leonid Ryzhyk, Adam Walker
G4LTL-ST: Automatic Generation of PLC Programs

G4LTL-ST automatically synthesizes control code for industrial Programmable Logic Controllers (PLC) from timed behavioral specifications of input-output signals. These specifications are expressed in a linear temporal logic (LTL) extended with non-linear arithmetic constraints and timing constraints on signals. G4LTL-ST generates code in IEC 61131-3-compatible Structured Text, which is compiled into executable code for a large number of industrial field-level devices. The synthesis algorithm of G4LTL-ST implements pseudo-Boolean abstraction of data constraints and the compilation of timing constraints into LTL, together with a counterstrategy-guided abstraction-refinement synthesis loop. Since temporal logic specifications are notoriously difficult to use in practice, G4LTL-ST supports engineers in specifying realizable control problems by suggesting suitable restrictions on the behavior of the control environment from failed synthesis attempts.

Chih-Hong Cheng, Chung-Hao Huang, Harald Ruess, Stefan Stattelmann

Concurrency

Automatic Atomicity Verification for Clients of Concurrent Data Structures

Mainstream programming languages offer libraries of concurrent data structures. Each method call on a concurrent data structure appears to take effect atomically. However, clients of such data structures often require stronger guarantees. For instance, a histogram class that is implemented using a concurrent map may require a method to atomically increment a histogram bar, but its implementation requires multiple calls to the map and hence is not atomic by default. Indeed, prior work has shown that atomicity errors in clients of concurrent data structures occur frequently in production code.

We present an automatic and modular verification technique for clients of concurrent data structures. We define a novel sufficient condition for atomicity of clients called condensability. We present a tool called Snowflake that generates proof obligations for condensability of Java client methods and discharges them using an off-the-shelf SMT solver. We applied Snowflake to an existing suite of client methods from several open-source applications. It successfully verified 76.9% of the atomic methods without any change and verified the rest of them with small code refactoring and/or annotations.

Mohsen Lesani, Todd Millstein, Jens Palsberg
Regression-Free Synthesis for Concurrency

While fixing concurrency bugs, program repair algorithms may introduce new concurrency bugs. We present an algorithm that avoids such regressions. The solution space is given by a set of program transformations we consider in the repair process. These include reordering of instructions within a thread and inserting atomic sections. The new algorithm learns a constraint on the space of candidate solutions, from both positive examples (error-free traces) and counterexamples (error traces). From each counterexample, the algorithm learns a constraint necessary to remove the errors. From each positive example, it learns a constraint that is necessary in order to prevent the repair from turning the trace into an error trace. We implemented the algorithm and evaluated it on simplified Linux device drivers with known bugs.

Pavol Černý, Thomas A. Henzinger, Arjun Radhakrishna, Leonid Ryzhyk, Thorsten Tarrach
Bounded Model Checking of Multi-threaded C Programs via Lazy Sequentialization

Bounded model checking (BMC) has successfully been used for many practical program verification problems, but concurrency still poses a challenge. Here we describe a new approach to BMC of sequentially consistent C programs using POSIX threads. Our approach first translates a multi-threaded C program into a nondeterministic sequential C program that preserves reachability for all round-robin schedules with a given bound on the number of rounds. It then re-uses existing high-performance BMC tools as backends for the sequential verification problem. Our translation is carefully designed to introduce very small memory overheads and very few sources of nondeterminism, so that it produces tight SAT/SMT formulae, and is thus very effective in practice: our prototype won the concurrency category of SV-COMP14. It solved all verification tasks successfully and was 30x faster than the best tool with native concurrency handling.

Omar Inverso, Ermenegildo Tomasco, Bernd Fischer, Salvatore La Torre, Gennaro Parlato
An SMT-Based Approach to Coverability Analysis

Model checkers based on Petri net coverability have been used successfully in recent years to verify safety properties of concurrent shared-memory or asynchronous message-passing software. We revisit a constraint approach to coverability based on classical Petri net analysis techniques. We show how to utilize an SMT solver to implement the constraint approach, and additionally, to generate an inductive invariant from a safety proof. We empirically evaluate our procedure on a large set of existing Petri net benchmarks. Even though our technique is incomplete, it can quickly discharge most of the safe instances. Additionally, the inductive invariants computed are usually orders of magnitude smaller than those produced by existing solvers.
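One classical constraint in this vein is the Petri net state equation: if a marking m is reachable (hence coverable) from m0, then m = m0 + C·x with m, x >= 0 for some firing-count vector x, where C is the incidence matrix. If these constraints are unsatisfiable for every marking covering the bad marking, the bad marking is provably not coverable. A hedged sketch with the z3 Python API (a toy two-place net of our own, not the paper's benchmarks):

# Safety via the Petri net state equation, an over-approximation of
# reachability (illustrative only).
from z3 import Ints, Solver, unsat

# incidence matrix C: columns = transitions t1, t2
#   t1: consumes 1 from p1, produces 1 on p2
#   t2: consumes 2 from p2, produces 1 on p1
C = [[-1, 1],
     [1, -2]]
m0 = [1, 0]          # initial marking
bad = [0, 3]         # can we ever cover 3 tokens on p2?

x1, x2 = Ints("x1 x2")
m1, m2 = Ints("m1 m2")
s = Solver()
s.add(x1 >= 0, x2 >= 0)
s.add(m1 == m0[0] + C[0][0] * x1 + C[0][1] * x2, m1 >= 0)
s.add(m2 == m0[1] + C[1][0] * x1 + C[1][1] * x2, m2 >= 0)
s.add(m1 >= bad[0], m2 >= bad[1])      # coverability target

if s.check() == unsat:
    print("state equation proves the bad marking is not coverable")
else:
    print("inconclusive: the over-approximation is satisfiable")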

Javier Esparza, Ruslán Ledesma-Garza, Rupak Majumdar, Philipp Meyer, Filip Niksic
LEAP: A Tool for the Parametrized Verification of Concurrent Datatypes

This tool paper describes Leap, a tool for the verification of concurrent datatypes and parametrized systems composed of an unbounded number of threads that manipulate infinite data.

Leap receives as input a concurrent program description and a specification and automatically generates a finite set of verification conditions which are then discharged to specialized decision procedures. The validity of all discharged verification conditions implies that the program executed by any number of threads satisfies the specification. Currently, Leap includes not only decision procedures for integers and Booleans, but it also implements specific theories for heap memory layouts such as linked-lists and skiplists.

Alejandro Sánchez, César Sánchez

SMT and Theorem Proving

Monadic Decomposition

Monadic predicates play a prominent role in many decidable cases, including decision procedures for symbolic automata. We are here interested in discovering whether a formula can be rewritten into a Boolean combination of monadic predicates. Our setting is quantifier-free formulas over a decidable background theory, such as arithmetic, and we develop a semi-decision procedure for extracting a monadic decomposition of a formula when it exists.
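
For a concrete sense of the goal (our own toy instance): over nonnegative integers, the non-monadic constraint x + y <= 1 decomposes into a Boolean combination of predicates that each mention one variable, and the equivalence is easy to confirm with a solver.

# A monadic decomposition of  x + y <= 1  over nonnegative integers,
# checked for equivalence with z3 (illustrative only).
from z3 import Ints, And, Or, Not, Solver, unsat

x, y = Ints("x y")
domain = And(x >= 0, y >= 0)
original = x + y <= 1
monadic = Or(And(x == 0, y <= 1), And(x == 1, y == 0))

s = Solver()
s.add(domain, Not(original == monadic))   # search for a disagreement
print("equivalent" if s.check() == unsat else "not equivalent")

Each disjunct constrains x and y separately, which is exactly the monadic shape the semi-decision procedure searches for.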

Margus Veanes, Nikolaj Bjørner, Lev Nachmanson, Sergey Bereg
A DPLL(T) Theory Solver for a Theory of Strings and Regular Expressions

An increasing number of applications in verification and security rely on or could benefit from automatic solvers that can check the satisfiability of constraints over a rich set of data types that includes character strings. Unfortunately, most string solvers today are standalone tools that can reason only about (some fragment of) the theory of strings and regular expressions, sometimes with strong restrictions on the expressiveness of their input language. These solvers are based on reductions to satisfiability problems over other data types, such as bit vectors, or to automata decision problems. We present a set of algebraic techniques for solving constraints over the theory of unbounded strings natively, without reduction to other problems. These techniques can be used to integrate string reasoning into general, multi-theory SMT solvers based on the DPLL(T) architecture. We have implemented them in our SMT solver cvc4 to expand its already large set of built-in theories to a theory of strings with concatenation, length, and membership in regular languages. Our initial experimental results show that, over pure string problems, cvc4 is highly competitive with specialized string solvers with a comparable input language.

Tianyi Liang, Andrew Reynolds, Cesare Tinelli, Clark Barrett, Morgan Deters
Bit-Vector Rewriting with Automatic Rule Generation

Rewriting is essential for efficient bit-vector SMT solving. The rewriting algorithm commonly used by modern SMT solvers iteratively applies a set of ad hoc rewriting rules hard-coded into the solver to simplify the given formula at the preprocessing stage. This paper proposes an automatic approach to rewriting. The solver starts each invocation with an empty set of rewriting rules. The set is extended by applying at run-time an automatic SAT-based algorithm for new rewriting rule generation. The set of rules differs from instance to instance. We implemented our approach in the framework of an algorithm for equivalence and constant propagation, called 0-saturation, which we extended from purely propositional reasoning to bit-vector reasoning. Our approach results in a substantial performance improvement in a state-of-the-art SMT solver over various SMT-LIB families.
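
The essential check behind any generated rewrite rule is an equivalence query, which is cheap over small bit-widths. A hedged sketch (toy candidate rules of our own, using the z3 Python API rather than the solver's internal SAT engine):

# Validating candidate bit-vector rewrite rules by equivalence checking
# (illustrative; the paper generates and checks rules inside the solver).
from z3 import BitVec, Not, Solver, unsat

def rule_is_valid(lhs, rhs):
    """A rewrite rule lhs -> rhs is sound iff lhs == rhs is valid."""
    s = Solver()
    s.add(Not(lhs == rhs))
    return s.check() == unsat

x = BitVec("x", 8)
candidates = [
    (x & x, x),           # sound: idempotence
    (x + x, x << 1),      # sound: doubling is a shift
    (x ^ x, x - x),       # sound: both are zero
    (x | 1, x + 1),       # unsound in general (take x = 1)
]
for lhs, rhs in candidates:
    verdict = "sound" if rule_is_valid(lhs, rhs) else "rejected"
    print(lhs, "->", rhs, ":", verdict)

Rejected candidates are discarded; accepted ones can be applied as simplifications for the remainder of the run.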

Alexander Nadel
A Tale of Two Solvers: Eager and Lazy Approaches to Bit-Vectors

The standard method for deciding bit-vector constraints is via eager reduction to propositional logic. This is usually done after first applying powerful rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot reduce the problem size sufficiently. A lazy solver can target such problems by doing many satisfiability checks, each of which only reasons about a small subset of the problem. In addition, the lazy approach enables a wide range of optimization techniques that are not available to the eager approach. In this paper we describe the architecture and features of our lazy solver (LBV). We provide a comparative analysis of the eager and lazy approaches, and show how they are complementary in terms of the types of problems they can efficiently solve. For this reason, we propose a portfolio approach that runs a lazy and eager solver in parallel. Our empirical evaluation shows that the lazy solver can solve problems none of the eager solvers can and that the portfolio solver outperforms other solvers both in terms of total number of problems solved and the time taken to solve them.

Liana Hadarean, Kshitij Bansal, Dejan Jovanović, Clark Barrett, Cesare Tinelli
AVATAR: The Architecture for First-Order Theorem Provers

This paper describes a new architecture for first-order resolution and superposition theorem provers called AVATAR (Advanced Vampire Architecture for Theories and Resolution). Its original motivation comes from a problem well-studied in the past — dealing with problems having clauses containing propositional variables and other clauses that can be split into components with disjoint sets of variables. Such clauses are common for problems coming from applications, for example in program verification and program analysis, where many ground literals occur in the problems and even more are generated during the proof-search.

This problem was previously studied by adding various versions of splitting. The addition of splitting resulted in some improvements in performance of theorem provers. However, even with various versions of splitting, the performance of superposition theorem provers is nowhere near SMT solvers on variable-free problems or SAT solvers on propositional problems.

This paper describes a new architecture for superposition theorem provers, where a superposition theorem prover is tightly integrated with a SAT or an SMT solver. Its implementation in our theorem prover Vampire resulted in drastic improvements over all previous implementations of splitting. Over four hundred TPTP problems previously unsolvable by any modern prover, including Vampire itself, have been proved, most of them with short runtimes. Nearly all problems solved with one of 481 variants of splitting previously implemented in Vampire can also be solved with AVATAR.

We also believe that AVATAR is an important step towards efficient reasoning with both quantifiers and theories, which is one of the key areas in modern applications of theorem provers in program analysis and verification.

Andrei Voronkov
Automating Separation Logic with Trees and Data

Separation logic (SL) is a widely used formalism for verifying heap manipulating programs. Existing SL solvers focus on decidable fragments for list-like structures. More complex data structures such as trees are typically unsupported in implementations, or handled by incomplete heuristics. While complete decision procedures for reasoning about trees have been proposed, these procedures suffer from high complexity, or make global assumptions about the heap that contradict the separation logic philosophy of local reasoning. In this paper, we present a fragment of classical first-order logic for local reasoning about tree-like data structures. The logic is decidable in NP and the decision procedure allows for combinations with other decidable first-order theories for reasoning about data. Such extensions are essential for proving functional correctness properties. We have implemented our decision procedure and, building on earlier work on translating SL proof obligations into classical logic, integrated it into an SL-based verification tool. We successfully used the tool to verify functional correctness of tree-based data structure implementations.

Ruzica Piskac, Thomas Wies, Damien Zufferey
A Nonlinear Real Arithmetic Fragment

We present a new procedure for testing satisfiability (over the reals) of a conjunction of polynomial equations. There are three possible return values for our procedure: it either returns a model for the input formula, or it says that the input is unsatisfiable, or it fails because the applicability condition for the procedure, called the eigen-condition, is violated. For the class of constraints where the eigen-condition holds, our procedure is a decision procedure. We describe satisfiability-preserving transformations that can potentially convert problems into a form where the eigen-condition holds. We experimentally evaluate the procedure and discuss applicability.

Ashish Tiwari, Patrick Lincoln
Yices 2.2

Yices is an SMT solver developed by SRI International. The first version of Yices was released in 2006 and has been continuously updated since then. In 2007, we started a complete re-implementation of the solver to improve performance and increase modularity and flexibility. We describe the latest release of Yices, namely, Yices 2.2. We present the tool’s architecture and discuss the algorithms it implements, and we describe recent developments such as support for the SMT-LIB 2.0 notation and various performance improvements.

Bruno Dutertre

Bounds and Termination

A Simple and Scalable Static Analysis for Bound Analysis and Amortized Complexity Analysis

We present the first scalable bound analysis that achieves amortized complexity analysis. In contrast to earlier work, our bound analysis is not based on general purpose reasoners such as abstract interpreters, software model checkers or computer algebra tools. Rather, we derive bounds directly from abstract program models, which we obtain from programs by comparatively simple invariant generation and symbolic execution techniques. As a result, we obtain an analysis that is more predictable and more scalable than earlier approaches. We demonstrate by a thorough experimental evaluation that our analysis is fast and at the same time able to compute bounds for challenging loops in a large real-world benchmark. Technically, our approach is based on lossy vector addition systems (VASS). Our bound analysis first computes a lexicographic ranking function that proves the termination of a VASS, and then derives a bound from this ranking function. Our methodology achieves amortized analysis based on a new insight into how lexicographic ranking functions can be used for bound analysis.
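
To make the ranking-function step concrete (a toy check of our own with the z3 Python API, not the paper's VASS machinery): a candidate lexicographic ranking function can be validated by an SMT query stating that every transition decreases the tuple lexicographically while staying bounded below.

# Checking the lexicographic ranking function (x, y) for the loop
#   while x > 0 and y > 0:
#     either  x, y = x - 1, 100   (transition t1)
#     or      y = y - 1           (transition t2)
from z3 import Ints, And, Implies, Not, Solver, unsat

x, y, xp, yp = Ints("x y xp yp")
guard = And(x > 0, y > 0)
t1 = And(xp == x - 1, yp == 100)
t2 = And(xp == x, yp == y - 1)

# t1 strictly decreases the first component; t2 keeps it and strictly
# decreases the second. Both stay nonnegative.
obligation = And(
    Implies(And(guard, t1), And(xp < x, xp >= 0)),
    Implies(And(guard, t2), And(xp == x, yp < y, yp >= 0)),
)

s = Solver()
s.add(Not(obligation))
print("ranking function valid" if s.check() == unsat else "invalid")

From such a ranking function a bound follows in an amortized fashion: t2 fires at most 100 times per t1 step, so the loop runs roughly at most x0 + y0 + 100·x0 iterations from initial values (x0, y0). Automating this style of reasoning is what the paper contributes.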

Moritz Sinn, Florian Zuleger, Helmut Veith
Symbolic Resource Bound Inference for Functional Programs

We present an approach for inferring symbolic resource bounds for purely functional programs consisting of recursive functions, algebraic data types and nonlinear arithmetic operations. In our approach, the developer specifies the desired shape of the bound as a program expression containing numerical holes which we refer to as templates. For example, time ≤ a ∗ height(tree) + b, where a and b are unknowns, is a template that specifies a bound on the execution time. We present a scalable algorithm for computing tight bounds for sequential and parallel execution times by solving for the unknowns in the template. We empirically evaluate our approach on several benchmarks that manipulate complex data structures such as binomial heap, leftist heap, red-black tree and AVL tree. Our implementation is able to infer hard, nonlinear symbolic time bounds for our benchmarks that are beyond the capability of the existing approaches.

Ravichandhran Madhavan, Viktor Kuncak
Proving Non-termination Using Max-SMT

We show how Max-SMT-based invariant generation can be exploited for proving non-termination of programs. The construction of the proof of non-termination is guided by the generation of quasi-invariants, i.e., properties such that if they hold at a location during execution once, then they will continue to hold at that location from then onwards. The check that quasi-invariants can indeed be reached is then performed separately. Our technique considers strongly connected subgraphs of a program’s control flow graph for analysis and thus produces more generic witnesses of non-termination than existing methods. Moreover, it can handle programs with unbounded non-determinism and is more likely to converge than previous approaches.
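
A small illustration of the two obligations behind a quasi-invariant argument for non-termination (our own toy example with the z3 Python API, not the paper's Max-SMT encoding): once x >= 0 holds at the head of "while x >= 0: x = x + 1", it keeps holding and it keeps the loop guard true.

# Quasi-invariant check for non-termination (toy example).
# Candidate quasi-invariant at the loop head: Q(x) := x >= 0.
from z3 import Int, And, Implies, Not, Solver, unsat

x, xp = Int("x"), Int("xp")
Q = x >= 0
guard = x >= 0
step = xp == x + 1

def valid(f):
    s = Solver()
    s.add(Not(f))
    return s.check() == unsat

preserved = Implies(And(Q, guard, step), xp >= 0)  # Q survives one iteration
stays = Implies(Q, guard)                          # Q forces the loop guard

print("quasi-invariant certifies non-termination from any state with x >= 0:",
      valid(preserved) and valid(stays))

The remaining obligation, that some state satisfying the quasi-invariant is actually reachable, is checked separately, as in the paper.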

Daniel Larraz, Kaustubh Nimkar, Albert Oliveras, Enric Rodríguez-Carbonell, Albert Rubio
Termination Analysis by Learning Terminating Programs

We present a novel approach to termination analysis. In a first step, the analysis uses a program as a black-box which exhibits only a finite set of sample traces. Each sample trace is infinite but can be represented by a finite lasso. The analysis can “learn” a program from a termination proof for the lasso, a program that is terminating by construction. In a second step, the analysis checks that the set of sample traces is representative in a sense that we can make formal. An experimental evaluation indicates that the approach is a potentially useful addition to the portfolio of existing approaches to termination analysis.

Matthias Heizmann, Jochen Hoenicke, Andreas Podelski
Causal Termination of Multi-threaded Programs

We present a new model checking procedure for the termination analysis of multi-threaded programs. Current termination provers scale badly in the number of threads; our new approach easily handles 100 threads on multi-threaded benchmarks like Producer-Consumer. In our procedure, we characterize the existence of non-terminating executions as Mazurkiewicz-style concurrent traces and apply causality-based transformation rules to refine them until a contradiction can be shown. The termination proof is organized into a tableau, where the case splits represent a novel type of modular reasoning according to different causal explanations of a hypothetical error. We report on experimental results obtained with a tool implementation of the new procedure, called Arctor, on previously intractable multi-threaded benchmarks.

Andrey Kupriyanov, Bernd Finkbeiner

Abstraction

Counterexample to Induction-Guided Abstraction-Refinement (CTIGAR)

Typical CEGAR-based verification methods refine the abstract domain based on full counterexample traces. The finite state model checking algorithm IC3 introduced the concept of discovering, generalizing from, and thereby eliminating individual state counterexamples to induction (CTIs). This focus on individual states suggests a simpler abstraction-refinement scheme in which refinements are performed relative to single steps of the transition relation, thus reducing the expense of refinement and eliminating the need for full traces. Interestingly, this change in refinement focus leads to a natural spectrum of refinement options, including when to refine and which type of concrete single-step query to refine relative to. Experiments validate that CTI-focused abstraction refinement, or CTIGAR, is competitive with existing CEGAR-based tools.

Johannes Birgmeier, Aaron R. Bradley, Georg Weissenbacher
Unbounded Scalable Verification Based on Approximate Property-Directed Reachability and Datapath Abstraction

This paper introduces the Averroes formal verification system which exploits the power of two complementary approaches: counter-example-guided abstraction and refinement (CEGAR) of the design’s datapath and the recently-introduced IC3 and PDR approximate reachability algorithms. Averroes is particularly suited to the class of hardware designs consisting of wide datapaths and complex control logic, a class that covers a wide spectrum of design styles that range from general-purpose microprocessors to special-purpose embedded controllers and accelerators. In most of these designs, the number of datapath state variables is orders of magnitude larger than the number of control state variables. Thus, for purposes of verifying the correctness of the control logic (where most design errors typically reside), datapath abstraction is particularly effective at pruning away most of a design’s state space leaving a much reduced “control space” that can be efficiently explored by the IC3 and PDR method. Preliminary experimental results on a suite of industrial benchmarks show that Averroes significantly outperforms verification at the bit level. To our knowledge, this is the first empirical demonstration of the possibility of automatic scalable unbounded sequential verification.

Suho Lee, Karem A. Sakallah
QUICr: A Reusable Library for Parametric Abstraction of Sets and Numbers

This paper introduces QUICr, an abstract domain combinator library that lifts any domain for numbers into an efficient domain for numbers and sets of numbers. As a library, it is useful for inferring relational data invariants in programs that manipulate data structures. As a testbed, it allows easy extension of widening and join heuristics, enabling adaptations to new and varied applications. In this paper we present the architecture of the library, guidelines on how to select heuristics, and an example instantiation of the library using the Apron library to verify set-manipulating programs.

Arlen Cox, Bor-Yuh Evan Chang, Sriram Sankaranarayanan
Backmatter
Metadata
Title
Computer Aided Verification
Edited by
Armin Biere
Roderick Bloem
Copyright year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-08867-9
Print ISBN
978-3-319-08866-2
DOI
https://doi.org/10.1007/978-3-319-08867-9