About this Book

This book constitutes the refereed proceedings of the 19th International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2018, held in Los Angeles, CA, USA, in January 2018. The 24 full papers presented, together with the abstracts of 3 invited keynotes and 1 invited tutorial, were carefully reviewed and selected from 43 submissions. VMCAI covers topics including program verification, model checking, abstract interpretation, program synthesis, static analysis, type systems, deductive methods, program certification, decision procedures, theorem proving, debugging techniques, program transformation, optimization, and hybrid and cyber-physical systems.

Table of Contents

Frontmatter

Parameterized Model Checking of Synchronous Distributed Algorithms by Abstraction

Parameterized verification of fault-tolerant distributed algorithms has recently received increasing attention. Most of the existing work considers asynchronous distributed systems (interleaving semantics). However, there is a considerable body of distributed computing literature on synchronous fault-tolerant distributed algorithms: conceptually, all processes proceed in lock-step rounds, that is, synchronized steps of all (correct) processes bring the system into the next state.

We introduce an abstraction technique for such algorithms that reduces their parameterized verification to finite-state model checking of an abstract system. Using the TLC model checker, we automatically verified several algorithms from the literature, some of which had not been automatically verified before. Our benchmarks include fault-tolerant algorithms that solve atomic commitment, 2-set agreement, and consensus.

Benjamin Aminof, Sasha Rubin, Ilina Stoilkovska, Josef Widder, Florian Zuleger

Gradual Program Verification

Both static and dynamic program verification approaches have significant disadvantages when considered in isolation. Inspired by research on gradual typing, we propose gradual verification to seamlessly and flexibly combine static and dynamic verification. Drawing on general principles from abstract interpretation, and in particular on the recent Abstracting Gradual Typing methodology of Garcia et al., we systematically derive a gradual verification system from a static one. This approach yields, by construction, a gradual verification system that is compatible with the original static system, but overcomes its rigidity by resorting to dynamic verification when desired. As with gradual typing, the programmer can control the trade-off between static and dynamic checking by tuning the (im)precision of pre- and postconditions. The formal semantics of the gradual verification system and the proofs of its properties, including the gradual guarantees of Siek et al., have been fully mechanized in the Coq proof assistant.
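
To illustrate the static/dynamic trade-off the abstract describes, here is a minimal, hypothetical sketch (in Python, not the paper's formal language): an imprecise contract defers part of a specification to a runtime check. The decorator and predicate are our own illustration, not the paper's system.

    import functools

    def requires(pred, imprecise=False):
        # 'imprecise=True' models a gradual precondition '? and pred': the
        # static checker accepts the call optimistically, and the residual
        # obligation is discharged by a dynamic check instead.
        def wrap(f):
            @functools.wraps(f)
            def checked(*args):
                if imprecise:
                    assert pred(*args), "dynamic verification failure"
                return f(*args)
            return checked
        return wrap

    @requires(lambda x: x >= 0, imprecise=True)   # gradual: '? and x >= 0'
    def isqrt(x):
        return int(x ** 0.5)

    isqrt(16)   # statically accepted; residual precondition checked at run time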

Johannes Bader, Jonathan Aldrich, Éric Tanter

Automatic Verification of RMA Programs via Abstraction Extrapolation

Remote Memory Access (RMA) networks are emerging as a promising basis for building performant large-scale systems such as MapReduce, scientific computing applications, and others. To achieve this performance, RMA networks exhibit relaxed memory consistency. This means that the developer must now manually ensure that the additional relaxed behaviors are not harmful to their application – a task known to be difficult and error-prone. In this paper, we present a method and a system that can automatically address this task. Our approach consists of two ingredients: (i) a reduction of the task of verifying a program P running on RMA to the problem of verifying a program $$\overline{P}$$ on sequential consistency (where $$\overline{P}$$ captures the required RMA behaviors), and (ii) abstraction extrapolation, a new method to automatically discover both predicates (via predicate extrapolation) and abstract transformers (via boolean program extrapolation) for $$\overline{P}$$. This enables us to automatically extrapolate the proof of P under sequential consistency (SC) to a proof of P under RMA. We implemented our method and showed it to be effective in automatically verifying, for the first time, several challenging concurrent algorithms under RMA.

Cedric Baumann, Andrei Marian Dan, Yuri Meshman, Torsten Hoefler, Martin Vechev

Scalable Approximation of Quantitative Information Flow in Programs

Quantitative information flow measurement techniques have proven successful in detecting leakage of confidential information from programs. Modern approaches are based on formal methods, relying on program analysis to produce a SAT formula representing the program's behavior, and on model counting to measure the possible information flow. However, while program analysis scales to large codebases like the OpenSSL project, the formulas produced are too complex for analysis with precise model counting. In this paper we use the approximate model counter ApproxMC2 to quantify information flow. We show that ApproxMC2 is able to provide a large performance increase for a very small loss of precision, allowing the analysis of SAT formulas produced from complex code. We call the resulting technique ApproxFlow and test it on a large set of benchmarks against the state of the art. Finally, we show that ApproxFlow can evaluate the leakage incurred by the Heartbleed OpenSSL bug, which the state of the art cannot.
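
For background on the measurement itself (this is the standard model-counting formulation of min-entropy leakage; the paper's exact measure may differ): for a deterministic program $$P$$ with a uniformly distributed secret $$s$$, the leakage in bits is $$\mathcal{L}(P) = \log_2 |\{\, o \mid \exists s.\ P(s) = o \,\}|$$, so measuring it reduces to counting the models of a SAT formula. An approximate counter such as ApproxMC trades exactness for a probabilistic guarantee: given a tolerance $$\varepsilon$$ and a confidence parameter $$\delta$$, its estimate $$c$$ of the true count $$|F|$$ satisfies $$\Pr[\,|F|/(1+\varepsilon) \le c \le (1+\varepsilon)|F|\,] \ge 1-\delta$$.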

Fabrizio Biondi, Michael A. Enescu, Annelie Heuser, Axel Legay, Kuldeep S. Meel, Jean Quilbeuf

Code Obfuscation Against Abstract Model Checking Attacks

Code protection technologies require anti-reverse-engineering transformations to obfuscate programs in such a way that tools and methods for program analysis become ineffective. We introduce the concept of model deformation, which induces an effective code obfuscation against attacks performed by abstract model checking. The idea is to complicate the model in such a way that a high number of spurious traces is generated in any formal verification of the property the attacker aims to disclose about the system. We transform the program model in order to make the removal of spurious counterexamples by abstraction refinement maximally inefficient. A measure of the quality of the obfuscation obtained by model deformation is given, together with a corresponding best obfuscation strategy for abstract model checking based on partition refinement.

Roberto Bruni, Roberto Giacobazzi, Roberta Gori

Abstract Code Injection

A Semantic Approach Based on Abstract Non-Interference

Code injection attacks have been the most critical security risks for almost a decade. These attacks are due to an interference between an untrusted input (potentially controlled by an attacker) and the execution of a string-to-code statement, which interprets its parameter as code. In this paper, we provide a semantic-based model for code injection that is parametric on what the programmer considers safe behaviors. In particular, we provide a general (abstract) non-interference-based framework for abstract code injection policies, i.e., policies characterizing safety against code injection w.r.t. a given specification of safe behaviors. We expect this new semantic perspective on code injection to provide deeper insight into the very nature of this security threat. Moreover, we devise a mechanism for enforcing (abstract) code injection policies that soundly detects attacks, i.e., avoids false negatives.
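
A minimal, hypothetical Python illustration of the interference the abstract describes (the paper's framework itself is language-independent): untrusted input reaches a string-to-code statement, so the attacker controls code rather than data.

    def calc(expr):
        # 'expr' is untrusted input; 'eval' is the string-to-code statement
        # that interprets its parameter as code.
        return eval(expr)

    calc("2 + 2")                        # intended behavior: arithmetic on data
    calc("__import__('os').getcwd()")    # injection: the input interferes with
                                         # execution and runs attacker-chosen code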

Samuele Buro, Isabella Mastroeni

A Framework for Computer-Aided Design of Educational Domain Models

Many educational applications, from tutoring to problem generation, are built on a formal model of the operational knowledge for a given domain. These domain models consist of rewrite rules that experts apply to solve problems in the domain; e.g., factoring, $$ax + bx \rightarrow (a+b)x$$, is one such rule for K-12 algebra. Domain models currently take hundreds of hours to create, and they differ widely in how well they meet educational objectives such as maximizing problem-solving efficiency. Rapid, objective-driven creation of domain models is a key challenge in the development of personalized educational tools.

This paper presents RuleSy, a new framework for computer-aided authoring of domain models for educational applications. RuleSy takes as input a set of example problems (e.g., $$x + 1 = 2$$), a set of basic axiom rules for solving these problems (e.g., factoring), and a function expressing the desired educational objective. Given these inputs, it first synthesizes a set of sound tactic rules (e.g., combining like terms) that integrate multiple axioms into advanced problem-solving strategies. The axioms and tactics are then searched for a domain model that optimizes the objective. RuleSy is based on new algorithms for mining tactic specifications from examples and axioms, synthesizing tactic rules from these specifications, and selecting an optimal domain model from the axioms and tactics.

We evaluate RuleSy on the domain of K-12 algebra, finding that it recovers textbook tactics and domain models, discovers new tactics and models, and outperforms a prior tool for this domain by orders of magnitude. But RuleSy generalizes beyond K-12 algebra: we also use it to (re)discover proof tactics for propositional logic, demonstrating its potential to aid in designing models for a variety of educational domains.
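
To make the rewrite-rule view of a domain model concrete, here is a small sketch of our own (terms as nested Python tuples; RuleSy's actual representation differs): the factoring axiom from the abstract, applied to one term.

    def factor(term):
        # ax + bx -> (a+b)x, on terms of the form ('+', ('*', a, x), ('*', b, x))
        if term[0] == '+' and term[1][0] == '*' and term[2][0] == '*' \
                and term[1][2] == term[2][2]:
            (_, (_, a, x), (_, b, _)) = term
            return ('*', ('+', a, b), x)
        return None   # rule does not match

    print(factor(('+', ('*', 3, 'x'), ('*', 4, 'x'))))   # ('*', ('+', 3, 4), 'x')

A tactic, in these terms, would be a single rule whose effect equals several axiom applications in sequence, which is what RuleSy synthesizes from examples.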

Eric Butler, Emina Torlak, Zoran Popović

Automatic Verification of Intermittent Systems

Transiently powered devices have given rise to a new model of computation called intermittent computation. Intermittent programs periodically checkpoint the program state to a persistent memory, and on power failures, the programs resume from the last executed checkpoint. An intermittent program is usually generated automatically by instrumenting a given continuous (i.e., continuously powered) program. The behaviour of the continuous program should be equivalent to that of the intermittent program under all possible power failures.

This paper presents a technique to automatically verify the correctness of an intermittent program with respect to its continuous counterpart. We present a model of intermittence to capture all possible scenarios of power failures and an algorithm to automatically find a proof of equivalence between a continuous and an intermittent program.
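
A minimal sketch of this intermittence model, under our own simplifying assumptions (Python; a dictionary stands in for persistent memory, and a random coin models power failures):

    import random

    persistent = {'i': 0, 'acc': 0}        # survives power failures

    def checkpoint(i, acc):
        persistent['i'], persistent['acc'] = i, acc   # persist the live state

    def intermittent_sum(n):
        i, acc = persistent['i'], persistent['acc']
        while i < n:
            if random.random() < 0.05:     # power failure: volatile state is
                i, acc = persistent['i'], persistent['acc']   # lost; resume
                continue                   # from the last checkpoint
            acc += i
            i += 1
            checkpoint(i, acc)             # instrumentation point
        return acc

    # equivalence with the continuous program, for any failure pattern
    assert intermittent_sum(10) == sum(range(10))

Verification amounts to showing such an equivalence for every placement of failures, which is what the paper's algorithm automates.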

Manjeet Dahiya, Sorav Bansal

On Abstraction and Compositionality for Weak-Memory Linearisability

Linearisability is the de facto standard correctness condition for concurrent objects. Classical linearisability assumes that the effect of a method is captured entirely by the allowed sequences of calls and returns. This assumption is inadequate in the presence of relaxed memory models, where happens-before relations are also of importance.

In this paper, we develop hb-linearisability for relaxed memory models by extending the classical notion with happens-before information. We consider two variants: real-time hb-linearisability, which adopts the classical view that time runs on a single global clock, and causal hb-linearisability, which eschews real time and is appropriate for systems without a global clock. For both variants, we prove abstraction (so that programmers can reason about a client program using the sequential specification of an object rather than its more complex concurrent implementation) and composition (so that reasoning about independent objects can be conducted in isolation).

Brijesh Dongol, Radha Jagadeesan, James Riely, Alasdair Armstrong

From Shapes to Amortized Complexity

We propose a new method for the automated resource bound analysis of programs manipulating dynamic data structures, built on top of an underlying shape and resource bound analysis. Our approach first constructs an integer abstraction of the input program using information gathered by a shape analyser; then a resource bound analyser is run on the resulting integer program. The integer abstraction is based on shape norms, i.e., numerical measures on dynamic data structures (e.g., the length of a linked list). In comparison to related approaches, we consider a larger class of shape norms, which we derive by a lightweight program analysis. The analysis identifies paths through the involved dynamic data structures and filters out the norms which are unlikely to be useful for the later bound analysis. We present a calculus for deriving the numeric changes of the shape norms, thereby generating the integer program. Our calculus encapsulates the minimal information required from the shape analysis.

We have implemented our approach on top of the Forester shape analyser and evaluated it on a number of programs manipulating various list and tree structures, using the Loopus tool as the underlying bound analyser. We report on programs with complex data structures and/or complex algorithms that could not be analysed in a fully automated and precise way before.
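
To illustrate the norm-based integer abstraction described above (our own toy example, not Forester/Loopus syntax): the length of the list is the shape norm, and the heap step translates into a decrement in the integer program.

    class Node:
        def __init__(self, nxt=None):
            self.next = nxt

    def concrete(x):              # concrete loop over a linked list
        while x is not None:
            x = x.next            # heap step

    def abstracted(len_x):        # integer program over the norm |x ->* null|
        while len_x > 0:
            len_x -= 1            # numeric effect of 'x = x.next'

    concrete(Node(Node(Node())))  # both run exactly 3 loop iterations
    abstracted(3)

A bound analyser then derives the resource bound (here, linear in the list length) from the integer program alone.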

Tomáš Fiedor, Lukáš Holík, Adam Rogalewicz, Moritz Sinn, Tomáš Vojnar, Florian Zuleger

Invariant Generation for Multi-Path Loops with Polynomial Assignments

Program analysis requires the generation of program properties expressing conditions to hold at intermediate program locations. When it comes to programs with loops, these properties are typically expressed as loop invariants. In this paper we study a class of multi-path program loops with numeric variables, in particular nested loops with conditionals, where assignments to program variables are polynomial expressions over program variables. We call this class of loops extended P-solvable and introduce an algorithm for generating all polynomial invariants of such loops. By an iterative procedure employing Gröbner basis computation, our approach computes the polynomial ideal of the polynomial invariants of each program path and combines these ideals sequentially until a fixed point is reached. This fixed point represents the polynomial ideal of all polynomial invariants of the given extended P-solvable loop. We prove termination of our method and show that the maximal number of iterations for reaching the fixed point depends linearly on the number of program variables and the number of inner loops. In particular, for a loop with m program variables and r conditional branches we prove an upper bound of $$m\cdot r$$ iterations. We implemented our approach in the Aligator software package. Furthermore, we evaluated it on 18 programs with polynomial arithmetic and compared it to existing methods in invariant generation. The results show the efficiency of our approach.
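
As a small worked instance of the invariants targeted here (our own example, not one of the paper's 18 benchmarks): for the single-path loop that repeatedly executes $$x := x + 1;\; y := y + x$$ from $$x = y = 0$$, after $$k$$ iterations we have $$x = k$$ and $$y = k(k+1)/2$$, so every reachable state satisfies $$2y - x^2 - x = 0$$. This polynomial generates the ideal of all polynomial invariants of the loop, which is exactly the kind of fixed point the Gröbner basis computation produces.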

Andreas Humenberger, Maximilian Jaroschek, Laura Kovács

Analyzing Guarded Protocols: Better Cutoffs, More Systems, More Expressivity

We study cutoff results for parameterized verification and synthesis of guarded protocols, as introduced by Emerson and Kahlon (2000). Guarded protocols describe systems of processes whose transitions are enabled or disabled depending on the existence of other processes in certain local states. Cutoff results reduce reasoning about systems with an arbitrary number of processes to systems of a determined, fixed size. Our work is based on the observation that existing cutoff results for guarded protocols are often impractical, since they scale linearly in the number of local states of processes in the system. We provide new cutoffs that scale not with the number of local states, but with the number of guards in the system, which is in many cases much smaller. Furthermore, we consider generalizations of the type of guards and of the specifications under consideration, and present results for problems that have not been known to admit cutoffs before.

Swen Jacobs, Mouhammad Sakr

Refinement Types for Ruby

Refinement types are a popular way to specify and reason about key program properties. In this paper, we introduce RTR, a new system that adds refinement types to Ruby. RTR is built on top of RDL, a Ruby type checker that provides basic type information for the verification process. RTR works by encoding its verification problems into Rosette, a solver-aided host language. RTR handles mixins through assume-guarantee reasoning and uses just-in-time verification for metaprogramming. We formalize RTR by showing a translation from a core, Ruby-like language with refinement types into Rosette. We apply RTR to check a range of functional correctness properties on six Ruby programs. We find that RTR can successfully verify key methods in these programs, taking only a few minutes to perform verification.

Milod Kazerounian, Niki Vazou, Austin Bourgerie, Jeffrey S. Foster, Emina Torlak

Modular Analysis of Executables Using On-Demand Heyting Completion

A function-modular analysis is presented that computes precise function summaries in the presence of pointers and indirect calls. Our approach computes several summaries for a function, each specialized to a particular input property. A call site combines the effect of several summaries, based on which properties hold. The key novelty is that the properties are tailored to the function being analyzed. Moreover, they are represented in a domain-agnostic way by using Herbrand terms with variables. Callers instantiate these variables based on their state. For each variable instantiation, a new summary is computed. Since the computed summaries are exact with respect to the property, our fixpoint computation resembles the process of Heyting completion, where a domain is iteratively refined to be complete with respect to the intersection with a property. Our approach combines the advantages of a modular analysis, such as scalability and context-sensitivity, with the ability to compute meaningful summaries for functions that call other functions via pointers that were passed as arguments. We illustrate our framework in the context of inferring indirect callees in x86 executables.

Julian Kranz, Axel Simon

Learning to Complement Büchi Automata

Complementing Büchi automata is an intriguing and intensively studied problem. Complementation suffers from a theoretical super-exponential complexity. From an applied point of view, however, there is no reason to assume that the target language is more complex than the source language: the chance that the smallest representation of a complement language is (much) smaller or (much) larger than the representation of its source should be the same; after all, complementing twice is an empty operation. With this insight, we study the use of learning for complementation. We use a recent learning approach for FDFAs (families of DFAs), which can be used to represent $$\omega $$-regular languages, as a basis for our complementation technique. As a surprising result, it has proven beneficial not to learn an FDFA that represents the complement language of a Büchi automaton (or the language itself, as complementing FDFAs is cheap), but to use it as an intermediate construction in the learning cycle. While the FDFA is refined in every step, the target is an associated Büchi automaton that underestimates the language of a conjecture FDFA. We have implemented our approach and compared it on benchmarks against the algorithms provided in GOAL. The complement automata we produce for large Büchi automata are generally smaller, which makes them more valuable for applications like model checking. Our approach has also been faster in 98% of the cases. Finally, we compare the advantages gained by the novel techniques with those provided by the high-level optimisations implemented in the state-of-the-art tool SPOT.

Yong Li, Andrea Turrini, Lijun Zhang, Sven Schewe

$$\mathrm{P}^5$$: Planner-less Proofs of Probabilistic Parameterized Protocols

Liveness of many probabilistic parameterized protocols is proven by first crafting a family of sequences of "good" random draws, thus, in effect, "de-probabilizing" the system, and then proving the system just as one would for a non-probabilistic parameterized system. The family of "good" random draws (known by different names, such as "planner" and "strategy") is often an intricate piece of machinery, arising from the need to reason about a parameterized Markov Decision Process (MDP). In effect, it represents a parameterized strategy for an infinite game played between a probabilistic player and a non-deterministic adversary.

We present a novel approach to the problem that avoids the need to de-probabilize the system. First, we abstract the parameterized MDP to a finite MDP. The probabilistic choices of this abstraction are drawn not from an independent identically distributed random variable, but instead from a parameterized Markov chain. That is, the distribution of the random variable at any time depends on its history and also on the system's parameters. Then, we prove properties about infinite behaviors of the Markov chain and transfer these to the finite MDP. At this point, the proof can be completed by ordinary finite-state model checking. By using abstraction to separate parameterization from nondeterminism, we eliminate the parameterized game and avoid the need for a planner.

Lenore D. Zuck, Kenneth L. McMillan, Jordan Torf

Co-Design and Verification of an Available File System

Distributed file systems play a vital role in large-scale enterprise services. However, the designer of a distributed file system faces a vexing choice between strong consistency and asynchronous replication. The former supports a standard sequential model by synchronising operations, but is slow and fragile. The latter is highly available and responsive, but exposes users to concurrency anomalies. In this paper, we describe a rigorous and general approach to navigating this trade-off by leveraging static verification tools that allow us to verify different file system designs. We show that common file system operations can run concurrently without synchronisation, while still retaining a semantics reasonably similar to the POSIX hierarchical structure. The one exception is the $$\mathsf {move}$$ operation, for which we prove that, unless synchronised, it exhibits anomalous behaviour.
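
A tiny sketch of the kind of $$\mathsf {move}$$ anomaly at stake (our own illustration, not the paper's proof): two replicas concurrently perform moves that are each locally fine, but applying both effects yields a cycle detached from the root.

    tree = {'a': 'root', 'b': 'root'}   # child -> parent map

    op1 = ('a', 'b')   # replica 1: move a under b
    op2 = ('b', 'a')   # replica 2: move b under a

    # Without synchronisation, each replica applies its local op and then the
    # remote one; both converge to parent(a) = b and parent(b) = a, i.e. a
    # cycle disconnected from 'root'.
    for child, parent in (op1, op2):
        tree[child] = parent
    print(tree)        # {'a': 'b', 'b': 'a'}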

Mahsa Najafzadeh, Marc Shapiro, Patrick Eugster

Abstraction-Based Interaction Model for Synthesis

Program synthesis is the problem of computing from a specification a program that implements it. New and popular variations on the synthesis problem accept specifications in formats that are easier for the human synthesis user to provide: input-output example pairs, type information, and partial logical specifications. These are all partial specification formats, encoding only a fraction of the expected behavior of the program and leaving many matching programs. This transition to partial specifications also changes the mode of work for the user, who now provides additional specifications as long as they are unhappy with the result. This turns synthesis into an iterative, interactive process.

We present a formal model for interactive synthesis, leveraging an abstract domain of predicates on programs in order to describe the iterative refinement of the specifications and the reduction of the candidate program space. We use this model to describe the behavior of several real-world synthesizers. Additionally, we present two conditions for termination of a synthesis session, one hinging only on the properties of the available partial specifications, and the other also on the behavior of the user. Finally, we show conditions for realizability of the user's intent, and show the limitations of backtracking when it is apparent a session will fail.

Hila Peleg, Shachar Itzhaky, Sharon Shoham

Generating Tests by Example

Property-based testing is a technique combining parametric tests with value generators to create an efficient and maintainable way to test general specifications. To test the program, property-based testing randomly generates a large number of inputs defined by the generator to check whether the test assertions hold.

We present a novel framework that synthesizes property-based tests from existing unit tests. Projects often have a suite of unit tests that have been collected over time, some of them checking specific and subtle cases. Our approach leverages existing unit tests to learn property-based tests that can be used to increase value coverage by orders of magnitude. Further, we show that our approach: (i) preserves the subtleties of the original test suite; and (ii) produces properties that cover a greater range of inputs than those in the example set.

The main idea is to use abstractions to over-approximate the concrete values of tests with similar structure. These abstractions are then used to produce appropriate value generators that can drive the synthesized property-based test.

We present JARVIS, a tool that synthesizes property-based tests from unit tests, while preserving the subtleties of the original unit tests. We evaluate JARVIS on tests from Apache projects, and show that it preserves these interesting tests while increasing value coverage by orders of magnitude.
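
To illustrate the unit-test-to-property step in spirit (a hypothetical Python sketch using the hypothesis library; JARVIS itself works on existing project test suites and learns its generators from them):

    from hypothesis import given, strategies as st

    def test_unit():                     # original example-based tests
        assert sorted([3, 1, 2]) == [1, 2, 3]
        assert sorted([]) == []

    @given(st.lists(st.integers()))      # learned value generator abstracting
    def test_property(xs):               # the concrete inputs above
        assert sorted(sorted(xs)) == sorted(xs)   # idempotence
        assert len(sorted(xs)) == len(xs)         # output is a permutation

The generator covers vastly more values than the two concrete examples, while the assertions preserve what the original tests checked.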

Hila Peleg, Dan Rasin, Eran Yahav

A Logical System for Modular Information Flow Verification

Information Flow Control (IFC) is important to ensure secure programs in which secret data does not influence any public data. The pervasive standard at which IFC aims is non-interference. Current IFC systems are divided into dynamic IFC, static IFC, and hybrids of the two. Because dynamic IFC suffers from high overhead and a limited ability to prevent implicit flows due to the paths not taken, we propose a novel modular static IFC system. To the best of our knowledge, this is the first modular static IFC system. Unlike type-based static IFC systems, ours is logic-based. The limitation of type-based IFC systems lies in the inviolability of static security label declarations for fields; as such, they suffer from transient leaks on fields. Our proposed system uses a Hoare-like logic and verifies each function independently with the help of separation logic. Furthermore, we provide a proof of correctness for our novel IFC system with respect to termination- and timing-insensitive non-interference.
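
For concreteness, here are the two classic flows an IFC system must reject (a hypothetical Python illustration; the paper's logic targets a core language with fields):

    def explicit(secret):
        public = secret          # explicit flow: secret assigned to public
        return public

    def implicit(secret):
        public = 0
        if secret > 0:           # branch condition depends on the secret
            public = 1           # leak through the path taken...
        return public            # ...and through the path NOT taken, which is
                                 # why purely dynamic monitoring struggles here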

Adi Prabawa, Mahmudul Faisal Al Ameen, Benedict Lee, Wei-Ngan Chin

On Constructivity of Galois Connections

Abstract interpretation-based static analyses rely on abstract domains of program properties, such as intervals or congruences for integer variables. Galois connections (GCs) between posets provide the most widespread and useful formal tool for mathematically specifying abstract domains. Darais and Van Horn [2016] put forward a notion of constructive Galois connection for unordered sets (rather than posets), which makes it possible to define abstract domains in a so-called mechanized and calculational proof style and therefore enables the use of proof assistants like Coq and Agda for automatically extracting certified static analysis algorithms. We show here that constructive GCs are isomorphic, in a mathematical sense which includes sound abstract functions, to so-called partitioning GCs, an already known class of GCs which allows one to cast standard set partitions as an abstract domain. Darais and Van Horn [2016] further provide a notion of constructive Galois connection for posets, which we prove to be mathematically isomorphic to plain GCs. Drawing on these findings, we put forward purely partitioning GCs, a novel class of constructive abstract domains for a mechanized approach to abstract interpretation. We show that this class of abstract domains allows us to represent a set partition in a flexible way while retaining a constructive approach to Galois connections.
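
A standard reference example of such a GC (not specific to this paper): sets of integers are abstracted to intervals by $$\alpha(S) = [\min S, \max S]$$, with concretization $$\gamma([l,u]) = \{x \mid l \le x \le u\}$$, and the two maps form a Galois connection because they satisfy the adjunction $$\alpha(S) \sqsubseteq [l,u] \iff S \subseteq \gamma([l,u])$$, which is precisely the soundness condition that any specification of an abstract domain must establish.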

Francesco Ranzato

Revisiting MITL to Fix Decision Procedures

Metric Interval Temporal Logic (MITL) is a well-studied real-time temporal logic that has decidable satisfiability and model checking problems. The decision procedures for MITL rely on the automata-theoretic approach, where logic formulas are translated into equivalent timed automata. Since timed automata are not closed under complementation, decision procedures for MITL first convert a formula into negated normal form before translating it to a timed automaton. We show that, unfortunately, these 20-year-old procedures are incorrect, because they rely on an incorrect semantics of the $$\mathcal {R}$$ operator. We present the right semantics of $$\mathcal {R}$$ and give new, correct decision procedures for MITL. We show that both satisfiability and model checking for MITL are EXPSPACE-complete, as was previously claimed. We also identify a fragment of MITL that we call $$\texttt {MITL}_{WI}$$ that is richer than $$\texttt {MITL}_{0,\!\infty }$$, for which we show that both satisfiability and model checking are PSPACE-complete. Many of our results have been formally proved in PVS.
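
For orientation (this is the standard untimed definition, not the corrected timed semantics the paper develops): in LTL, release is the dual of until, $$\varphi \, \mathcal{R} \, \psi \equiv \lnot(\lnot\varphi \, \mathcal{U} \, \lnot\psi)$$, meaning $$\psi$$ must hold up to and including the first position where $$\varphi$$ holds, or forever if $$\varphi$$ never does. The subtlety the paper addresses is how the timing interval of $$\mathcal{R}_I$$ interacts with this duality.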

Nima Roohi, Mahesh Viswanathan

Selfless Interpolation for Infinite-State Model Checking

We present a new method for interpolation in satisfiability modulo theories (SMT) that is aimed at applications in model checking and invariant inference. The new method allows us to control the finite convergence of interpolant sequences and, at the same time, provides expressive invariant-driven interpolants. It is based on a novel integration of model-driven quantifier elimination and abstract interpretation into existing SMT frameworks for interpolation. We have integrated the new approach into the Sally model checker, and we include an experimental evaluation showing its effectiveness.

Tanja Schindler, Dejan Jovanović

An Abstract Interpretation Framework for the Round-Off Error Analysis of Floating-Point Programs

This paper presents an abstract interpretation framework for the round-off error analysis of floating-point programs. This framework defines a parametric abstract analysis that computes, for each combination of ideal and floating-point execution path of the program, a sound over-approximation of the accumulated floating-point round-off error that may occur. In addition, a Boolean expression that characterizes the input values leading to the computed error approximation is also computed. An abstraction on the control flow of the program is proposed to mitigate the explosion of the number of elements generated by the analysis. Additionally, a widening operator is defined to ensure the convergence of the analysis on recursive functions and loops. An instantiation of this framework is implemented in the prototype tool PRECiSA, which generates formal proof certificates stating the correctness of the computed round-off errors.
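
For background, the standard rounding model underlying such analyses (our summary; the paper's abstract domain is more detailed): every floating-point operation incurs a bounded relative error, $$fl(x \circ y) = (x \circ y)(1 + \delta)$$ with $$|\delta| \le u$$, where the unit roundoff $$u$$ equals $$2^{-53}$$ for IEEE 754 binary64. A sound analysis accumulates such $$\delta$$-terms along each pair of ideal and floating-point execution paths, which is exactly the per-path over-approximation described above.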

Laura Titolo, Marco A. Feliú, Mariano Moscato, César A. Muñoz

Backmatter
