
2017 | Book

Computer Aided Verification

29th International Conference, CAV 2017, Heidelberg, Germany, July 24-28, 2017, Proceedings, Part I


About this book

The two-volume set LNCS 10426 and LNCS 10427 constitutes the refereed proceedings of the 29th International Conference on Computer Aided Verification, CAV 2017, held in Heidelberg, Germany, in July 2017.

The 50 full papers and 7 short papers presented in the proceedings, together with 5 keynotes and tutorials, were carefully reviewed and selected from 191 submissions.

The CAV conference series is dedicated to the advancement of the theory and practice of computer-aided formal analysis of hardware and software systems. The conference covers the spectrum from theoretical results to concrete applications, with an emphasis on practical verification tools and the algorithms and techniques that are needed for their implementation.

Table of Contents

Frontmatter

Invited Contributions

Frontmatter
Safety Verification of Deep Neural Networks
Abstract
Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.
Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu
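The paper's framework discretises a small region around the input and propagates the analysis layer by layer. The toy sketch below captures only the core SMT question — can any perturbation within the region flip the classification? — for a made-up one-layer ReLU network using the Z3 Python bindings (z3-solver); the weights, input, and radius are illustrative values, not taken from the paper.

```python
# Hedged sketch: search a small neighbourhood of an input for an
# adversarial example of a toy one-layer ReLU network with Z3.
# W, b, x0, and eps are made-up illustrative values.
from z3 import Solver, Real, If, And, sat

W = [[1.0, -2.0], [-1.5, 0.5]]   # hypothetical weights
b = [0.1, -0.2]                  # hypothetical biases
x0 = [0.6, 0.1]                  # original input, classified as class 0
eps = 0.05                       # neighbourhood radius (L-infinity)

d = [Real(f"d{i}") for i in range(2)]        # perturbation variables
x = [x0[i] + d[i] for i in range(2)]

relu = lambda t: If(t > 0, t, 0)
logits = [relu(sum(W[j][i] * x[i] for i in range(2)) + b[j])
          for j in range(2)]

s = Solver()
s.add(And(*[And(di >= -eps, di <= eps) for di in d]))  # stay in the region
s.add(logits[1] > logits[0])                           # class flips to 1
if s.check() == sat:
    print("adversarial example:", s.model())           # show to a human tester
else:
    print("classification invariant on this region")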
Program Verification Under Weak Memory Consistency Using Separation Logic
Abstract
The semantics of concurrent programs is now defined by a weak memory model, determined either by the programming language (e.g., in the case of C/C++11 or Java) or by the hardware architecture (e.g., for assembly and legacy C code). Since most work in concurrent software verification has been developed prior to weak memory consistency, it is natural to ask how these models affect formal reasoning about concurrent programs.
In this overview paper, we show that verification is indeed affected: for example, the standard Owicki-Gries method is unsound under weak memory. Further, based on concurrent separation logic, we develop a number of sound program logics for fragments of the C/C++11 memory model. We show that these logics are useful not only for verifying concurrent programs, but also for explaining the weak memory constructs of C/C++.
Viktor Vafeiadis
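As a concrete illustration of why reasoning changes under weak memory, the sketch below (an invented toy, not from the paper) enumerates outcomes of the classic store-buffering litmus test: under sequential consistency the outcome r0 = r1 = 0 is impossible, but once each thread may execute its load ahead of its own buffered store — as on TSO hardware or with C11 relaxed atomics — it becomes observable.

```python
# Illustrative toy: the store-buffering litmus test.
#   Thread 1: x := 1; r0 := y        Thread 2: y := 1; r1 := x
# Under SC, (r0, r1) = (0, 0) is impossible; under a TSO-like model
# that lets a load overtake the thread's own earlier store, it occurs.
from itertools import combinations

def interleavings(a, b):
    # all merges of two instruction sequences, preserving program order
    n, m = len(a), len(b)
    for idx in combinations(range(n + m), n):
        out, ai, bi = [], 0, 0
        for i in range(n + m):
            if i in idx:
                out.append(a[ai]); ai += 1
            else:
                out.append(b[bi]); bi += 1
        yield out

def run(trace):
    mem, regs = {"x": 0, "y": 0}, {}
    for op in trace:
        if op[0] == "store":
            mem[op[1]] = 1
        else:                      # ("load", variable, register)
            regs[op[2]] = mem[op[1]]
    return (regs["r0"], regs["r1"])

T1 = [("store", "x"), ("load", "y", "r0")]
T2 = [("store", "y"), ("load", "x", "r1")]

sc = {run(t) for t in interleavings(T1, T2)}
tso = set(sc)
for u in (T1, T1[::-1]):                 # a thread may reorder its load
    for v in (T2, T2[::-1]):             # before its own buffered store
        tso |= {run(t) for t in interleavings(u, v)}

print("SC :", sorted(sc))    # (0, 0) absent
print("TSO:", sorted(tso))   # (0, 0) present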
The Power of Symbolic Automata and Transducers
Abstract
Symbolic automata and transducers extend finite automata and transducers by allowing transitions to carry predicates and functions over rich alphabet theories, such as linear arithmetic. Therefore, these models extend their classic counterparts to operate over infinite alphabets, such as the set of rational numbers. Due to their expressiveness, symbolic automata and transducers have been used to verify functional programs operating over lists and trees, to prove the correctness of complex implementations of BASE64 and UTF encoders, and to expose data parallelism in computations that may otherwise seem inherently sequential. In this paper, we give an overview of what is currently known about symbolic automata and transducers as well as their variants. We discuss what makes these models different from their finite-alphabet counterparts, what kind of applications symbolic models can enable, and what challenges arise when reasoning about these formalisms. Finally, we present a list of open problems and research directions that relate to both the theory and practice of symbolic automata and transducers.
Loris D’Antoni, Margus Veanes
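To make the model concrete, here is a minimal illustrative symbolic automaton in Python whose transitions carry predicates over an infinite alphabet (the integers); the language of sign-alternating sequences is an invented example, not one from the paper.

```python
# Minimal symbolic finite automaton: transitions carry predicates over
# an infinite alphabet (ints here) instead of concrete letters.
neg = lambda c: c < 0
nonneg = lambda c: c >= 0

# (source, predicate, target); accepts sign-alternating sequences
delta = [(0, neg, 1), (0, nonneg, 2), (1, nonneg, 2), (2, neg, 1)]
accepting = {0, 1, 2}

def accepts(word, state=0):
    for c in word:
        for src, phi, dst in delta:
            if src == state and phi(c):
                state = dst
                break
        else:
            return False        # no enabled transition: reject
    return state in accepting

print(accepts([-3, 7, -1]))     # True: signs alternate
print(accepts([2, 5]))          # False: two non-negatives in a row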
Maximum Satisfiability in Software Analysis: Applications and Techniques
Abstract
A central challenge in software analysis concerns balancing different competing tradeoffs. To address this challenge, we propose an approach based on the Maximum Satisfiability (MaxSAT) problem, an optimization extension of the Boolean Satisfiability (SAT) problem. We demonstrate the approach on three diverse applications that advance the state-of-the-art in balancing tradeoffs in software analysis. Enabling these applications on real-world programs necessitates solving large MaxSAT instances comprising over \(10^{30}\) clauses in a sound and optimal manner. We propose a general framework that scales to such instances by iteratively expanding a subset of clauses while providing soundness and optimality guarantees. We also present new techniques to instantiate and optimize the framework.
Xujie Si, Xin Zhang, Radu Grigore, Mayur Naik
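For readers unfamiliar with the problem, the fragment below shows a toy MaxSAT instance solved with Z3's Optimize interface: hard clauses must hold, weighted soft clauses may be violated at a cost, and the solver maximises the total satisfied weight. The clauses are invented; the paper's framework solves vastly larger instances iteratively.

```python
# Toy MaxSAT instance via Z3's Optimize: hard clauses are mandatory,
# soft clauses carry weights and may be dropped at a cost.
from z3 import Optimize, Bools, Or, Not, sat

a, b, c = Bools("a b c")
opt = Optimize()
opt.add(Or(a, b))                  # hard clause
opt.add(Or(Not(a), c))             # hard clause
opt.add_soft(Not(b), weight=2)     # soft: prefer b false (penalty 2)
opt.add_soft(Not(c), weight=1)     # soft: prefer c false (penalty 1)
opt.add_soft(a, weight=1)          # soft: prefer a true  (penalty 1)
if opt.check() == sat:
    m = opt.model()
    print({v: m[v] for v in (a, b, c)})   # optimum: a, c true; b false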

Probabilistic Systems

Frontmatter
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Abstract
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer
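Reluplex handles ReLU constraints lazily inside the simplex procedure itself. The sketch below shows only the naive baseline it improves upon — eagerly case-splitting every ReLU into its active/inactive phase and solving one LP per phase with scipy — on a made-up two-neuron network and property.

```python
# Naive eager case-splitting baseline (NOT Reluplex's lazy algorithm):
# fix each ReLU phase, solve an LP per phase, report any violation.
# Network and property are made up: y = v . relu(w*x + b), x in [0, 1].
from itertools import product
from scipy.optimize import linprog

w, b, v = [2.0, -1.0], [0.0, 0.5], [1.0, 1.0]
BOUND = 2.0                       # property to check: y <= BOUND

violated = False
for phase in product([0, 1], repeat=2):      # 0 = inactive, 1 = active
    # with phases fixed, y is linear in x; maximize it (minimize -y)
    c = [-sum(v[i] * phase[i] * w[i] for i in range(2))]
    A, ub = [], []
    for i in range(2):
        s = -1.0 if phase[i] else 1.0        # active: w*x + b >= 0, else <= 0
        A.append([s * w[i]]); ub.append(-s * b[i])
    res = linprog(c, A_ub=A, b_ub=ub, bounds=[(0.0, 1.0)])
    if res.success:
        y = -res.fun + sum(v[i] * phase[i] * b[i] for i in range(2))
        if y > BOUND:
            violated = True
            print(f"counter-example: phase {phase}, x = {res.x[0]:.3f}, y = {y:.3f}")
if not violated:
    print(f"property holds: y <= {BOUND} on [0, 1]")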
Automated Recurrence Analysis for Almost-Linear Expected-Runtime Bounds
Abstract
We consider the problem of developing automated techniques for solving recurrence relations to aid the expected-runtime analysis of programs. The motivation is that several classical textbook algorithms have quite efficient expected-runtime complexity, whereas the corresponding worst-case bounds are either inefficient (e.g., Quick-Sort) or completely ineffective (e.g., Coupon-Collector). Since the main focus of expected-runtime analysis is to obtain efficient bounds, we consider bounds that are either logarithmic, linear, or almost-linear (\(\mathcal {O}(\log {n})\), \(\mathcal {O}(n)\), and \(\mathcal {O}(n\cdot \log {n})\), respectively, where n represents the input size). Our main contribution is an efficient and sound approach, a simple linear-time algorithm, for deriving such expected-runtime bounds from the recurrence relations induced by randomized algorithms. The experimental results show that our approach can efficiently derive asymptotically optimal expected-runtime bounds for recurrences of classical randomized algorithms, including Randomized-Search, Quick-Sort, Quick-Select, and Coupon-Collector, whose worst-case bounds are either inefficient (e.g., linear rather than logarithmic, or quadratic rather than linear or almost-linear expected-runtime complexity) or ineffective.
Krishnendu Chatterjee, Hongfei Fu, Aniket Murhekar
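As a sanity check of the kind of bound such analyses produce, the snippet below simulates Coupon-Collector and compares the observed expected runtime with the almost-linear closed form \(n \cdot H_n = \Theta(n \log n)\); the simulation is purely illustrative and not the paper's recurrence-solving algorithm.

```python
# Illustrative check (not the paper's algorithm): compare the simulated
# expected runtime of Coupon-Collector with the bound n * H_n.
import random

def coupon_collector(n):
    seen, draws = set(), 0
    while len(seen) < n:          # draw coupons until all n are seen
        seen.add(random.randrange(n))
        draws += 1
    return draws

n, trials = 200, 500
avg = sum(coupon_collector(n) for _ in range(trials)) / trials
harmonic = sum(1.0 / k for k in range(1, n + 1))
print(f"simulated E[T] ~ {avg:.0f}, analytic n*H_n = {n * harmonic:.0f}")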
Markov Automata with Multiple Objectives
Abstract
Markov automata combine non-determinism, probabilistic branching, and exponentially distributed delays. This compositional variant of continuous-time Markov decision processes is used in reliability engineering, performance evaluation and stochastic scheduling. Their verification so far focused on single objectives such as (timed) reachability, and expected costs. In practice, often the objectives are mutually dependent and the aim is to reveal trade-offs. We present algorithms to analyze several objectives simultaneously and approximate Pareto curves. This includes, e.g., several (timed) reachability objectives, or various expected cost objectives. We also consider combinations thereof, such as on-time-within-budget objectives—which policies guarantee reaching a goal state within a deadline with at least probability p while keeping the allowed average costs below a threshold? We adopt existing approaches for classical Markov decision processes. The main challenge is to treat policies exploiting state residence times, even for untimed objectives. Experimental results show the feasibility and scalability of our approach.
Tim Quatmann, Sebastian Junges, Joost-Pieter Katoen
Ensuring the Reliability of Your Model Checker: Interval Iteration for Markov Decision Processes
Abstract
Probabilistic model checking provides formal guarantees on quantitative properties such as reliability, performance or risk, so the accuracy of the numerical results that it returns is critical. However, recent results have shown that implementations of value iteration, a widely used iterative numerical method for computing reachability probabilities, can return results that are incorrect by several orders of magnitude. To remedy this, interval iteration, which instead converges simultaneously from both above and below, has been proposed. In this paper, we present interval iteration techniques for computing expected accumulated weights (or costs), a considerably broader class of properties. This relies on an efficient, mainly graph-based method to determine lower and upper bounds for extremal expected accumulated weights. To offset the additional effort of dual convergence, we also propose topological interval iteration, which increases efficiency using a model decomposition into strongly connected components. Finally, we present a detailed experimental evaluation, which highlights inaccuracies in standard benchmarks, rather than just artificial examples, and illustrates the feasibility of our techniques.
Christel Baier, Joachim Klein, Linda Leuschner, David Parker, Sascha Wunderlich
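The core of interval iteration fits in a few lines. The hedged sketch below runs Bellman updates on a made-up three-state MDP from the lower bound 0 and the upper bound 1 simultaneously, stopping once the gap falls below \(\varepsilon\); in general the upper iteration is only sound after graph preprocessing (collapsing end components), which this toy model does not need.

```python
# Hedged sketch of interval iteration for max reachability probability.
# Toy MDP (made up); "goal" and "fail" are absorbing. In general, end
# components must be collapsed first; this model has none.
MDP = {
    "s0": [[(0.5, "s1"), (0.5, "fail")], [(1.0, "s1")]],
    "s1": [[(0.6, "goal"), (0.2, "fail"), (0.2, "s0")]],
}

def bellman(v):
    new = {s: max(sum(p * v[t] for p, t in act) for act in acts)
           for s, acts in MDP.items()}
    new.update(goal=1.0, fail=0.0)
    return new

lo = {"s0": 0.0, "s1": 0.0, "goal": 1.0, "fail": 0.0}   # converges from below
hi = {"s0": 1.0, "s1": 1.0, "goal": 1.0, "fail": 0.0}   # ... and from above
eps, iters = 1e-6, 0
while max(hi[s] - lo[s] for s in MDP) > eps:            # sound stopping test
    lo, hi = bellman(lo), bellman(hi)
    iters += 1
print(iters, {s: round((lo[s] + hi[s]) / 2, 6) for s in MDP})  # ~0.75 each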
Repairing Decision-Making Programs Under Uncertainty
Abstract
The world is uncertain. Programs can be wrong. We address the problem of repairing a program under uncertainty, where program inputs are drawn from a probability distribution. The goal of the repair is to construct a new program that satisfies a probabilistic Boolean expression. Our work focuses on loop-free decision-making programs, e.g., classifiers, that return a Boolean- or finite-valued result. Specifically, we propose distribution-guided inductive synthesis, a novel program repair technique that iteratively (i) samples a finite set of inputs from a probability distribution defining the precondition, (ii) synthesizes a minimal repair to the program over the sampled inputs using an SMT-based encoding, and (iii) verifies that the resulting program is correct and is semantically close to the original program. We formalize our algorithm and prove its correctness by rooting it in computational learning theory. For evaluation, we focus on repairing machine learning classifiers with the goal of making them unbiased (fair). Our implementation and evaluation demonstrate our approach’s ability to repair a range of programs.
Aws Albarghouthi, Loris D’Antoni, Samuel Drews
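The shape of the loop can be illustrated in a few lines. Below is a hedged toy version of the sample–synthesize–verify cycle: inputs are drawn from an assumed Gaussian precondition, Z3's Optimize finds the smallest change to a decision threshold that makes the sampled acceptance rate meet a probabilistic postcondition, and the result is re-checked on fresh samples. The program, distribution, and postcondition are all invented for illustration.

```python
# Toy distribution-guided repair loop (illustrative, not the paper's
# implementation): program "accept iff input >= c" with c = 0.7;
# postcondition: acceptance probability >= 0.5 under the precondition.
import random
from z3 import Optimize, Real, If, Sum, sat

random.seed(0)
c_orig = 0.7

sample = lambda: random.gauss(0.5, 0.2)          # precondition distribution
xs = [sample() for _ in range(200)]              # (i) sample inputs

c, d = Real("c"), Real("d")
accepted = Sum([If(x >= c, 1, 0) for x in xs])
opt = Optimize()
opt.add(accepted >= 100)                         # (ii) postcondition on samples
opt.add(d >= c - c_orig, d >= c_orig - c)        # d = |c - c_orig|
opt.minimize(d)                                  # minimal (closest) repair
assert opt.check() == sat
c_new = float(opt.model()[c].as_fraction())

fresh = [sample() for _ in range(200)]           # (iii) verify on fresh data
rate = sum(x >= c_new for x in fresh) / len(fresh)
print(f"repaired c = {c_new:.3f}, fresh acceptance rate = {rate:.2f}")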
Value Iteration for Long-Run Average Reward in Markov Decision Processes
Abstract
Markov decision processes (MDPs) are standard models for probabilistic systems with non-deterministic behaviours. Long-run average rewards provide a mathematically elegant formalism for expressing long term performance. Value iteration (VI) is one of the simplest and most efficient algorithmic approaches to MDPs with other properties, such as reachability objectives. Unfortunately, a naive extension of VI does not work for MDPs with long-run average rewards, as there is no known stopping criterion. In this work our contributions are threefold. (1) We refute a conjecture related to stopping criteria for MDPs with long-run average rewards. (2) We present two practical algorithms for MDPs with long-run average rewards based on VI. First, we show that a combination of applying VI locally for each maximal end-component (MEC) and VI for reachability objectives can provide approximation guarantees. Second, extending the above approach with a simulation-guided on-demand variant of VI, we present an anytime algorithm that is able to deal with very large models. (3) Finally, we present experimental results showing that our methods significantly outperform the standard approaches on several benchmarks.
Pranav Ashok, Krishnendu Chatterjee, Przemysław Daca, Jan Křetínský, Tobias Meggendorfer

Data Driven Techniques

Frontmatter
STLInspector: STL Validation with Guarantees
Abstract
STLInspector is a tool for systematic validation of Signal Temporal Logic (STL) specifications against informal textual requirements. Its goal is to identify typical faults that occur in the process of formalizing requirements by mutating a candidate specification. STLInspector computes a series of representative signals that enables a requirements engineer to validate a candidate specification against all its mutated variants, thus achieving full mutation coverage. By visual inspection of the signals via a web-based GUI, an engineer can obtain high confidence in the correctness of the formalization – even if she is not familiar with STL. STLInspector makes the assessment of formal specifications accessible to a wide range of developers in industry, and hence helps promote the use of formal specifications and computer-aided verification in industrial practice. We apply the tool to several collections of STL formulas and show its effectiveness.
Hendrik Roehm, Thomas Heinz, Eva Charlotte Mayer
Learning a Static Analyzer from Data
Abstract
To be practically useful, modern static analyzers must precisely model the effects of both the statements of the programming language and the frameworks used by the program under analysis. While important, manually addressing these challenges is difficult for at least two reasons: (i) the effects on the overall analysis can be non-trivial, and (ii) as the size and complexity of modern libraries increase, so does the number of cases the analysis must handle.
In this paper we present a new, automated approach for creating static analyzers: instead of manually providing the various inference rules of the analyzer, the key idea is to learn these rules from a dataset of programs. Our method consists of two ingredients: (i) a synthesis algorithm capable of learning a candidate analyzer from a given dataset, and (ii) a counter-example guided learning procedure which generates new programs beyond those in the initial dataset, critical for discovering corner cases and ensuring the learned analysis generalizes to unseen programs.
We implemented and instantiated our approach to the task of learning JavaScript static analysis rules for a subset of points-to analysis and for allocation sites analysis. These are challenging yet important problems that have received significant research attention. We show that our approach is effective: our system automatically discovered practical and useful inference rules for many cases that are tricky to manually identify and are missed by state-of-the-art, hand tuned analyzers.
Pavol Bielik, Veselin Raychev, Martin Vechev
Synthesis with Abstract Examples
Abstract
Interactive program synthesizers enable a user to communicate his/her intent via input-output examples. Unfortunately, such synthesizers only guarantee that the synthesized program is correct on the provided examples. A user who wishes to guarantee correctness for all possible inputs has to manually inspect the synthesized program, an error-prone and challenging task.
We present a novel synthesis framework that communicates only through (abstract) examples and guarantees that the synthesized program is correct on all inputs. The main idea is to use abstract examples—a new form of examples that represent a potentially unbounded set of concrete examples. An abstract example captures how part of the input space is mapped to corresponding outputs by the synthesized program. Our framework uses a generalization algorithm to compute abstract examples which are then presented to the user. The user can accept an abstract example, or provide a counterexample in which case the synthesizer will explore a different program. When the user accepts a set of abstract examples that covers the entire input space, the synthesis process is completed.
We have implemented our approach and we experimentally show that our synthesizer communicates with the user effectively by presenting on average 3 abstract examples until the user rejects false candidate programs. Further, we show that a synthesizer that prunes the program space based on the abstract examples reduces the overall number of required concrete examples in up to 96% of the cases.
Dana Drachsler-Cohen, Sharon Shoham, Eran Yahav
Data-Driven Synthesis of Full Probabilistic Programs
Abstract
Probabilistic programming languages (PPLs) provide users a clean syntax for concisely representing probabilistic processes and easy access to sophisticated built-in inference algorithms. Unfortunately, writing a PPL program by hand can be difficult for non-experts, requiring extensive knowledge of statistics and deep insights into the data. To make the modeling process easier, we have created a tool that synthesizes PPL programs from relational datasets. Our synthesizer leverages the input data to generate a program sketch, then applies simulated annealing to complete the sketch. We introduce a data-guided approach to the program mutation stage of simulated annealing; this innovation allows our tool to scale to synthesizing complete probabilistic programs from scratch. We find that our synthesizer produces accurate programs from 10,000-row datasets in 21 seconds on average.
Sarah Chasins, Phitchaya Mangpo Phothilimthana
Logical Clustering and Learning for Time-Series Data
Abstract
In order to effectively analyze and build cyber-physical systems (CPS), designers today have to combat the data deluge problem, i.e., the burden of processing intractably large amounts of data produced by complex models and experiments. In this work, we utilize monotonic parametric signal temporal logic (PSTL) to design features for unsupervised classification of time-series data. This enables using off-the-shelf machine learning tools to automatically cluster similar traces with respect to a given PSTL formula. We demonstrate how this technique produces interpretable formulas that are amenable to analysis and understanding using a few representative examples. We illustrate this with case studies related to automotive engine testing, highway traffic analysis, and auto-grading massively open online courses.
Marcell Vazquez-Chanlatte, Jyotirmoy V. Deshmukh, Xiaoqing Jin, Sanjit A. Seshia
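The key trick is that for a monotonic PSTL template the tightest parameter valuation of each trace is a well-defined point, usable directly as a feature vector. The hedged sketch below does this for two invented templates, \(G(x \le c)\) and \(G(|\Delta x| \le d)\), whose tightest valuations are simply the peak value and the peak slope, and hands the features to off-the-shelf k-means.

```python
# Illustrative sketch: tightest PSTL parameter valuations as features
# for unsupervised clustering of traces. Traces and templates are made up.
import numpy as np
from sklearn.cluster import KMeans

t = np.linspace(0, 10, 100)
traces = [a * np.sin(t) + b
          for a, b in [(1.0, 0), (1.1, 0), (3.0, 2), (2.9, 2.1)]]

def tightest(x):
    c = x.max()                     # smallest c with G(x <= c)
    d = np.abs(np.diff(x)).max()    # smallest d with G(|dx| <= d)
    return [c, d]

X = np.array([tightest(x) for x in traces])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)    # small-amplitude traces vs. large-amplitude traces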

Runtime Verification

Frontmatter
Montre: A Tool for Monitoring Timed Regular Expressions
Abstract
We present Montre, a monitoring tool to search for patterns specified by timed regular expressions over real-time behaviors. We use timed regular expressions as a compact, natural, and highly expressive pattern specification language for monitoring applications involving quantitative timing constraints. Our tool incorporates online and offline timed pattern matching algorithms, so it is capable of finding all occurrences of a given pattern over both logged and streaming behaviors. Furthermore, Montre is designed to work with other tools via standard interfaces to perform more complex and versatile tasks for analyzing and reasoning about cyber-physical systems. We believe that Montre, as the first tool of its kind, will enable new lines of inquiry and new techniques in these fields.
Dogan Ulus
Runtime Monitoring with Recovery of the SENT Communication Protocol
Abstract
We show how the requirements of the SENT communication protocol between a magnetic sensor and an electronic control unit (ECU) can be monitored in real time, with a monitor capable of processing 70 million samples per second. We elaborate on a complete flow from formalizing electrical and timing requirements using Signal Temporal Logic (STL) and Timed Regular Expressions (TRE), to implementing runtime monitors in FPGA hardware and evaluating the results in the lab. For a class of asynchronous serial protocols, we define a procedure to obtain monitors that are capable of recovering after violations. We elaborate on two different approaches to monitoring the requirements of interest: (i) temporal testers with SystemC, STL, and High-Level Synthesis; (ii) an automata-based approach with TRE in HDL. We also show how the results of the monitoring can be used for error logging to provide users with extensive debugging information. Our approach makes it possible to monitor conformance to the requirements specification in real time during long-term tests.
Konstantin Selyunin, Stefan Jaksic, Thang Nguyen, Christian Reidl, Udo Hafner, Ezio Bartocci, Dejan Nickovic, Radu Grosu
Runtime Verification of Temporal Properties over Out-of-Order Data Streams
Abstract
We present a monitoring approach for verifying systems at runtime. Our approach targets systems whose components communicate with the monitors over unreliable channels, where messages can be delayed or lost. In contrast to prior works, whose property specification languages are limited to propositional temporal logics, our approach handles an extension of the real-time logic MTL with freeze quantifiers for reasoning about data values. We present its underlying theory based on a new three-valued semantics that is well suited to soundly and completely reason online about event streams in the presence of message delay or loss. We also evaluate our approach experimentally. Our prototype implementation processes hundreds of events per second in settings where messages are received out of order.
David Basin, Felix Klaedtke, Eugen Zălinescu
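The flavour of the three-valued semantics can be seen in a tiny special case (invented here, and far simpler than MTL with freeze quantifiers): a verdict for "some event among the first n satisfies p" over a stream whose messages arrive out of order, so some positions may still be missing.

```python
# Toy three-valued verdict over an out-of-order stream: sound no matter
# how the missing positions are eventually filled in. Values are made up.
TRUE, FALSE, UNKNOWN = "true", "false", "unknown"

def eventually(received, n, p):
    """Verdict for: some event at positions 0..n-1 satisfies p.
    received maps positions to values of messages that have arrived."""
    if any(pos < n and p(v) for pos, v in received.items()):
        return TRUE                              # witness already seen
    if all(pos in received for pos in range(n)):
        return FALSE                             # window complete, no witness
    return UNKNOWN                               # a delayed message may still satisfy p

received = {0: 4, 2: 7}                          # message 1 is delayed or lost
print(eventually(received, 3, lambda v: v > 10)) # unknown
print(eventually(received, 3, lambda v: v > 5))  # true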

Cyber-Physical Systems

Frontmatter
Lagrangian Reachability
Abstract
We introduce LRT, a new Lagrangian-based ReachTube computation algorithm that conservatively approximates the set of reachable states of a nonlinear dynamical system. LRT makes use of the Cauchy-Green stretching factor (SF), which is derived from an over-approximation of the gradient of the solution-flows. The SF measures the discrepancy between two states propagated by the system solution from two initial states lying in a well-defined region, thereby allowing LRT to compute a reachtube with a ball-overestimate in a metric where the computed enclosure is as tight as possible. To evaluate its performance, we implemented a prototype of LRT in C++/Matlab, and ran it on a set of well-established benchmarks. Our results show that LRT compares very favorably with the CAPD and Flow* tools.
Jacek Cyranka, Md. Ariful Islam, Greg Byrne, Paul Jones, Scott A. Smolka, Radu Grosu
Simulation-Equivalent Reachability of Large Linear Systems with Inputs
Abstract
Control systems can be subject to outside inputs, environmental effects, disturbances, and sensor/actuator inaccuracy. To model such systems, linear differential equations with constrained inputs are often used, \(\dot{x}(t) = A x(t) + B u(t)\), where the input vector u(t) stays in some bound. Simulating these models is an important tool for detecting design issues. However, since there may be many possible initial states and many possible valid sequences of inputs, simulation-only analysis may also miss critical system errors. In this paper, we present a scalable verification method that computes the simulation-equivalent reachable set for a linear system with inputs. This set consists of all the states that can be reached by a fixed-step simulation for (i) any choice of start state in the initial set and (ii) any choice of piecewise constant inputs.
Building upon a recently-developed reachable set computation technique that uses a state-set representation called a generalized star, we extend the approach to incorporate the effects of inputs using linear programming. The approach is made scalable through two optimizations based on Minkowski sum decomposition and warm-start linear programming. We demonstrate scalability by analyzing a series of large benchmark systems, including a system with over 10,000 dimensions (about two orders of magnitude larger than what can be handled by existing tools). The method detects previously-unknown violations in benchmark models, finding complex counter-example traces which validate both its correctness and accuracy.
Stanley Bak, Parasara Sridhar Duggirala
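The superposition principle that makes the star representation work can be demonstrated directly: for a linear system, the fixed-step simulation of any initial state \(c + \sum_i \alpha_i v_i\) equals the same affine combination of the simulations of the centre and of each \(c + v_i\). The sketch below checks this on an invented two-dimensional system; the paper builds input effects on top via Minkowski sums and linear programming.

```python
# Hedged sketch of the superposition trick behind generalized stars
# (autonomous part only; inputs are handled separately in the paper).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -0.1]])    # made-up damped oscillator
M = expm(A * 0.1)                            # exact one-step (dt = 0.1) matrix

def simulate(x0, steps=50):
    xs, x = [x0], x0
    for _ in range(steps):
        x = M @ x
        xs.append(x)
    return np.array(xs)

center = np.array([1.0, 0.0])
basis = [np.array([0.1, 0.0]), np.array([0.0, 0.1])]
sims = [simulate(center)] + [simulate(center + v) for v in basis]

alpha = [0.5, -1.0]                          # any point of the initial star
direct = simulate(center + sum(a * v for a, v in zip(alpha, basis)))
combined = sims[0] + sum(a * (s - sims[0]) for a, s in zip(alpha, sims[1:]))
print(np.allclose(direct, combined))         # True: one simulation per basis
                                             # vector covers the whole set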
MightyL: A Compositional Translation from MITL to Timed Automata
Abstract
Metric Interval Temporal Logic (MITL) was first proposed in the early 1990s as a specification formalism for real-time systems. Apart from its appealing intuitive syntax, there is also theoretical evidence that makes MITL a prime real-time counterpart of Linear Temporal Logic (LTL). Unfortunately, tool support for MITL verification is still lacking to this day. In this paper, we propose a new construction from MITL to timed automata via very-weak one-clock alternating timed automata. Our construction subsumes the well-known construction from LTL to Büchi automata by Gastin and Oddoux and yet has the additional benefits of being compositional and integrating easily with existing tools. We implement the construction in our new tool MightyL and report on experiments using Uppaal and LTSmin as back-ends.
Thomas Brihaye, Gilles Geeraerts, Hsi-Ming Ho, Benjamin Monmege
DryVR: Data-Driven Verification and Compositional Reasoning for Automotive Systems
Abstract
We present the DryVR framework for verifying hybrid control systems that are described by a combination of a black-box simulator for trajectories and a white-box transition graph specifying mode switches. The framework includes (a) a probabilistic algorithm for learning sensitivity of the continuous trajectories from simulation data, (b) a bounded reachability analysis algorithm that uses the learned sensitivity, and (c) reasoning techniques based on simulation relations and sequential composition, that enable verification of complex systems under long switching sequences, from the reachability analysis of a simpler system under shorter sequences. We demonstrate the utility of the framework by verifying a suite of automotive benchmarks that include powertrain control, automatic transmission, and several autonomous and ADAS features like automatic emergency braking, lane-merge, and auto-passing controllers.
Chuchu Fan, Bolun Qi, Sayan Mitra, Mahesh Viswanathan
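The sensitivity-learning ingredient can be sketched in isolation: simulate pairs of trajectories from nearby initial states of the black box and fit an exponential envelope \(|\delta(t)| \le K e^{\gamma t} |\delta(0)|\) to the observed separations. The simulator below is an invented stand-in for the black box, and the fit omits the statistical guarantees of the paper's probabilistic algorithm.

```python
# Hedged sketch: learn a discrepancy (sensitivity) envelope from
# simulation pairs. The "black box" here is a made-up damped oscillator.
import numpy as np

def simulate(x0, dt=0.05, steps=100):        # stand-in black-box simulator
    A = np.array([[0.0, 1.0], [-2.0, -0.3]])
    xs, x = [np.array(x0, float)], np.array(x0, float)
    for _ in range(steps):
        x = x + dt * (A @ x)                 # forward Euler inside the box
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(1)
t = np.arange(101) * 0.05
ratios = []
for _ in range(20):                          # pairs of nearby trajectories
    c = rng.uniform(-1, 1, 2)
    d0 = rng.normal(scale=0.01, size=2)
    sep = np.linalg.norm(simulate(c + d0) - simulate(c), axis=1)
    ratios.append(sep / np.linalg.norm(d0))

env = np.max(ratios, axis=0)                 # worst observed growth ratio
g, logK = np.polyfit(t, np.log(env), 1)      # fit log env ~ g*t + log K
print(f"|d(t)| <= {np.exp(logK):.2f} * exp({g:.2f} * t) * |d(0)|")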
Automated Formal Synthesis of Digital Controllers for State-Space Physical Plants
Abstract
We present a sound and automated approach to synthesize safe digital feedback controllers for physical plants represented as linear, time-invariant models. Models are given as dynamical equations with inputs, evolving over a continuous state space and accounting for errors due to the digitization of signals by the controller. Our counterexample guided inductive synthesis (CEGIS) approach has two phases: We synthesize a static feedback controller that stabilizes the system but that may not be safe for all initial conditions. Safety is then verified either via BMC or abstract acceleration; if the verification step fails, a counterexample is provided to the synthesis engine and the process iterates until a safe controller is obtained. We demonstrate the practical value of this approach by automatically synthesizing safe controllers for intricate physical plant models from the digital control literature.
Alessandro Abate, Iury Bessa, Dario Cattaruzza, Lucas Cordeiro, Cristina David, Pascal Kesseli, Daniel Kroening, Elizabeth Polgreen
Classification and Coverage-Based Falsification for Embedded Control Systems
Abstract
Many industrial cyber-physical system (CPS) designs are too complex to formally verify system-level properties. A practical approach for testing and debugging these system designs is falsification, wherein the user provides a temporal logic specification of correct system behaviors, and some technique for selecting test cases is used to identify behaviors that demonstrate that the specification does not hold for the system. While coverage metrics are often used to measure the exhaustiveness of this kind of testing approach for software systems, existing falsification approaches for CPS designs do not consider coverage for the signal variables. We present a new coverage measure for continuous signals and a new falsification technique that leverages the measure to efficiently identify falsifying traces. This falsification algorithm combines global and local search methods and uses a classification technique based on support vector machines to identify regions of the search space on which to focus effort. We use an industrial example from an automotive fuel cell application and other benchmark models to compare the new approach against existing falsification tools.
Arvind Adimoolam, Thao Dang, Alexandre Donzé, James Kapinski, Xiaoqing Jin
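The classification step can be illustrated in isolation: label samples by whether a (made-up) robustness function looks promising, train an SVM on the labels, and restrict further sampling to the region the classifier marks as promising. This is a hedged sketch of the idea, not the paper's combined global/local algorithm.

```python
# Illustrative sketch of classification-guided falsification: an SVM
# learns where robustness is low and focuses the search there.
# The robustness function and parameter ranges are made up.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def robustness(p):                    # negative value = property falsified
    return (p[0] - 0.7) ** 2 + (p[1] - 0.2) ** 2 - 0.05

X = rng.uniform(0, 1, size=(200, 2))              # global random sampling
y = np.array([robustness(p) < 0.1 for p in X])    # "promising" labels
clf = SVC(kernel="rbf").fit(X, y)

cand = rng.uniform(0, 1, size=(2000, 2))
focus = cand[clf.predict(cand)]                   # keep promising candidates
best = min(focus, key=robustness)
print(best, robustness(best))                     # negative => falsifying input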

Concurrency

Frontmatter
GPUDrano: Detecting Uncoalesced Accesses in GPU Programs
Abstract
Graphics Processing Units (GPUs) have become widespread and popular over the past decade. Fully utilizing the parallel compute and memory resources that GPUs present remains a significant challenge, however. In this paper, we describe GPUDrano: a scalable static analysis that detects uncoalesced global memory accesses in CUDA programs. Uncoalesced global memory accesses arise when a GPU program accesses DRAM in an ill-structured way, increasing latency and energy consumption. We formalize the GPUDrano static analysis and compare it empirically against a dynamic analysis to demonstrate that false positives are rare for most programs. We implement GPUDrano in LLVM and show that it can run on GPU programs of over a thousand lines of code. GPUDrano finds 133 of the 143 uncoalesced static memory accesses in the popular Rodinia GPU benchmark suite, demonstrating the precision of our implementation. Fixing these bugs leads to real performance improvements of up to 25%.
Rajeev Alur, Joseph Devietti, Omar S. Navarro Leija, Nimit Singhania
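The property itself is easy to state operationally: an access is coalesced when consecutive threads in a warp touch consecutive addresses. The toy check below evaluates a hypothetical index expression at consecutive thread ids; it is purely illustrative — GPUDrano performs this reasoning statically over the program, not dynamically.

```python
# Toy dynamic check for coalescing: consecutive threads should access
# consecutive addresses. index_of_tid is a hypothetical index expression.
def is_coalesced(index_of_tid, warp_size=32):
    addrs = [index_of_tid(t) for t in range(warp_size)]
    return all(b - a == 1 for a, b in zip(addrs, addrs[1:]))

print(is_coalesced(lambda tid: tid + 5))    # True: unit stride
print(is_coalesced(lambda tid: 64 * tid))   # False: strided, uncoalesced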
Context-Sensitive Dynamic Partial Order Reduction
Abstract
Dynamic Partial Order Reduction (DPOR) is a powerful technique used in verification and testing to reduce the number of equivalent executions explored. Two executions are equivalent if they can be obtained from each other by swapping adjacent, non-conflicting (independent) execution steps. Existing DPOR algorithms rely on a notion of independence that is context-insensitive, i.e., the execution steps must be independent in all contexts. In practice, independence is often proved by just checking that no execution step writes to a shared variable. We present context-sensitive DPOR, an extension of DPOR that uses context-sensitive independence, where two steps might be independent only in the particular context explored. We show theoretically and experimentally how context-sensitive DPOR can achieve exponential gains.
Elvira Albert, Puri Arenas, María García de la Banda, Miguel Gómez-Zamalloa, Peter J. Stuckey
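The notion of context-sensitive independence is easy to demonstrate: two steps that conflict in general may still commute from the particular state at hand, in which case the second exploration order is redundant. The two writes below are invented examples, not taken from the paper.

```python
# Two steps that are dependent in general (both write x) but commute in
# the specific context y == 0, so context-sensitive DPOR can prune one
# of the two orders there. Illustrative example, not the paper's algorithm.
def run(ops, state):
    s = dict(state)
    for op in ops:
        op(s)
    return s

t1 = lambda s: s.update(x=s["x"] * 2)           # thread 1: x := x * 2
t2 = lambda s: s.update(x=s["x"] + s["y"])      # thread 2: x := x + y

s0 = {"x": 1, "y": 0}
print(run([t1, t2], s0) == run([t2, t1], s0))   # True: prune second order
s1 = {"x": 1, "y": 3}
print(run([t1, t2], s1) == run([t2, t1], s1))   # False: must explore both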
Starling: Lightweight Concurrency Verification with Views
Abstract
Modern program logics have made it feasible to verify the most complex concurrent algorithms. However, many such logics are complex, and most lack automated tool support. We propose Starling, a new lightweight logic and automated tool for concurrency verification. Starling takes a proof outline written in an abstracted Hoare-logic style, and converts it into proof terms that can be discharged by a sequential solver. Starling’s approach is generic in its structure, making it easy to target different solvers. In this paper we verify shared-variable algorithms using the Z3 SMT solver, and heap-based algorithms using the GRASShopper solver. We have applied our approach to a range of concurrent algorithms, including Rust’s atomic reference counter, the Linux ticketed lock, the CLH queue-lock, and a fine-grained list algorithm.
Matt Windsor, Mike Dodds, Ben Simner, Matthew J. Parkinson
Compositional Model Checking with Incremental Counter-Example Construction
Abstract
In compositional model checking, the approach is to reason about the correctness of a system by lifting results obtained in analyses of subsystems to the system-level. The main challenge, however, is that requirements, in the form of temporal logic formulae, are usually specified at the system-level, and it is not obvious how to relate these to subsystem-local behaviour. In this paper, we propose a new approach to checking regular safety properties, which we call Incremental Counter-Example Construction (ICC). Its main strong point is that it performs a series of model checking procedures, and that each one only explores a small part of the entire state space. This makes ICC an excellent approach in those cases where state space explosion is an issue. Moreover, it is frequently much faster than traditional explicit-state model checking, particularly when the model satisfies the verified property, and in most cases not significantly slower. We explain the technique, and report on experiments we have conducted using an implementation of ICC, comparing the results to those obtained with other approaches.
Anton Wijs, Thomas Neele
Pithya: A Parallel Tool for Parameter Synthesis of Piecewise Multi-affine Dynamical Systems
Abstract
We present a novel tool for parameter synthesis of piecewise multi-affine dynamical systems from specifications expressed in a hybrid branching-time temporal logic. The tool is based on the algorithm of parallel semi-symbolic coloured model checking that extends standard model checking methods to cope with parametrised Kripke structures. The tool implements state-of-the-art techniques developed in our previous research and is primarily intended to be used for the analysis of dynamical systems with uncertain parameters that frequently arise in computational systems biology. However, it can be employed for any dynamical system where the non-linear equations can be sufficiently well approximated by piecewise multi-affine equations.
Nikola Beneš, Luboš Brim, Martin Demko, Samuel Pastva, David Šafránek
Backmatter
Metadata
Title
Computer Aided Verification
Edited by
Rupak Majumdar
Viktor Kunčak
Copyright Year
2017
Electronic ISBN
978-3-319-63387-9
Print ISBN
978-3-319-63386-2
DOI
https://doi.org/10.1007/978-3-319-63387-9