
2014 | Book

Verification, Model Checking, and Abstract Interpretation

15th International Conference, VMCAI 2014, San Diego, CA, USA, January 19-21, 2014, Proceedings

Edited by: Kenneth L. McMillan, Xavier Rival

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 15th International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2014, held in San Diego, CA, USA, in January 2014. The 25 revised full papers presented were carefully reviewed and selected from 64 submissions. The papers cover a wide range of topics including program verification, model checking, abstract interpretation and abstract domains, program synthesis, static analysis, type systems, deductive methods, program certification, debugging techniques, program transformation, optimization, and hybrid and cyber-physical systems.

Table of Contents

Frontmatter
SAT-Based Synthesis Methods for Safety Specs
Abstract
Automatic synthesis of hardware components from declarative specifications is an ambitious endeavor in computer aided design. Existing synthesis algorithms are often implemented with Binary Decision Diagrams (BDDs), inheriting their scalability limitations. Instead of BDDs, we propose several new methods to synthesize finite-state systems from safety specifications using decision procedures for the satisfiability of quantified and unquantified Boolean formulas (SAT-, QBF- and EPR-solvers). The presented approaches are based on computational learning, templates, or reduction to first-order logic. We also present an efficient parallelization, and optimizations to utilize reachability information and incremental solving. Finally, we compare all methods in an extensive case study. Our new methods outperform BDDs and other existing work on some classes of benchmarks, and our parallelization achieves a super-linear speedup.
Roderick Bloem, Robert Könighofer, Martina Seidl
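As background for the safety-synthesis setting described above (a toy explicit-state sketch only; the paper's methods work symbolically with SAT, QBF, and EPR solvers, and the state space, transition function, and input alphabets below are invented for illustration):

```python
# Toy explicit-state safety-game solver: compute the greatest fixpoint of
# states from which the controller can force the play to stay safe forever.
def winning_region(states, safe, trans, ctrl_inputs, env_inputs):
    """States from which, for every environment input, some controller
    input leads back into the current candidate winning set."""
    win = set(safe)
    while True:
        nxt = {s for s in win
               if all(any(trans(s, e, c) in win for c in ctrl_inputs)
                      for e in env_inputs)}
        if nxt == win:        # fixpoint reached: win is the winning region
            return win
        win = nxt

# Tiny made-up example: states 0..3, safe = {0,1,2}; the environment picks
# e in {0,1}, the controller picks c in {0,1}; next state is (s+e+c) mod 4.
trans = lambda s, e, c: (s + e + c) % 4
W = winning_region(range(4), {0, 1, 2}, trans, [0, 1], [0, 1])
```

A symbolic implementation replaces the explicit set `win` with a Boolean formula and the fixpoint check with solver queries, which is where the SAT/QBF/EPR machinery of the paper comes in.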
Precise Analysis of Value-Dependent Synchronization in Priority Scheduled Programs
Abstract
Although priority scheduling in concurrent programs provides a clean way of synchronization, developers still additionally rely on hand-crafted schemes based on integer variables to protect critical sections. We identify a set of sufficient conditions for variables to serve this purpose. We provide efficient methods to verify these conditions, which enable us to construct an enhanced analysis of mutual exclusion in interrupt-driven concurrent programs. All our algorithms are built upon off-the-shelf inter-procedural analyses alone. We have implemented this approach for the analysis of automotive controllers, and demonstrate that it results in a major improvement in the precision of data race detection compared to purely priority-based techniques.
Martin D. Schwarz, Helmut Seidl, Vesal Vojdani, Kalmer Apinis
Relational Thread-Modular Static Value Analysis by Abstract Interpretation
Abstract
We study thread-modular static analysis by abstract interpretation to infer the values of variables in concurrent programs. We show how to go beyond the state of the art and increase the precision of an analysis by adding the ability to infer some relational and history-sensitive properties of thread interferences. The fundamental basis of this work is the formalization by abstract interpretation of a rely-guarantee concrete semantics which is thread-modular, constructive, and complete for safety properties. We then show that previous analyses based on non-relational interferences can be retrieved as coarse computable abstractions of this semantics; additionally, we present novel abstraction examples exploiting our ability to reason more precisely about interferences, including domains to infer relational lock invariants and the monotonicity of counters. Our method and domains have been implemented in the AstréeA static analyzer that checks for run-time errors in embedded concurrent C programs, where they enabled a significant reduction of the number of false alarms.
Antoine Miné
Timing Analysis of Parallel Software Using Abstract Execution
Abstract
A major trend in computer architecture is multi-core processors. To fully exploit this type of parallel processor chip, programs running on it will have to be parallel as well. This means that even hard real-time embedded systems will be parallel. Therefore, it is of utmost importance that methods to analyze the timing properties of parallel real-time systems are developed.
This paper presents an algorithm that is founded on abstract interpretation and derives safe approximations of the execution times of parallel programs. The algorithm is formulated and proven correct for a simple parallel language with parallel threads, shared memory and synchronization via locks.
Andreas Gustavsson, Jan Gustafsson, Björn Lisper
Doomsday Equilibria for Omega-Regular Games
Abstract
Two-player games on graphs provide the theoretical framework for many important problems such as reactive synthesis. While the traditional study of two-player zero-sum games has been extended to multi-player games with several notions of equilibria, they are decidable only for perfect-information games, whereas several applications require imperfect-information games.
In this paper we propose a new notion of equilibria, called doomsday equilibria: a strategy profile such that all players satisfy their own objective, and if any coalition of players deviates and violates even one player's objective, then the objective of every player is violated.
We present algorithms and complexity results for deciding the existence of doomsday equilibria for various classes of ω-regular objectives, both for imperfect-information games and for perfect-information games. We provide optimal complexity bounds for imperfect-information games, and in most cases for perfect-information games.
Krishnendu Chatterjee, Laurent Doyen, Emmanuel Filiot, Jean-François Raskin
Bisimulations and Logical Characterizations on Continuous-Time Markov Decision Processes
Abstract
In this paper we study strong and weak bisimulation equivalences for continuous-time Markov decision processes (CTMDPs) and the logical characterizations of these relations with respect to the continuous-time stochastic logic (CSL). For strong bisimulation, it is well known that it is strictly finer than the CSL equivalence. In this paper we propose strong and weak bisimulations for CTMDPs and show that for a subclass of CTMDPs, strong and weak bisimulations are both sound and complete with respect to the equivalences induced by CSL and the sub-logic of CSL without next operator respectively. We then consider a standard extension of CSL, and show that it and its sub-logic without X can be fully characterized by strong and weak bisimulations respectively over arbitrary CTMDPs.
Lei Song, Lijun Zhang, Jens Chr. Godskesen
Probabilistic Automata for Safety LTL Specifications
Abstract
Automata constructions for logical properties play an important role in the formal analysis of systems, both statically and dynamically. In this paper, we present constructions of finite-state probabilistic monitors (FPM) for safety properties expressed in LTL. FPMs are probabilistic automata on infinite words that have a special, absorbing reject state, and given a cut-point λ ∈ [0,1], accept all words whose probability of reaching the reject state is at most 1 − λ. We consider Safe-LTL, the collection of LTL formulas built using conjunction, disjunction, next, and release operators, and show that (a) for any formula ϕ, there is an FPM with cut-point 1 of exponential size that recognizes the models of ϕ, and (b) there is a family of Safe-LTL formulas, such that the smallest FPM with cut-point 0 for this family is of doubly exponential size. Next, we consider the fragment LTL(G) of Safe-LTL, wherein the always operator is used instead of the release operator, and show that for any formula ϕ ∈ LTL(G) (c) there is an FPM with cut-point 0 of exponential size for ϕ, and (d) there is a robust FPM of exponential size for ϕ, where a robust FPM is one in which the acceptance probability of any word is bounded away from the cut-point. We also show that these constructions are optimal.
Dileep Kini, Mahesh Viswanathan
Refuting Heap Reachability
Abstract
Precise heap reachability information is a prerequisite for many static verification clients. However, the typical scenario is that the available heap information, computed by say an up-front points-to analysis, is not precise enough for the client of interest. This imprecise heap information in turn leads to a deluge of false alarms for the tool user to triage. Our position is to approach the false alarm problem not just by improving the up-front analysis but by also employing after-the-fact, goal-directed refutation analyses that yield targeted precision improvements. We have investigated refutation analysis in the context of detecting statically a class of Android memory leaks. For this client, we have found the necessity for an overall analysis capable of path-sensitive reasoning interprocedurally and with strong updates—a level of precision difficult to achieve globally in an up-front manner. Instead, our approach uses a refutation analysis that mixes highly precise, goal-directed reasoning with facts derived from the up-front analysis to prove alarms false, thus enabling effective and sound filtering of the overall list of alarms.
Bor-Yuh Evan Chang
Cascade 2.0
Abstract
Cascade is a program static analysis tool developed at New York University. Cascade takes as input a program and a control file. The control file specifies one or more assertions to be checked together with restrictions on program behaviors. The tool generates verification conditions for the specified assertions and checks them using an SMT solver, which either produces a proof or gives a concrete trace showing how an assertion can fail. Version 2.0 supports the majority of standard C features except for floating point. It can be used to verify both memory safety and user-defined assertions. In this paper, we describe the Cascade system, including some of its distinguishing features such as its support for different memory models (trading off precision for scalability) and its ability to reason about linked data structures.
Wei Wang, Clark Barrett, Thomas Wies
A Logic-Based Framework for Verifying Consensus Algorithms
Abstract
Fault-tolerant distributed algorithms play an important role in ensuring the reliability of many software applications. In this paper we consider distributed algorithms whose computations are organized in rounds. To verify the correctness of such algorithms, we reason about (i) properties (such as invariants) of the state, (ii) the transitions controlled by the algorithm, and (iii) the communication graph. We introduce a logic that addresses these points, and contains set comprehensions with cardinality constraints, function symbols to describe the local states of each process, and a limited form of quantifier alternation to express the verification conditions. We show its use in automating the verification of consensus algorithms. In particular, we give a semi-decision procedure for the unsatisfiability problem of the logic and identify a decidable fragment. We successfully applied our framework to verify the correctness of a variety of consensus algorithms tolerant to both benign faults (message loss, process crashes) and value faults (message corruption).
Cezara Drăgoi, Thomas A. Henzinger, Helmut Veith, Josef Widder, Damien Zufferey
Verifying Array Programs by Transforming Verification Conditions
Abstract
We present a method for verifying properties of imperative programs manipulating integer arrays. We assume that we are given a program and a property to be verified. The interpreter (that is, the operational semantics) of the program is specified as a set of Horn clauses with constraints in the domain of integer arrays, also called constraint logic programs over integer arrays, denoted CLP(Array). Then, by specializing the interpreter with respect to the given program and property, we generate a set of verification conditions (expressed as a CLP(Array) program) whose satisfiability implies that the program verifies the given property. Our verification method is based on transformations that preserve the least model semantics of CLP(Array) programs, and hence the satisfiability of the verification conditions. In particular, we apply the usual rules for CLP transformation, such as unfolding, folding, and constraint replacement, tailored to the specific domain of integer arrays. We propose an automatic strategy that guides the application of those rules with the objective of deriving a new set of verification conditions which is either trivially satisfiable (because it contains no constrained facts) or is trivially unsatisfiable (because it contains the fact false). Our approach provides a very rich program verification framework where one can compose together several verification strategies, each of them being implemented by transformations of CLP(Array) programs.
Emanuele De Angelis, Fabio Fioravanti, Alberto Pettorossi, Maurizio Proietti
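As generic background for the Horn-clause view of verification conditions used above (a toy ground-atom sketch only, not the paper's CLP(Array) transformation rules; the clause names below are invented):

```python
# Naive least-model computation for ground Horn clauses.
# Each clause is (head, [body atoms]); facts have an empty body.
def least_model(clauses):
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in model and all(b in model for b in body):
                model.add(head)       # the body is satisfied: derive head
                changed = True
    return model

# In this encoding style, the verification conditions are unsatisfiable
# (i.e., the program is proved safe) iff `false` is not derivable.
clauses = [('init', []),
           ('reach', ['init']),
           ('false', ['reach', 'bad'])]   # 'bad' is never derived
model = least_model(clauses)
safe = 'false' not in model
```

The paper's transformations (unfolding, folding, constraint replacement) rewrite such clause sets, over integer-array constraints rather than ground atoms, until safety or unsafety becomes syntactically evident.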
Weakest Precondition Synthesis for Compiler Optimizations
Abstract
Compiler optimizations play an increasingly important role in code generation. This is especially true with the advent of resource-limited mobile devices. We rely on compiler optimizations to improve performance, reduce code size, and reduce power consumption of our programs.
Despite being a mature field, compiler optimizations are still designed and implemented by hand, and usually without providing any guarantee of correctness.
In addition to devising the code transformations, designers and implementers have to come up with an analysis that determines in which cases the optimization can be safely applied. In other words, the optimization designer has to specify a precondition that ensures that the optimization is semantics-preserving. However, devising preconditions for optimizations by hand is a non-trivial task. It is easy to specify a precondition that, although correct, is too restrictive, and therefore misses some optimization opportunities.
In this paper, we propose, to the best of our knowledge, the first algorithm for the automatic synthesis of preconditions for compiler optimizations. The synthesized preconditions are provably correct by construction, and they are guaranteed to be the weakest in the precondition language that we consider.
We implemented the proposed technique in a tool named PSyCO. We present examples of preconditions synthesized by PSyCO, as well as the results of running PSyCO on a set of optimizations.
Nuno P. Lopes, José Monteiro
Message-Passing Algorithms for the Verification of Distributed Protocols
Abstract
Message-passing algorithms (MPAs) are an algorithmic paradigm for the following generic problem: given a system consisting of several interacting components, compute a new version of each component representing its behaviour inside the system. MPAs avoid computing the full state space by propagating messages along the edges of the system interaction graph. We present an MPA for verifying local properties of distributed protocols with a tree communication structure. We report on an implementation, and validate it by means of two case studies, including an analysis of the PGM protocol.
Loïg Jezequel, Javier Esparza
Safety Problems Are NP-complete for Flat Integer Programs with Octagonal Loops
Abstract
This paper proves the NP-completeness of the reachability problem for the class of flat counter machines with difference bounds and, more generally, octagonal relations, labeling the transitions on the loops. The proof is based on the fact that the sequence of powers \(\{R^i\}_{i=1}^\infty\) of such relations can be encoded as a periodic sequence of matrices, and that both the prefix and the period of this sequence are \(2^{{O}(||R||_2)}\) in the size of the binary encoding \(||R||_2\) of a relation R. This result allows us to characterize the complexity of the reachability problem for one of the most studied classes of counter machines [6,10], and has a potential impact on other problems in program verification.
Marius Bozga, Radu Iosif, Filip Konečný
Parameterized Model Checking of Token-Passing Systems
Abstract
We revisit the parameterized model checking problem for token-passing systems and specifications in indexed CTL*\X. Emerson and Namjoshi (1995, 2003) have shown that parameterized model checking of indexed CTL*\X in uni-directional token rings can be reduced to checking rings up to some cutoff size. Clarke et al. (2004) have shown a similar result for general topologies and indexed LTL\X, provided processes cannot choose the directions for sending or receiving the token.
We unify and substantially extend these results by systematically exploring fragments of indexed CTL*\X with respect to general topologies. For each fragment we establish whether a cutoff exists, and for some concrete topologies, such as rings, cliques, and stars, we infer small cutoffs. Finally, we show that the problem becomes undecidable, and thus no cutoffs exist, if processes are allowed to choose the directions in which they send or from which they receive the token.
Benjamin Aminof, Swen Jacobs, Ayrat Khalimov, Sasha Rubin
Modularly Combining Numeric Abstract Domains with Points-to Analysis, and a Scalable Static Numeric Analyzer for Java
Abstract
This paper contributes a new abstract domain that combines static numeric analysis and points-to analysis. One particularity of this abstract domain lies in its high degree of modularity, in the sense that the domain is constructed by reusing its combined components as black boxes. This modularity dramatically eases the proof of its soundness and renders its algorithm intuitive. We have prototyped the abstract domain for analyzing real-world Java programs. Our experimental results show a tangible precision enhancement compared to what is possible by traditional static numeric analysis, at a cost that is comparable to the cost of running the numeric and pointer analyses separately.
Zhoulai Fu
Generic Combination of Heap and Value Analyses in Abstract Interpretation
Abstract
Abstract interpretation has been widely applied to approximate data structures and (usually numerical) value information. One needs to combine them to effectively apply static analysis to real software. Nevertheless, they have been studied mainly as orthogonal problems so far. In this context, we introduce a generic framework that, given a heap and a value analysis, combines them, and we formally prove its soundness. The heap analysis approximates concrete locations with heap identifiers, which can be materialized or merged. Meanwhile, the value analysis tracks information on both variables and heap identifiers, taking into account when heap identifiers are merged or materialized. We show how existing pointer and shape analyses, as well as numerical domains, can be plugged into our framework. As far as we know, this is the first sound generic automatic framework combining heap and value analyses that allows heap identifiers to be freely managed.
Pietro Ferrara
Modeling Parsimonious Putative Regulatory Networks: Complexity and Heuristic Approach
Abstract
A relevant problem in systems biology is the description of the regulatory interactions between genes. It is observed that pairs of genes have significant correlation through several experimental conditions. The question is to find causal relationships that can explain this experimental evidence.
A putative regulatory network can be represented by an oriented weighted graph, where vertices represent genes, arcs represent predicted regulatory interactions and the arc weights represent the p-value of the prediction. Given such graph, and experimental evidence of correlation between pairs of vertices, we propose an abstraction and a method to enumerate all parsimonious subgraphs that assign causality relationships compatible with the experimental evidence.
When the problem is modeled as the minimization of a global weight function, we show that the enumeration of scenarios is a hard problem. As a heuristic, we model the problem as a set of independent minimization problems, each solvable in polynomial time, which can be combined to explore a relevant subset of the solution space. We present a logic-programming formalization of the model implemented using Answer Set Programming.
We show that, when the graph follows patterns that can be found in real organisms, our heuristic finds solutions that are good approximations to the full model. We encoded these approaches using Answer Set Programming, applied them to a specific case in the organism E. coli, and compared the execution time of each approach.
Vicente Acuña, Andrés Aravena, Alejandro Maass, Anne Siegel
Practical Floating-Point Tests with Integer Code
Abstract
Testing integer software with symbolic execution is well-established, but floating point remains a specialty feature. Modern symbolic floating-point tactics include concretization, lexical analysis, floating-point solvers, and intricate theories, but mostly ignore the default integer-only capabilities. If a symbolic executor is already high-performance, then software emulation, common to integer-only machines, becomes a compelling choice for symbolic floating point.
We propose a software floating-point emulation extension for symbolic execution of binary programs. First, supporting a soft floating-point library requires little effort, so multiple models are cheap; our executor has five distinct open-source soft floating-point code bases. For integrity, test cases from symbolic execution of library code itself are hardware-validated; mismatches with hardware appear in every tested library, a just-in-time compiler, a machine decoder, and several floating-point solvers. In practice, the executor finds program faults involving floating point in hundreds of Linux binaries.
Anthony Romano
Monitoring Parametric Temporal Logic
Abstract
Runtime verification techniques allow us to monitor an execution and check whether it satisfies some given property. Efficiency in runtime verification is of critical importance, because the evaluation is performed while new events are monitored. We apply runtime verification to obtain quantitative information about the execution, based on linear-time temporal properties: the temporal specification is extended to include parameters that are instantiated according to a measure obtained at runtime. The measure is updated in order to maintain the best values of parameters, according to their either maximizing or minimizing behavior, and priority. We provide measuring algorithms for linear-time temporal logic with parameters (PLTL). Our key result is that achieving efficient runtime verification is dependent on the determinization of the measuring semantics of PLTL. For deterministic PLTL, where all disjunctions are guarded by atomic propositions, online measuring requires only linear space in the size of the specification and logarithmic space in the length of the trace. For unambiguous PLTL, where general disjunctions are allowed, but the measuring is deterministic in the truth values of the non-parametric subformulas, the required space is exponential in the size of the specification, but still logarithmic in the length of the trace. For full PLTL, we show that online measuring is inherently hard and instead provide an efficient offline algorithm.
Peter Faymonville, Bernd Finkbeiner, Doron Peled
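To make the idea of measuring a parameter at runtime concrete (a toy illustration only, not the paper's PLTL algorithms; the trace and property below are invented), consider measuring the best value of k for the parametric property "p holds within k steps" on a finite trace:

```python
# Measure the minimal k such that F_{<=k} p holds on a finite trace,
# i.e. the index of the first position where predicate p is true.
def measure_eventually(trace, p):
    for k, state in enumerate(trace):
        if p(state):
            return k          # best (minimizing) parameter value
    return None               # p never holds on this trace

# Made-up trace of abstract states; p asks for the state 'p'.
trace = ['q', 'q', 'p', 'q']
k = measure_eventually(trace, lambda s: s == 'p')
```

The paper's monitors maintain such best parameter values online for full PLTL, where disjunctions and nesting make the bookkeeping far more involved than this single-pass scan.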
Precisely Deciding Control State Reachability in Concurrent Traces with Limited Observability
Abstract
We propose a new algorithm for precisely deciding a control state reachability (CSR) problem in runtime verification of concurrent programs, where the trace provides only limited observability of the execution. Under the assumption of limited observability, we know only the type of each event (read, write, lock, unlock, etc.) and the associated shared object, but not the concrete values of these objects or the control/data dependency among these events. Our method is the first sound and complete method for deciding such CSR in traces that involve more than two threads, while handling both standard synchronization primitives and ad hoc synchronizations implemented via shared memory accesses. It relies on a new polygraph-based analysis, which is provably more accurate than existing methods based on lockset analysis, acquisition history, universal causality graph, and a recently proposed method based on the causally-precedes relation. We have implemented the method in an offline data-race detection tool and demonstrated its effectiveness on multithreaded C/C++ applications.
Chao Wang, Kevin Hoang
Modular Synthesis of Sketches Using Models
Abstract
One problem with the constraint-based approaches to synthesis that have become popular over the last few years is that they only scale to relatively small routines, on the order of a few dozen lines of code. This paper presents a mechanism for modular reasoning that allows us to break larger synthesis problems into small manageable pieces. The approach builds on previous work in the verification community of using high-level specifications and partially interpreted functions (we call them models) in place of more complex pieces of code in order to make the analysis modular.
The main contribution of this paper is to show how to combine these techniques with the counterexample-guided synthesis approaches used to efficiently solve synthesis problems. Specifically, we show two new algorithms: one to efficiently synthesize functions that use models, and another to synthesize functions while ensuring that the behavior of the resulting function will be in the set of behaviors allowed by the model. We have implemented our approach on top of the open-source Sketch synthesis system, and we demonstrate its effectiveness on several Sketch benchmark problems.
Rohit Singh, Rishabh Singh, Zhilei Xu, Rebecca Krosnick, Armando Solar-Lezama
Synthesis with Identifiers
Abstract
We consider the synthesis of reactive systems from specifications with identifiers. Identifiers are useful to parametrize the input and output of a reactive system, for example, to state which client requests a grant from an arbiter, or the type of object that a robot is expected to fetch.
Traditional reactive synthesis algorithms only handle a constant bounded range of such identifiers. However, in practice, we might not want to restrict the number of clients of an arbiter or the set of object types handled by a robot a priori. We first present a concise automata-based formalism for specifications with identifiers. The synthesis problem for such specifications is undecidable. We therefore give an algorithm that is always sound, and complete for unrealizable safety specifications. Our algorithm is based on computing a pattern-based abstraction of a synthesis game that captures the realizability problem for the specification. The abstraction does not restrict the possible solutions to finite-state ones and captures the obligations for the system in the synthesis game. We present an experimental evaluation based on a prototype implementation that shows the practical applicability of our algorithm.
Rüdiger Ehlers, Sanjit A. Seshia, Hadas Kress-Gazit
Synthesis for Polynomial Lasso Programs
Abstract
We present a method for the synthesis of polynomial lasso programs. These programs consist of a program stem, a set of transitions, and an exit condition, all in the form of algebraic assertions (conjunctions of polynomial equalities). Central to this approach is the discovery of non-linear (algebraic) loop invariants. We extend Sankaranarayanan, Sipma, and Manna’s template-based approach and prove a completeness criterion. We perform program synthesis by generating a constraint whose solution is a synthesized program together with a loop invariant that proves the program’s correctness. This constraint is non-linear and is passed to an SMT solver. Moreover, we can enforce the termination of the synthesized program with the support of test cases.
Jan Leike, Ashish Tiwari
Policy Iteration-Based Conditional Termination and Ranking Functions
Abstract
Termination analyzers generally synthesize ranking functions or relations, which represent checkable proofs of their results. In [23], we proposed an approach for conditional termination analysis based on abstract fixpoint computation by policy iteration. This method is not based on ranking functions and does not directly provide a ranking relation, which makes the comparison with existing approaches difficult. In this paper we study the relationships between our approach and ranking functions and relations, focusing on extensions of linear ranking functions. We show that it can work on programs admitting a specific kind of segmented ranking function, and that the results can be checked by the construction of a disjunctive ranking relation. Experimental results demonstrate the value of this approach.
Damien Massé
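As a reminder of what a linear ranking function certifies (an empirical toy check only, unrelated to the paper's policy-iteration method; the loop below is invented), f(x) = x proves termination of a countdown loop because it is bounded below and strictly decreases at every iteration:

```python
# Record the value of the candidate ranking function f(x) = x at each
# loop head of `while x > 0: x -= 1`, then check the two conditions.
def run_loop(x):
    values = []
    while x > 0:
        values.append(x)      # f(x) = x sampled at the loop head
        x -= 1
    return values

trace = run_loop(5)
decreasing = all(a > b for a, b in zip(trace, trace[1:]))  # strict decrease
bounded = all(v > 0 for v in trace)                        # bounded below
```

A real termination prover establishes these two conditions symbolically for all executions, not just one trace; the paper studies how policy-iteration results relate to such certificates.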
Widening for Control-Flow
Abstract
We present a parameterized widening operator that determines the control-flow sensitivity of an analysis, i.e., its flow-sensitivity, context-sensitivity, and path-sensitivity. By instantiating the operator’s parameter in different ways, the analysis can be tuned to arbitrary sensitivities without changing the abstract semantics of the analysis itself. Similarly, the analysis can be implemented so that its sensitivity can be tuned without changing the analysis implementation. Thus, the sensitivity is an independent concern, allowing the analysis designer to design and implement the analysis without worrying about its sensitivity and then easily experiment with different sensitivities after the fact. Additionally, we show that the space of control-flow sensitivities induced by this widening operator forms a lattice. The lattice meet and join operators are the product and sum of sensitivities, respectively. They can be used to automatically create new sensitivities from existing ones without manual effort. The sum operation in particular is a novel construction, which creates a new sensitivity less precise than either of its operands but containing elements of both.
Ben Hardekopf, Ben Wiedermann, Berkeley Churchill, Vineeth Kashyap
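For readers unfamiliar with widening, the classic interval widening operator (a standard textbook construction from Cousot and Cousot's abstract interpretation, not the control-flow widening this paper introduces) forces convergence by jumping to ±∞ on any unstable bound:

```python
# Classic interval widening: keep a bound if it is stable between the old
# and new intervals, otherwise extrapolate it to infinity so that any
# increasing chain of intervals stabilizes in finitely many steps.
INF = float('inf')

def widen(old, new):
    lo = old[0] if new[0] >= old[0] else -INF   # lower bound stable?
    hi = old[1] if new[1] <= old[1] else INF    # upper bound stable?
    return (lo, hi)

# A loop counter whose interval keeps growing: one widening step
# jumps the upper bound to infinity, and the result is a fixpoint.
step1 = widen((0, 1), (0, 2))
step2 = widen(step1, step1)
```

The paper generalizes the role such an operator plays: its parameterized widening governs not numeric bounds but how much control-flow history (flow, context, path) the analysis distinguishes.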
Backmatter
Metadata
Title
Verification, Model Checking, and Abstract Interpretation
Edited by
Kenneth L. McMillan
Xavier Rival
Copyright year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-54013-4
Print ISBN
978-3-642-54012-7
DOI
https://doi.org/10.1007/978-3-642-54013-4