
2015 | Book

Computer Aided Verification

27th International Conference, CAV 2015, San Francisco, CA, USA, July 18-24, 2015, Proceedings, Part II


About this book

The two-volume set LNCS 9206 and LNCS 9207 constitutes the refereed proceedings of the 27th International Conference on Computer Aided Verification, CAV 2015, held in San Francisco, CA, USA, in July 2015.

A total of 58 full and 11 short papers presented in the proceedings were carefully reviewed and selected from 252 submissions. The papers are organized in topical sections named: model checking and refinements; quantitative reasoning; software analysis; lightning talks; interpolation, IC3/PDR, and invariants; SMT techniques and applications; HW verification; synthesis; termination; and concurrency.

Table of Contents

Frontmatter

SMT Techniques and Applications

Frontmatter
Poling: SMT Aided Linearizability Proofs
Abstract
Proofs of linearizability of concurrent data structures generally rely on identifying linearization points to establish a simulation argument between the implementation and the specification. However, for many linearizable data structure operations, the linearization points may not correspond to their internal static code locations; for example, they might reside in the code of another concurrent operation. To overcome this limitation, we identify important program patterns that expose such instances, and describe a tool (Poling) that automatically verifies the linearizability of implementations that conform to these patterns.
He Zhu, Gustavo Petri, Suresh Jagannathan
Finding Bounded Path in Graph Using SMT for Automatic Clock Routing
Abstract
Automating the routing process is essential for the semiconductor industry to reduce time-to-market and increase productivity. This study sprang from the need to automate the following critical task in clock routing: given a set of nets, each net consisting of a driver and a receiver, connect each driver to its receiver, where the delay should be almost the same across the nets. We demonstrate that this problem can be reduced to bounded-path, that is, the NP-hard problem of finding a simple path, whose cost is bounded by a given range, connecting two given vertices in an undirected positively weighted graph. Furthermore, we show that bounded-path can be reduced to bit-vector reasoning and solved with a SAT-based bit-vector SMT solver. In order to render our solution scalable, we override the SAT solver’s decision strategy with a novel graph-aware strategy and augment conflict analysis with a graph-aware procedure. Our solution scales to graphs having millions of edges and vertices. It has been deployed at Intel for clock routing automation.
Amit Erez, Alexander Nadel
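The bounded-path problem the abstract reduces to can be stated concretely with a small brute-force sketch (our illustration only; the paper solves the problem at scale with a SAT-based bit-vector SMT solver and a graph-aware decision strategy, not by enumeration):

```python
def bounded_path(edges, s, t, lo, hi):
    """Find a simple s-t path in an undirected, positively weighted
    graph whose total weight lies in [lo, hi], by exhaustive DFS."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))

    def dfs(u, path, cost):
        if cost > hi:                 # weights are positive: safe to prune
            return None
        if u == t:
            return list(path) if lo <= cost <= hi else None
        for v, w in adj.get(u, []):
            if v not in path:         # keep the path simple
                path.append(v)
                found = dfs(v, path, cost + w)
                if found:
                    return found
                path.pop()
        return None

    return dfs(s, [s], 0)

# Toy net: two drivers-to-receiver routes with different delays.
edges = [("s", "a", 2), ("a", "t", 3), ("s", "t", 10),
         ("s", "b", 1), ("b", "t", 1)]
```

The exponential search above only states the problem; the NP-hardness mentioned in the abstract is exactly why the paper's bit-vector encoding and custom decision strategy are needed to reach millions of edges.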
Cutting the Mix
Abstract
While linear arithmetic has been studied in the context of SMT individually for reals and integers, mixed linear arithmetic allowing comparisons between integer and real variables has not received much attention. For linear integer arithmetic, the cuts from proofs algorithm has proven to have superior performance on many benchmarks. In this paper we extend this algorithm to the mixed case, where real and integer variables occur in the same linear constraint. Our algorithm allows for easy integration into existing SMT solvers. Experimental evaluation of our prototype implementation inside the SMT solver SMTInterpol shows that this algorithm is successful on benchmarks that are hard for all existing solvers.
Jürgen Christ, Jochen Hoenicke
The Inez Mathematical Programming Modulo Theories Framework
Abstract
Our Mathematical Programming Modulo Theories (MPMT) constraint solving framework extends Mathematical Programming technology with techniques from the field of Automated Reasoning, e.g., solvers for first-order theories. In previous work, we used MPMT to synthesize system architectures for Boeing’s Dreamliner and we studied the theoretical aspects of MPMT by means of the Branch and Cut Modulo T (BC(T)) transition system. BC(T) can be thought of as a blueprint for MPMT solvers. This paper provides a more practical and algorithmic view of BC(T). We elaborate on the design and features of Inez, our BC(T) constraint solver. Inez is an open-source, freely available superset of the OCaml programming language that uses the SCIP Branch and Cut framework to extend OCaml with MPMT capability. Inez allows users to write programs that arbitrarily interweave general computation with MPMT constraint solving.
Panagiotis Manolios, Jorge Pais, Vasilis Papavasileiou
Using Minimal Correction Sets to More Efficiently Compute Minimal Unsatisfiable Sets
Abstract
An unsatisfiable set is a set of formulas whose conjunction is unsatisfiable. Every unsatisfiable set can be corrected, i.e., made satisfiable, by removing a subset of its members. The subset whose removal yields satisfiability is called a correction subset. Given an unsatisfiable set F, there is a well-known hitting set duality between the unsatisfiable subsets of F and the correction subsets of F: every unsatisfiable subset hits (has a non-empty intersection with) every correction subset, and, dually, every correction subset hits every unsatisfiable subset. An important problem with many applications in practice is to find a minimal unsatisfiable subset (MUS) of F, i.e., an unsatisfiable subset all of whose proper subsets are satisfiable. A number of algorithms for this important problem have been proposed. In this paper we present new algorithms for finding a single MUS and for finding all MUSes. Our algorithms exploit in a new way the duality between correction subsets and unsatisfiable subsets. We show that our algorithms advance the state of the art, enabling more effective computation of MUSes.
Fahiem Bacchus, George Katsirelos
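The MUS notion itself can be made concrete with a brute-force toy (our illustration; the paper's algorithms additionally exploit the MUS/correction-subset duality to prune the search, which this sketch does not do):

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check; clauses use DIMACS-style integer
    literals, e.g. [1, -2] means (x1 or not x2)."""
    return any(
        all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for assign in product([False, True], repeat=n_vars))

def shrink_to_mus(clauses, n_vars):
    """Deletion-based MUS extraction: try dropping each clause in turn;
    a clause is kept only if the remainder becomes satisfiable, i.e.
    the clause is critical for unsatisfiability."""
    mus = list(clauses)
    i = 0
    while i < len(mus):
        rest = mus[:i] + mus[i + 1:]
        if not satisfiable(rest, n_vars):
            mus = rest         # clause i was not needed
        else:
            i += 1             # clause i is critical
    return mus
```

For example, the set {x1, not x1, x2, not x2, x1 or x2} shrinks to the MUS {x2, not x2}: every proper subset of the result is satisfiable.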
Deciding Local Theory Extensions via E-matching
Abstract
Satisfiability Modulo Theories (SMT) solvers incorporate decision procedures for theories of data types that commonly occur in software. This makes them important tools for automating verification problems. A limitation frequently encountered is that verification problems are often not fully expressible in the theories supported natively by the solvers. Many solvers allow the specification of application-specific theories as quantified axioms, but their handling is incomplete outside of narrow special cases.
In this work, we show how SMT solvers can be used to obtain complete decision procedures for local theory extensions, an important class of theories that are decidable using finite instantiation of axioms. We present an algorithm that uses E-matching to generate instances incrementally during the search, significantly reducing the number of generated instances compared to eager instantiation strategies. We have used two SMT solvers to implement this algorithm and conducted an extensive experimental evaluation on benchmarks derived from verification conditions for heap-manipulating programs. We believe that our results are of interest to both users of SMT solvers and their developers.
Kshitij Bansal, Andrew Reynolds, Tim King, Clark Barrett, Thomas Wies

HW Verification

Frontmatter
Modular Deductive Verification of Multiprocessor Hardware Designs
Abstract
We present a new framework for modular verification of hardware designs in the style of the Bluespec language. That is, we formalize the idea of components in a hardware design, with well-defined input and output channels; and we show how to specify and verify components individually, with machine-checked proofs in the Coq proof assistant. As a demonstration, we verify a fairly realistic implementation of a multicore shared-memory system with two types of components: memory system and processor. Both components include nontrivial optimizations, with the memory system employing an arbitrary hierarchy of cache nodes that communicate with each other concurrently, and with the processor doing speculative execution of many concurrent read operations. Nonetheless, we prove that the combined system implements sequential consistency. To our knowledge, our memory-system proof is the first machine verification of a cache-coherence protocol parameterized over an arbitrary cache hierarchy, and our full-system proof is the first machine verification of sequential consistency for a multicore hardware design that includes caches and speculative processors.
Muralidaran Vijayaraghavan, Adam Chlipala, Arvind, Nirav Dave
Word-Level Symbolic Trajectory Evaluation
Abstract
Symbolic trajectory evaluation (STE) is a model checking technique that has been successfully used to verify industrial designs. Existing implementations of STE, however, reason at the level of bits, allowing signals to take values in {0, 1, X}. This limits the amount of abstraction that can be achieved, and presents inherent limitations to scaling. The main contribution of this paper is to show how much more abstract lattices can be derived automatically from RTL descriptions, and how a model checker for the general theory of STE instantiated with such abstract lattices can be implemented in practice. This gives us the first practical word-level STE engine, called STEWord. Experiments on a set of designs similar to those used in industry show that STEWord scales better than word-level BMC and also bit-level STE.
Supratik Chakraborty, Zurab Khasidashvili, Carl-Johan H. Seger, Rajkumar Gajavelly, Tanmay Haldankar, Dinesh Chhatani, Rakesh Mistry
Verifying Linearizability of Intel® Software Guard Extensions
Abstract
Intel® Software Guard Extensions (SGX) is a collection of CPU instructions that enable an application to create secure containers that are inaccessible to untrusted entities, including the operating system and other low-level software. Establishing that the design of these instructions provides security is critical to the success of the feature. However, SGX introduces complex concurrent interactions between the instructions and the shared hardware state used to enforce security, rendering traditional approaches to validation insufficient. In this paper, we introduce Accordion, a domain specific language and compiler for automatically verifying linearizability via model checking. The compiler determines an appropriate linearization point for each instruction, computes the required linearizability assertions, and supports experimentation with a variety of model configurations across multiple model checking tools. We show that this approach to verifying linearizability works well for validating SGX and that the compiler provides improved usability over encoding the problem in a model checker directly.
Rebekah Leslie-Hurd, Dror Caspi, Matthew Fernandez

Synthesis

Frontmatter
Synthesis Through Unification
Abstract
Given a specification and a set of candidate programs (program space), the program synthesis problem is to find a candidate program that satisfies the specification. We present the synthesis through unification (STUN) approach, which is an extension of the counter-example guided inductive synthesis (CEGIS) approach. In CEGIS, the synthesizer maintains a subset S of inputs and a candidate program Prog that is correct for S. The synthesizer repeatedly checks if there exists a counterexample input c such that the execution of Prog is incorrect on c. If so, the synthesizer enlarges S to include c, and picks a program from the program space that is correct for the new set S.
The STUN approach extends CEGIS with the idea that given a program Prog that is correct for a subset of inputs, the synthesizer can try to find a program Prog' that is correct for the rest of the inputs. If Prog and Prog' can be unified into a program in the program space, then a solution has been found. We present a generic synthesis procedure based on the STUN approach and specialize it for three different domains by providing the appropriate unification operators. We implemented these specializations in prototype tools, and we show that our tools often perform significantly better on standard benchmarks than a tool based on a pure CEGIS approach.
Rajeev Alur, Pavol Černý, Arjun Radhakrishna
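The CEGIS loop that STUN extends can be sketched over a finite program space and input domain (a minimal toy of ours, not the paper's procedure; real CEGIS uses a solver rather than enumeration for both steps):

```python
from itertools import product

def cegis(candidates, spec, domain):
    """Toy CEGIS: keep a growing example set S, pick a candidate
    correct on S, then search the whole domain for a counterexample."""
    examples = []
    while True:
        prog = next((p for p in candidates
                     if all(spec(p, x) for x in examples)), None)
        if prog is None:
            return None                  # program space exhausted
        cex = next((x for x in domain if not spec(prog, x)), None)
        if cex is None:
            return prog                  # no counterexample: verified
        examples.append(cex)             # enlarge S and retry

# Example: synthesize max(x, y) over a small domain.
def is_max(p, xy):
    x, y = xy
    r = p(x, y)
    return r >= x and r >= y and r in (x, y)

candidates = [lambda x, y: x,
              lambda x, y: y,
              lambda x, y: x if x >= y else y]
domain = list(product(range(-2, 3), repeat=2))
solution = cegis(candidates, is_max, domain)
```

Each iteration permanently rules out the current candidate, so the loop terminates after at most as many rounds as there are candidates.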
From Non-preemptive to Preemptive Scheduling Using Synchronization Synthesis
Abstract
We present a computer-aided programming approach to concurrency. The approach allows programmers to program assuming a friendly, non-preemptive scheduler, and our synthesis procedure inserts synchronization to ensure that the final program works even with a preemptive scheduler. The correctness specification is implicit, inferred from the non-preemptive behavior. Let us consider sequences of calls that the program makes to an external interface. The specification requires that any such sequence produced under a preemptive scheduler should be included in the set of such sequences produced under a non-preemptive scheduler. The solution is based on a finitary abstraction, an algorithm for bounded language inclusion modulo an independence relation, and rules for inserting synchronization. We apply the approach to device-driver programming, where the driver threads call the software interface of the device and the API provided by the operating system. Our experiments demonstrate that our synthesis method is precise and efficient, and, since it does not require explicit specifications, is more practical than the conventional approach based on user-provided assertions.
Pavol Černý, Edmund M. Clarke, Thomas A. Henzinger, Arjun Radhakrishna, Leonid Ryzhyk, Roopsha Samanta, Thorsten Tarrach
Counterexample-Guided Quantifier Instantiation for Synthesis in SMT
Abstract
We introduce the first program synthesis engine implemented inside an SMT solver. We present an approach that extracts solution functions from unsatisfiability proofs of the negated form of synthesis conjectures. We also discuss novel counterexample-guided techniques for quantifier instantiation that we use to make finding such proofs practically feasible. A particularly important class of specifications are single-invocation properties, for which we present a dedicated algorithm. To support syntax restrictions on generated solutions, our approach can transform a solution found without restrictions into the desired syntactic form. As an alternative, we show how to use evaluation function axioms to embed syntactic restrictions into constraints over algebraic datatypes, and then use an algebraic datatype decision procedure to drive synthesis. Our experimental evaluation on syntax-guided synthesis benchmarks shows that our implementation in the CVC4 SMT solver is competitive with state-of-the-art tools for synthesis.
Andrew Reynolds, Morgan Deters, Viktor Kuncak, Cesare Tinelli, Clark Barrett
Deductive Program Repair
Abstract
We present an approach to program repair and its application to programs with recursive functions over unbounded data types. Our approach formulates program repair in the framework of deductive synthesis that uses existing program structure as a hint to guide synthesis. We introduce a new specification construct for symbolic tests. We rely on such user-specified tests as well as automatically generated ones to localize the fault and speed up synthesis. Our implementation is able to eliminate errors within seconds from a variety of functional programs, including symbolic computation code and implementations of functional data structures. The resulting programs are formally verified by the Leon system.
Etienne Kneuss, Manos Koukoutos, Viktor Kuncak
Quantifying Conformance Using the Skorokhod Metric
Abstract
The conformance testing problem for dynamical systems asks, given two dynamical models (e.g., as Simulink diagrams), whether their behaviors are “close” to each other. In the semi-formal approach to conformance testing, the two systems are simulated on a large set of tests, and a metric, defined on pairs of real-valued, real-timed trajectories, is used to determine a lower bound on the distance. We show how the Skorokhod metric on continuous dynamical systems can be used as the foundation for conformance testing of complex dynamical models. The Skorokhod metric allows for both state value mismatches and timing distortions, and is thus well suited for checking conformance between idealized models of dynamical systems and their implementations. We demonstrate the robustness of the metric by proving a transference theorem: trajectories close under the Skorokhod metric satisfy “close” logical properties in the timed linear time logic TLTL augmented with a rich class of temporal and spatial constraint predicates. We provide an efficient window-based streaming algorithm to compute the Skorokhod metric, and use it as a basis for a conformance testing tool for Simulink. We experimentally demonstrate the effectiveness of our tool in finding discrepant behaviors on a set of control system benchmarks, including an industrial challenge problem.
Jyotirmoy V. Deshmukh, Rupak Majumdar, Vinayak S. Prabhu
Pareto Curves of Multidimensional Mean-Payoff Games
Abstract
In this paper, we study the set of thresholds that the protagonist can force in a zero-sum two-player multidimensional mean-payoff game. The set of maximal elements of such a set is called the Pareto curve, a classical tool to analyze trade-offs. As thresholds are vectors of real numbers in multiple dimensions, there are usually infinitely many such maximal elements. Our main results are as follows. First, we study the geometry of this set and show that it is definable as a finite union of convex sets given by linear inequalities. Second, we provide a Σ₂^P algorithm to decide whether this set intersects a convex set defined by linear inequalities, and we prove the optimality of our algorithm by providing a matching complexity lower bound for the problem. Furthermore, we show that, under natural assumptions, i.e., a fixed number of dimensions and polynomially bounded weights in the game, the problem can be solved in deterministic polynomial time. Finally, we show that the Pareto curve can be effectively constructed, and under the same natural assumptions, this construction can be done in deterministic polynomial time.
Romain Brenguier, Jean-François Raskin

Termination

Frontmatter
Conflict-Driven Conditional Termination
Abstract
Conflict-driven learning, which is essential to the performance of SAT and SMT solvers, consists of a procedure that searches for a model of a formula and a refutation procedure for proving that no model exists. This paper shows that conflict-driven learning can improve the precision of a termination analysis based on abstract interpretation. We encode non-termination as satisfiability in a monadic second-order logic and use abstract interpreters to reason about the satisfiability of this formula. Our search procedure combines decisions with reachability analysis to find potentially non-terminating executions, and our refutation procedure uses a conditional termination analysis. Our implementation extends the set of conditional termination arguments discovered by an existing termination analyzer.
Vijay D’Silva, Caterina Urban
Predicate Abstraction and CEGAR for Disproving Termination of Higher-Order Functional Programs
Abstract
We propose an automated method for disproving termination of higher-order functional programs. Our method combines higher-order model checking with predicate abstraction and CEGAR. Our predicate abstraction is novel in that it computes a mixture of under- and overapproximations. For non-determinism of a source program (such as random number generation), we apply underapproximation to generate a subset of the actual branches, and check that some branch in the abstract program is non-terminating. For operations on infinite data domains (such as integers), we apply overapproximation to generate a superset of the actual branches, and check that every branch is non-terminating. Thus, disproving termination reduces to the problem of checking a certain branching property of the abstract program, which can be solved by higher-order model checking. We have implemented a prototype non-termination prover based on our method and have confirmed the effectiveness of the proposed approach through experiments.
Takuya Kuwahara, Ryosuke Sato, Hiroshi Unno, Naoki Kobayashi
Complexity of Bradley-Manna-Sipma Lexicographic Ranking Functions
Abstract
In this paper we turn the spotlight on a class of lexicographic ranking functions introduced by Bradley, Manna and Sipma in a seminal CAV 2005 paper, and establish for the first time the complexity of some problems involving the inference of such functions for linear-constraint loops (without precondition). We show that finding such a function, if one exists, can be done in polynomial time in a way which is sound and complete when the variables range over the rationals (or reals). We show that when variables range over the integers, the problem is harder—deciding the existence of a ranking function is coNP-complete. Next, we study the problem of minimizing the number of components in the ranking function (a.k.a. the dimension). This number is interesting in contexts like computing iteration bounds and loop parallelization. Surprisingly, and unlike the situation for some other classes of lexicographic ranking functions, we find that even deciding whether a two-component ranking function exists is harder than the unrestricted problem: NP-complete over the rationals and Σ₂^P-complete over the integers.
Amir M. Ben-Amram, Samir Genaim
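What it means for a lexicographic ranking function to certify a loop transition can be shown with a small check (one common formulation, our sketch; the paper is about inferring such functions and the complexity of doing so, not about this check):

```python
def lex_decreases(before, after):
    """Check a lexicographic decrease between two ranking tuples:
    the first component that changes must strictly decrease and be
    bounded below by zero."""
    for b, a in zip(before, after):
        if a < b:
            return b >= 0      # decreasing component must be nonnegative
        if a > b:
            return False       # an earlier component increased
    return False               # no component decreased

# Transitions of a nested-style loop ranked by the tuple (x, y):
# either x decreases (y may be reset arbitrarily), or y decreases.
```

If every transition of a loop satisfies this check for some fixed tuple of linear functions, the loop terminates, since the tuples decrease in a well-founded lexicographic order.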
Measuring with Timed Patterns
Abstract
We propose a declarative measurement specification language for quantitative performance evaluation of hybrid (discrete-continuous) systems based on simulation traces. We use timed regular expressions with events to specify patterns that define segments of simulation traces over which measurements are to be taken. In addition, we associate measure specifications over these patterns to describe a particular type of performance evaluation (maximization, average, etc.) to be done over the matched signal segments. The resulting language enables expressive and versatile specification of measurement objectives. We develop an algorithm for our measurement framework, implement it in a prototype tool, and apply it in a case study of an automotive communication protocol. Our experiments demonstrate that the proposed technique is usable with very low overhead to a typical (computationally intensive) simulation.
Thomas Ferrère, Oded Maler, Dejan Ničković, Dogan Ulus
Automatic Verification of Stability and Safety for Delay Differential Equations
Abstract
Delay differential equations (DDEs) arise naturally as models of, e.g., networked control systems, where the communication delay in the feedback loop cannot always be ignored. Such delays can prompt oscillations and may cause deterioration of control performance, invalidating both stability and safety properties. Nevertheless, state-exploratory automatic verification methods have until now concentrated on ordinary differential equations (and their piecewise extensions to hybrid state) only, failing to address the effects of delays on system dynamics. We overcome this problem by iterating bounded degree interval-based Taylor overapproximations of the time-wise segments of the solution to a DDE, thereby identifying and automatically analyzing the operator that yields the parameters of the Taylor overapproximation for the next temporal segment from the current one. By using constraint solving for analyzing the properties of this operator, we obtain a procedure able to provide stability and safety certificates for a simple class of DDEs.
Liang Zou, Martin Fränzle, Naijun Zhan, Peter Nazier Mosaad
Time Robustness in MTL and Expressivity in Hybrid System Falsification
Abstract
Building on the work by Fainekos and Pappas and that by Donzé and Maler, we introduce AvSTL, an extension of metric interval temporal logic by averaged temporal operators. Its expressivity in capturing both space and time robustness helps solve falsification problems (searching for a critical path in hybrid system models); it does so by communicating a designer’s intention more faithfully to the stochastic optimization engine employed in a falsification solver. We also introduce a sliding-window-like algorithm that keeps the cost of computing truth/robustness values tractable.
Takumi Akazaki, Ichiro Hasuo
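The space-robustness idea the abstract builds on can be illustrated for the single property "always x > 0" over a sampled trace (a toy of ours; AvSTL's averaged operators have a precise time-weighted semantics that the naive mean below only gestures at):

```python
def robustness_always_pos(trace):
    """Classical space robustness of "always x > 0" over a sampled
    trace: the worst margin by which the signal clears zero.
    Negative means the property is violated."""
    return min(trace)

def averaged_robustness_always_pos(trace):
    """A naive averaged variant in the spirit of AvSTL's averaged
    operators (illustration only, not the paper's exact semantics):
    the mean margin rather than the worst one."""
    return sum(trace) / len(trace)
```

A falsification engine minimizes such a robustness value over simulations; a value dipping below zero is a concrete counterexample trace.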

Concurrency

Frontmatter
Adaptive Concretization for Parallel Program Synthesis
Abstract
Program synthesis tools work by searching for an implementation that satisfies a given specification. Two popular search strategies are symbolic search, which reduces synthesis to a formula passed to a SAT solver, and explicit search, which uses brute force or random search to find a solution. In this paper, we propose adaptive concretization, a novel synthesis algorithm that combines the best of symbolic and explicit search. Our algorithm works by partially concretizing a randomly chosen, but likely highly influential, subset of the unknowns to be synthesized. Adaptive concretization uses an online search process to find the optimal size of the concretized subset using a combination of exponential hill climbing and binary search, employing a statistical test to determine when one degree of concretization is sufficiently better than another. Moreover, our algorithm lends itself to a highly parallel implementation, further speeding up search. We implemented adaptive concretization for Sketch and evaluated it on a range of benchmarks. We found that adaptive concretization is very effective, outperforming Sketch in many cases, sometimes significantly, and that it scales well in parallel.
Jinseong Jeon, Xiaokang Qiu, Armando Solar-Lezama, Jeffrey S. Foster
Automatic Completion of Distributed Protocols with Symmetry
Abstract
A distributed protocol is typically modeled as a set of communicating processes, where each process is described as an extended state machine along with fairness assumptions. Correctness is specified using safety and liveness requirements. Designing correct distributed protocols is a challenging task. Aimed at simplifying this task, we allow the designer to leave some of the guards and updates to state variables in the description of the protocol as unknown functions. The protocol completion problem then is to find interpretations for these unknown functions while guaranteeing correctness. In many distributed protocols, process behaviors are naturally symmetric, and thus, synthesized expressions are further required to obey symmetry constraints. Our counterexample-guided synthesis algorithm consists of repeatedly invoking two phases. In the first phase, candidates for unknown expressions are generated using the SMT solver Z3. This phase requires carefully orchestrating constraints to enforce the desired symmetry constraints. In the second phase, the resulting completed protocol is checked for correctness using a custom-built model checker that handles fairness assumptions, safety and liveness requirements, and exploits symmetry. When model checking fails, our tool examines a set of counterexamples to safety/liveness properties to generate constraints on unknown functions that must be satisfied by subsequent completions. For evaluation, we show that our prototype is able to automatically discover interesting missing details in distributed protocols for mutual exclusion, self-stabilization, and cache coherence.
Rajeev Alur, Mukund Raghothaman, Christos Stergiou, Stavros Tripakis, Abhishek Udupa
An Axiomatic Specification for Sequential Memory Models
Abstract
Formalizations of concurrent memory models often represent memory behavior in terms of sequences of operations, where operations are either reads, writes, or synchronizations. More concrete models of (sequential) memory behavior may include allocation and free operations, but also include details of memory layout or data representation. We present an abstract specification for sequential memory models with allocation and free operations, in the form of a set of axioms that provide enough information to reason about memory without overly constraining the behavior of implementations. We characterize a set of “well-behaved” programs that behave uniformly on all instances of the specification. We show that the specification is both feasible—the CompCert memory model implements it—and usable—we can use the axioms to prove the correctness of an optimization that changes the memory behavior of programs in an LLVM-like language.
William Mansky, Dmitri Garbuzov, Steve Zdancewic
Approximate Synchrony: An Abstraction for Distributed Almost-Synchronous Systems
Abstract
Forms of synchrony can greatly simplify modeling, design, and verification of distributed systems. Thus, recent advances in clock synchronization protocols and their adoption hold promise for system design. However, these protocols synchronize the distributed clocks only within a certain tolerance, and there are transient phases while synchronization is still being achieved. Abstractions used for modeling and verification of such systems should accurately capture these imperfections that cause the system to only be “almost synchronized.” In this paper, we present approximate synchrony, a sound and tunable abstraction for verification of almost-synchronous systems. We show how approximate synchrony can be used for verification of both time synchronization protocols and applications running on top of them. We provide an algorithmic approach for constructing this abstraction for symmetric, almost-synchronous systems, a subclass of almost-synchronous systems. Moreover, we show how approximate synchrony also provides a useful strategy to guide state-space exploration. We have implemented approximate synchrony as a part of a model checker and used it to verify models of the Best Master Clock (BMC) algorithm, the core component of the IEEE 1588 precision time protocol, as well as the time-synchronized channel hopping protocol that is part of the IEEE 802.15.4e standard.
Ankush Desai, Sanjit A. Seshia, Shaz Qadeer, David Broman, John C. Eidson
Automated and Modular Refinement Reasoning for Concurrent Programs
Abstract
We present CIVL, a language and verifier for concurrent programs based on automated and modular refinement reasoning. CIVL supports reasoning about a concurrent program at many levels of abstraction. Atomic actions in a high-level description are refined to fine-grain and optimized lower-level implementations. A novel combination of automata theoretic and logic-based checks is used to verify refinement. Modular specifications and proof annotations, such as location invariants and procedure pre- and post-conditions, are specified separately, independently at each level in terms of the variables visible at that level. We have implemented CIVL as an extension to the Boogie language and verifier. We have used CIVL to refine a realistic concurrent garbage collection algorithm from a simple high-level specification down to a highly-concurrent implementation described in terms of individual memory accesses.
Chris Hawblitzel, Erez Petrank, Shaz Qadeer, Serdar Tasiran
Backmatter
Metadata
Title
Computer Aided Verification
Edited by
Daniel Kroening
Corina S. Păsăreanu
Copyright Year
2015
Electronic ISBN
978-3-319-21668-3
Print ISBN
978-3-319-21667-6
DOI
https://doi.org/10.1007/978-3-319-21668-3