
2016 | Book

Automated Technology for Verification and Analysis

14th International Symposium, ATVA 2016, Chiba, Japan, October 17-20, 2016, Proceedings


About this book

This book constitutes the proceedings of the 14th International Symposium on Automated Technology for Verification and Analysis, ATVA 2016, held in Chiba, Japan, in October 2016.

The 31 papers presented in this volume were carefully reviewed and selected from 82 submissions. They were organized in topical sections named: keynote; Markov models, chains, and decision processes; counter systems, automata; parallelism, concurrency; complexity, decidability; synthesis, refinement; optimization, heuristics, partial-order reductions; solving procedures, model checking; and program analysis.

Table of Contents

Frontmatter

Keynote

Frontmatter
Synthesizing and Completely Testing Hardware Based on Templates Through Small Numbers of Test Patterns

Here we first introduce Quantified Boolean Formula (QBF) based approaches to logic synthesis and testing in general, including automatic correction of designs. The problem is formulated as follows: if appropriate values are assigned to what we call programmable variables, the resulting circuits behave as intended for all possible input values, that is, their logic functions become the intended ones. In this paper we target only combinational circuits and sequential circuits that are time-frame expanded a fixed number of times. The QBF problems are solved by repeatedly applying SAT solvers, not QBF solvers, incrementally adding new constraints in each iteration, derived from counterexamples to the SAT problems. The number of iterations required until a solution is obtained is experimentally shown to be quite small (on the order of tens) even when there are hundreds of inputs, despite their exponentially many value combinations. We then discuss applications of the proposed methodology to logic synthesis, logic debugging, and automatic test pattern generation (ATPG) for multiple faults, with experimental results. In the case of ATPG, a test pattern is generated in each iteration, and the programmable variables can represent complete sets of functional and multiple faults, which are the most general fault models.
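
The abstract's solving loop is the classic counterexample-guided paradigm: one SAT call proposes values for the programmable variables consistent with all counterexamples seen so far, a second SAT call searches for an input on which the programmed circuit still misbehaves, and every such input becomes a new constraint. A minimal sketch of that loop, assuming a hypothetical generic oracle solve() (the paper works on incremental SAT encodings of circuits, not on Python predicates):

    def cegis(spec, circuit, solve):
        # spec(x, y): True iff output y is correct for input x (assumed given)
        # circuit(p, x): output of the template under programmable values p
        # solve(pred): hypothetical SAT-style oracle returning a model or None
        counterexamples = []
        while True:
            # Synthesis step: find p consistent with all counterexamples so far.
            p = solve(lambda p: all(spec(x, circuit(p, x)) for x in counterexamples))
            if p is None:
                return None  # the template cannot realize the specification
            # Verification step: search for an input on which p still fails.
            x = solve(lambda x: not spec(x, circuit(p, x)))
            if x is None:
                return p     # p is correct for every input
            counterexamples.append(x)

Each round adds one constraint, matching the abstract's observation that only tens of iterations are needed in practice.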

Masahiro Fujita

Markov Models, Chains, and Decision Processes

Frontmatter
Approximate Policy Iteration for Markov Decision Processes via Quantitative Adaptive Aggregations

We consider the problem of finding an optimal policy in a Markov decision process that maximises the expected discounted sum of rewards over an infinite time horizon. Since the explicit iterative dynamic programming scheme does not scale as the dimension of the state space increases, a number of approximate methods have been developed. These are typically based on value or policy iteration, enabling further speedups through lumped and distributed updates, or by employing succinct representations of the value functions. However, none of the existing approximate techniques provides general, explicit and tunable bounds on the approximation error, a problem particularly relevant when the level of accuracy affects the optimality of the policy. In this paper we propose a new approximate policy iteration scheme that mitigates the state-space explosion problem by adaptive state-space aggregation, while providing rigorous and explicit error bounds that can be used to control the optimality level of the obtained policy. We evaluate the new approach on a case study, demonstrating that the state-space reduction results in considerable acceleration of the policy iteration scheme while meeting the required level of precision.
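
For orientation, the exact scheme being approximated is standard policy iteration; the sketch below (ours, with the paper's adaptive aggregation and error bookkeeping omitted) shows the evaluate/improve alternation on a finite MDP with per-action transition matrices P[a] and reward matrix R:

    import numpy as np

    def policy_iteration(P, R, gamma, n_states, n_actions, iters=100):
        # P[a] is an n_states x n_states stochastic matrix, R is n_states x n_actions.
        policy = np.zeros(n_states, dtype=int)
        for _ in range(iters):
            # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
            P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
            r_pi = R[np.arange(n_states), policy]
            v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
            # Policy improvement: greedy one-step lookahead.
            q = R + gamma * np.stack([P[a] @ v for a in range(n_actions)], axis=1)
            new_policy = q.argmax(axis=1)
            if np.array_equal(new_policy, policy):
                return policy, v
            policy = new_policy
        return policy, v

The paper replaces the exact evaluation step by evaluation on an adaptively aggregated state space, with explicit bounds on the error this introduces.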

Alessandro Abate, Milan Češka, Marta Kwiatkowska
Optimizing the Expected Mean Payoff in Energy Markov Decision Processes

Energy Markov Decision Processes (EMDPs) are finite-state Markov decision processes where each transition is assigned an integer counter update and a rational payoff. An EMDP configuration is a pair s(n), where s is a control state and n is the current counter value. The configurations are changed by performing transitions in the standard way. We consider the problem of computing a safe strategy (i.e., a strategy that keeps the counter non-negative) which maximizes the expected mean payoff.
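
In symbols (one standard formalization of the abstract's statement, with r(t_i) the payoff of the i-th transition taken): a strategy σ is safe in a configuration s(n) if, with probability one, the counter stays non-negative along the induced runs, and among safe strategies one maximizes the expected mean payoff

$$ \mathbb{E}^{\sigma}\left[ \liminf_{k \to \infty} \frac{1}{k} \sum_{i=1}^{k} r(t_i) \right]. $$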

Tomáš Brázdil, Antonín Kučera, Petr Novotný
Parameter Synthesis for Markov Models: Faster Than Ever

We propose a conceptually simple technique for verifying probabilistic models whose transition probabilities are parametric. The key is to replace parametric transitions by nondeterministic choices of extremal values. Analysing the resulting parameter-free model using off-the-shelf means yields (refinable) lower and upper bounds on probabilities of regions in the parameter space. The technique outperforms the existing analysis of parametric Markov chains by several orders of magnitude regarding both run-time and scalability. Its beauty is its applicability to various probabilistic models. In particular, it provides the first sound and feasible method for performing parameter synthesis of Markov decision processes.
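
A toy worked instance of the relaxation (our example, not from the paper): consider a chain where state s₀ moves to the target with parametric probability p, loops back to s₀ with probability (1-p)/2, and falls into a sink otherwise, with p ranging over the region [0.2, 0.6]. The reachability probability x satisfies

$$ x = p + \frac{1-p}{2}\,x \quad\Longrightarrow\quad x = \frac{2p}{1+p}, $$

so replacing the parametric transition by a nondeterministic choice between the extremal values p = 0.2 and p = 0.6 yields the bounds x ∈ [1/3, 3/4], valid for every p in the region (and tight here because x is monotone in p).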

Tim Quatmann, Christian Dehnert, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen
Bounded Model Checking for Probabilistic Programs

In this paper we investigate the applicability of standard model checking approaches to verifying properties in probabilistic programming. As the operational model for a standard probabilistic program is a potentially infinite parametric Markov decision process, no direct adaptation of existing techniques is possible. Therefore, we propose an on-the-fly approach where the operational model is successively created and verified via a step-wise execution of the program. This approach makes it possible to take key features of many probabilistic programs into account: nondeterminism and conditioning. We discuss the restrictions and demonstrate the scalability on several benchmarks.

Nils Jansen, Christian Dehnert, Benjamin Lucien Kaminski, Joost-Pieter Katoen, Lukas Westhofen

Counter Systems, Automata

Frontmatter
How Hard is It to Verify Flat Affine Counter Systems with the Finite Monoid Property?

We study several decision problems for counter systems with guards defined by convex polyhedra and updates defined by affine transformations. In general, the reachability problem is undecidable for such systems. Decidability can be achieved by imposing two restrictions: (1) the control structure of the counter system is flat, meaning that nested loops are forbidden, and (2) the multiplicative monoid generated by the affine update matrices present in the system is finite. We provide complexity bounds for several decision problems of such systems, by proving that reachability and model checking for Past Linear Temporal Logic lie at the second level of the polynomial hierarchy (Σ^P_2), while model checking for First-Order Logic is PSPACE-complete.
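
Restriction (2) can be made concrete with a small example (ours): the simultaneous affine update (x, y) := (y + 1, x) has the update matrix

$$ A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad A^2 = I, $$

so the multiplicative monoid generated by the update matrices is the finite set {I, A}; by contrast, the doubling update x := 2x generates the infinite monoid {1, 2, 4, 8, ...} and falls outside the class.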

Radu Iosif, Arnaud Sangnier
Solving Language Equations Using Flanked Automata

We define a new subclass of nondeterministic finite automata for prefix-closed languages called Flanked Finite Automata (FFA). Our motivation is to provide an efficient way to compute the quotient and inclusion of regular languages without the need to determinize the underlying automata. These operations are the building blocks of several verification algorithms that can be interpreted as language equation solving problems. We provide a construction for computing a FFA accepting the quotient and product of languages that is compositional and that does not incur an exponential blow up in size. This makes flanked automata a good candidate as a formalism for compositional design and verification of systems.

Florent Avellaneda, Silvano Dal Zilio, Jean-Baptiste Raclet
Spot 2.0 — A Framework for LTL and ω-Automata Manipulation

We present Spot 2.0, a C++ library with Python bindings and an assortment of command-line tools designed to manipulate LTL and ω-automata in batch. New automata-manipulation tools were introduced in Spot 2.0; they support arbitrary acceptance conditions, as expressible in the Hanoi Omega-Automata (HOA) format. Besides being useful to researchers who have automata to process, its Python bindings can also be used in interactive environments to teach ω-automata and model checking.
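
A minimal interactive-use sketch of the Python bindings (based on Spot's documented translate entry point; option strings may vary between versions):

    import spot  # requires the Spot 2.x Python bindings

    # Translate an LTL formula into an omega-automaton and print it
    # in the Hanoi Omega-Automata (HOA) format.
    aut = spot.translate('G F a -> G F b')
    print(aut.to_str('hoa'))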

Alexandre Duret-Lutz, Alexandre Lewkowicz, Amaury Fauchille, Thibaud Michaud, Étienne Renault, Laurent Xu
MoChiBA: Probabilistic LTL Model Checking Using Limit-Deterministic Büchi Automata

The limiting factor for quantitative analysis of Markov decision processes (MDP) against specifications given in linear temporal logic (LTL) is the size of the generated product. As recently shown, a special subclass of limit-deterministic Büchi automata (LDBA) can replace deterministic Rabin automata in quantitative probabilistic model checking algorithms. We present an extension of PRISM for LTL model checking of MDP using LDBA. While existing algorithms can be used only with minimal changes, the new approach takes advantage of the special structure and the smaller size of the obtained LDBA to speed up the model checking. We demonstrate the speed up experimentally by a comparison with other approaches.

Salomon Sickert, Jan Křetínský

Parallelism, Concurrency

Frontmatter
Synchronous Products of Rewrite Systems

We present a concept of module composition for rewrite systems that we call synchronous product, and also a corresponding concept for doubly labeled transition systems (as proposed by De Nicola and Vaandrager) used as semantics for the former. In both cases, synchronization happens on states and on transitions, providing in this way more flexibility and more natural specifications. We describe our implementation in Maude, a rewriting logic-based language and system. A series of examples shows their use for modular specification and hints at other possible uses, including modular verification.

Óscar Martín, Alberto Verdejo, Narciso Martí-Oliet
Specifying and Verifying Secrecy in Workflows with Arbitrarily Many Agents

Web-based workflow management systems, like EasyChair, HealthVault, Ebay, or Amazon, often deal with confidential information such as the identity of reviewers, health data, or credit card numbers. Because the number of participants in the workflow is in principle unbounded, it is difficult to describe the information flow policy of such systems in specification languages that are limited to a fixed number of agents. We introduce a first-order version of HyperLTL, which allows us to express information flow requirements in workflows with arbitrarily many agents. We present a bounded model checking technique that reduces the violation of the information flow policy to the satisfiability of a first-order formula. We furthermore identify conditions under which the resulting satisfiability problem is guaranteed to be decidable.

Bernd Finkbeiner, Helmut Seidl, Christian Müller
Lazy Sequentialization for the Safety Verification of Unbounded Concurrent Programs

Lazy sequentialization has emerged as one of the most promising approaches for concurrent program analysis, but the only efficient implementation given so far works just for bounded programs. This restricts the approach to bug-finding purposes. In this paper, we describe and evaluate a new lazy sequentialization translation that does not unwind loops and thus allows the analysis of unbounded computations, even with an unbounded number of context switches. In connection with an appropriate sequential backend verification tool, it can thus also be used for the safety verification of concurrent programs, rather than just for bug-finding. The main technical novelty of our translation is the simulation of thread resumption in a way that does not use gotos and thus does not require that each statement is executed at most once. We have implemented this translation in the UL-CSeq tool for C99 programs that use the pthreads API. We evaluate UL-CSeq on several benchmarks, using different sequential verification backends on the sequentialized program, and show that it is more effective than previous approaches in proving the correctness of the safe benchmarks, while remaining competitive with state-of-the-art approaches for finding bugs in the unsafe benchmarks.

Truc L. Nguyen, Bernd Fischer, Salvatore La Torre, Gennaro Parlato
Parallel SMT-Based Parameter Synthesis with Application to Piecewise Multi-affine Systems

We propose a novel scalable parallel algorithm for synthesis of interdependent parameters from CTL specifications for non-linear dynamical systems. The method employs a symbolic representation of sets of parameter valuations in terms of the first-order theory of the reals. To demonstrate its practicability, we apply the method to a class of piecewise multi-affine dynamical systems representing dynamics of biological systems with complex non-linear behaviour.

Nikola Beneš, Luboš Brim, Martin Demko, Samuel Pastva, David Šafránek

Complexity, Decidability

Frontmatter
On Finite Domains in First-Order Linear Temporal Logic

We consider First-Order Linear Temporal Logic (FO-LTL) over linear time. Inspired by the success of formal approaches based upon finite-model finders, such as Alloy, we focus on finding models with finite first-order domains for FO-LTL formulas, while retaining an infinite time domain. More precisely, we investigate the complexity of the following problem: given a formula φ and an integer n, is there a model of φ with domain of cardinality at most n? We show that depending on the logic considered (FO or FO-LTL) and on the precise encoding of the problem, the problem is either NP-complete, NEXPTIME-complete, PSPACE-complete or EXPSPACE-complete. In a second part, we exhibit cases where the Finite Model Property can be lifted from fragments of FO to their FO-LTL extension.
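
For illustration (our own example), consider the FO-LTL formula

$$ \varphi \;=\; \mathbf{G}\, \exists x.\, \big( P(x) \wedge \mathbf{X}\, \neg P(x) \big), $$

which requires that at every instant some domain element satisfies P now but not at the next instant. It has no model with a one-element domain (the single element would have to both keep and lose P at every step), but alternating P between the two elements of a two-element domain yields a model, so the bounded-domain question is answered negatively for n = 1 and positively for n = 2.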

Denis Kuperberg, Julien Brunel, David Chemouil
Decidability Results for Multi-objective Stochastic Games

We study stochastic two-player turn-based games in which the objective of one player is to ensure several infinite-horizon total reward objectives, while the other player attempts to spoil at least one of the objectives. The games have previously been shown not to be determined, and an approximation algorithm for computing a Pareto curve has been given. The major drawback of the existing algorithm is that it needs to compute Pareto curves for finite-horizon objectives (for increasing length of the horizon), and the size of these Pareto curves can grow unboundedly, even when the infinite-horizon Pareto curve is small. By adapting existing results, we first give an algorithm that computes the Pareto curve for determined games. Then, as the main result of the paper, we show that for the natural class of stopping games and when there are two reward objectives, the problem of deciding whether a player can ensure satisfaction of the objectives with given thresholds is decidable. The result relies on an intricate and novel proof which shows that the Pareto curves contain only finitely many points. As a consequence, we get that the two-objective discounted-reward problem for the unrestricted class of stochastic games is decidable.

Romain Brenguier, Vojtěch Forejt
A Decision Procedure for Separation Logic in SMT

This paper presents a complete decision procedure for the entire quantifier-free fragment of Separation Logic (SL) interpreted over heaplets with data elements ranging over a parametric multi-sorted (possibly infinite) domain. The algorithm uses a combination of theories and is used as a specialized solver inside a DPLL(T) architecture. A prototype was implemented within the CVC4 SMT solver. Preliminary evaluation suggests the possibility of using this procedure as a building block of a more elaborate theorem prover for SL with inductive predicates, or as a back-end of a bounded model checker for programs with low-level pointer and data manipulations.
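
To fix intuitions about the fragment (a standard example, not taken from the paper): the quantifier-free SL formula

$$ x \mapsto y \;*\; y \mapsto x $$

describes exactly the heaplets made of two distinct cells forming a cycle, the separating conjunction * forcing the two points-to assertions to hold on disjoint portions of the heap; the procedure decides satisfiability and entailment for such formulas, with data drawn from an arbitrary parametric theory.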

Andrew Reynolds, Radu Iosif, Cristina Serban, Tim King
Solving Mean-Payoff Games on the GPU

General purpose computation on graphics processing units (GPGPU) is a recent trend in areas which heavily depend on linear algebra, in particular solving large systems of linear equations. Many games, both qualitative (e.g. parity games) and quantitative (e.g. mean-payoff games), can be seen as systems of linear equations, too, albeit over more general algebraic structures. Building on our GPU-based implementation of several solvers for parity games [8], we present in this paper a solver for mean-payoff games. Our implementation uses OpenCL, which allows us to execute it without any changes on both the CPU and the GPU, allowing for direct comparison. We evaluate our implementation on several benchmarks (obtained via reduction from parity games and optimization of controllers for hybrid systems [10]), where we obtain a speedup of up to 10 on the GPU in cases of MPGs with 20·10^6 nodes and 60·10^6 edges.
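
The "systems of linear equations over more general algebraic structures" are Bellman-style fixed points in the max/min-plus semiring. A minimal sequential sketch of one such scheme for mean-payoff games, in the style of Zwick and Paterson (ours; the paper's OpenCL implementation parallelizes updates of this kind across nodes):

    def mpg_value_iteration(nodes, edges, owner, rounds):
        # edges[u]: list of (target, weight) pairs; owner[u]: 'max' or 'min'.
        # After k rounds, v[u] / k approximates the mean-payoff value of u.
        v = {u: 0 for u in nodes}
        for k in range(rounds):
            v = {u: (max if owner[u] == 'max' else min)
                    (w + v[t] for (t, w) in edges[u])
                 for u in nodes}
        return v

Each round is a single matrix-vector-style pass over the edges, which is what makes the scheme amenable to GPU execution.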

Philipp J. Meyer, Michael Luttenberger

Synthesis, Refinement

Frontmatter
Synthesizing Skeletons for Reactive Systems

We present an analysis technique for temporal specifications of reactive systems that identifies, on the level of individual system outputs over time, which parts of the implementation are determined by the specification, and which parts are still open. This information is represented in the form of a labeled transition system, which we call skeleton. Each state of the skeleton is labeled with a three-valued assignment to the output variables: each output can be true, false, or open, where true or false means that the value must be true or false, respectively, and open means that either value is still possible. We present algorithms for the verification of skeletons and for the learning-based synthesis of skeletons from specifications in linear-time temporal logic (LTL). The algorithm returns a skeleton that satisfies the given LTL specification in time polynomial in the size of the minimal skeleton. Our new analysis technique can be used to recognize and repair specifications that underspecify critical situations. The technique thus complements existing methods for the recognition and repair of overspecifications via the identification of unrealizable cores.
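
A small illustration of the three-valued labeling (our own example): for the LTL specification

$$ \mathbf{G}\,(r \rightarrow \mathbf{X}\, g) $$

the skeleton state reached immediately after an input with r labels the output g as true, since the specification forces the grant, while in all other states g is open: an implementation may emit the grant or not, and either choice remains correct.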

Bernd Finkbeiner, Hazem Torfah
Observational Refinement and Merge for Disjunctive MTSs

A Modal Transition System (MTS) is a well-studied formalism for partial model specification. It allows a modeller to distinguish between required, prohibited and possible transitions. Disjunctive MTS (DMTS) is an extension of MTS that has been getting attention in recent years. A key concept for (D)MTS is refinement, supporting a development process where abstract specifications are gradually refined into more concrete ones. Refinement comes in different flavours: strong, observational (where τ-labelled transitions are taken into account), and alphabet (allowing the comparison of models defined on different alphabets). Another important operation on (D)MTS is that of merge: given two models M and N, their merge is a model P which refines both M and N, and which is the least refined one. In this paper, we fill several missing parts in the theory of DMTS refinement and merge. First and foremost, we define observational refinement for DMTS. While an elementary concept, such a definition is missing from the literature to the best of our knowledge. We prove that our definition is sound and that it complies with all relevant definitions from the literature. Based on the new observational refinement for DMTS, we examine the question of DMTS merge, which was defined so far for strong refinement only. We show that observational merge can be achieved as a natural extension of the existing algorithm for strong merge of DMTS. For alphabet merge, however, the situation is different: we prove that DMTSs do not have a merge under alphabet refinement.

Shoham Ben-David, Marsha Chechik, Sebastian Uchitel
Equivalence-Based Abstraction Refinement for HORS Model Checking

Kobayashi and Igarashi proposed model checking of μHORS (recursively-typed higher-order recursion schemes), by which a wide range of programs, such as object-oriented programs and multi-threaded programs, can be precisely modeled and verified. In this work, we present a procedure for μHORS model checking that improves the procedure based on automata-based abstraction refinement proposed by Kobayashi and Li. The new procedure optimizes each step of the abstract-check-refine paradigm of the previous procedure. Specifically, it combines the strengths of automata-based and type-based abstraction refinement into equivalence-based abstraction refinement. We have implemented the new procedure and confirmed that it always outperformed the original automata-based procedure in runtime efficiency, and successfully verified all benchmarks that were previously out of reach.

Xin Li, Naoki Kobayashi

Optimization, Heuristics, Partial-Order Reductions

Frontmatter
Greener Bits: Formal Analysis of Demand Response

Demand response is a promising approach to deal with the emerging power generation fluctuations introduced by the increasing amount of renewable energy sources fed into the grid. Consumers need to be able to adapt their energy consumption with respect to the given demand pattern and at the same time ensure that their adaptation (i.e., response) does not interfere with their various operational objectives. Finding, evaluating and verifying adaptation strategies which aim to be optimal w.r.t. multiple criteria is a challenging task and is currently mainly addressed by hand, heuristics or guided simulation. In this paper we carry out a case study of a demand response system with an energy adaptive data center on the consumer side for which we propose a formal model and perform a quantitative system analysis using probabilistic model checking. Our first contribution is a fine-grained formal model and the identification of significant properties and quantitative measures (e.g., expected energy consumption, average workload or total penalties for violating adaptation contracts) that are relevant for the data center as an adaptive consumer. The formal model can serve as a starting point for the application of different formal analysis methods. The second contribution is an evaluation of our approach using the prominent model checker PRISM. We report on the experimental results computing various functional properties and quantitative measures that yield important insights into the viability of given adaptation strategies and how to find close-to-optimal strategies.

Christel Baier, Sascha Klüppelholz, Hermann de Meer, Florian Niedermeier, Sascha Wunderlich
Heuristics for Checking Liveness Properties with Partial Order Reductions

Checking liveness properties with partial-order reductions requires a cycle proviso to ensure that an action cannot be postponed forever. The proviso forces each cycle to contain at least one fully expanded state. We present new heuristics to select which state to expand, hoping to reduce the size of the resulting graph. The choice of the state to expand is done when encountering a “dangerous edge”. Almost all existing provisos expand the source of this edge, while this paper also explores the expansion of the destination and the use of SCC-based information.

Alexandre Duret-Lutz, Fabrice Kordon, Denis Poitrenaud, Etienne Renault
Partial-Order Reduction for GPU Model Checking

Model checking using GPUs has seen increased popularity over the last years. Because GPUs have a limited amount of memory, only small to medium-sized systems can be verified. For on-the-fly explicit-state model checking, we improve memory efficiency by applying partial-order reduction. We propose novel parallel algorithms for three practical approaches to partial-order reduction. Correctness of the algorithms is proved using a new, weaker version of the cycle proviso. Benchmarks show that our implementation achieves a reduction similar to or better than the state-of-the-art techniques for CPUs, while the amount of runtime overhead is acceptable.

Thomas Neele, Anton Wijs, Dragan Bošnački, Jaco van de Pol
Efficient Verification of Program Fragments: Eager POR

Software verification of concurrent programs is hampered by an exponentially growing state space due to non-deterministic process scheduling. Partial order reduction (POR)-based verification has proven to be a powerful technique to handle large state spaces. In this paper, we propose a novel dynamic POR algorithm, called Eager POR (EPOR), that requires considerably less overhead during state space exploration than existing algorithms. EPOR is based on a formal characterization of program fragments for which exploration can be scheduled in advance and dependency checks can be avoided. We show the correctness of this characterization and evaluate the performance of EPOR in comparison to existing state-of-the-art dynamic POR algorithms. Our evaluation shows substantial improvement in runtime performance, by up to 91%.

Patrick Metzler, Habib Saissi, Péter Bokor, Robin Hesse, Neeraj Suri

Solving Procedures, Model Checking

Frontmatter
Skolem Functions for DQBF

We consider the problem of computing Skolem functions for satisfied dependency quantified Boolean formulas (DQBFs). We show how Skolem functions can be obtained from an elimination-based DQBF solver and how to take preprocessing steps into account. The size of the Skolem functions is optimized by don’t-care minimization using Craig interpolants and rewriting techniques. Experiments with our DQBF solver HQS show that we are able to effectively compute Skolem functions with very little overhead compared to the mere solution of the formula.

Karina Wimmer, Ralf Wimmer, Christoph Scholl, Bernd Becker
STL Model Checking of Continuous and Hybrid Systems

Signal Temporal Logic (STL) is a formalism for reasoning about temporal properties of continuous-time traces of hybrid systems. Previous work on this subject mostly focuses on robust satisfaction of an STL formula by a particular trace. In contrast, we present a method for formally verifying an STL formula for continuous and hybrid system models, which exhibit uncountably many traces. We consider an abstraction of a model as an evolution of reachable sets. By leveraging the representation of the abstraction, the continuous-time verification problem is reduced to a discrete-time problem. For the given abstraction, the reduction to discrete time and our decision procedure are sound and complete for finitely represented reach sequences and sampled-time STL formulas. Our method does not rely on a special representation of reachable sets, and thus any reachability analysis tool can be used to generate the reachable sets. The benefit of the method is illustrated on an example from the context of automated driving.
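
A typical STL formula over a continuous-time trace x(t) (a generic example, not the paper's automated-driving property):

$$ \mathbf{G}_{[0,10]}\big( |x| > 0.5 \;\rightarrow\; \mathbf{F}_{[0,2]}\, |x| \le 0.1 \big) $$

states that, during the first ten time units, every excursion of |x| above 0.5 is followed within two time units by a return below 0.1; the method checks such formulas against a sequence of reachable sets rather than against individual traces.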

Hendrik Roehm, Jens Oehlerking, Thomas Heinz, Matthias Althoff
Clause Sharing and Partitioning for Cloud-Based SMT Solving

Satisfiability modulo theories (SMT) allows the modeling and solving of constraint problems arising from practical domains by combining well-engineered and powerful solvers for propositional satisfiability with expressive, domain-specific background theories in a natural way. The increasing popularity of SMT as a modeling approach means that SMT solvers need to handle increasingly complex problem instances. This paper studies how SMT solvers can use cloud computing to scale to challenging problems through the sharing of learned information in the form of clauses, with approaches based on both divide-and-conquer and algorithm portfolios. Our initial experiments, executed on the OpenSMT2 solver, show that parallelization with clause sharing speeds up the solving of instances, on average, by a factor of four or five, depending on the problem domain.

Matteo Marescotti, Antti E. J. Hyvärinen, Natasha Sharygina
Symbolic Model Checking for Factored Probabilistic Models

The long line of research in probabilistic model checking has resulted in efficient symbolic verification engines. Nevertheless, scalability is still a key concern. In this paper we ask two questions. First, can we lift, to the probabilistic world, successful hardware verification techniques that exploit local variable dependencies in the analyzed model? And second, will those techniques lead to significant performance improvement on models with such structure, such as dynamic Bayesian networks? To the first question we give a positive answer by proposing a probabilistic model checking approach based on a factored symbolic representation of the transition probability matrix of the analyzed model. Our experimental evaluation on several benchmarks designed to favour this approach answers the second question negatively. Intuitively, the reason is that the effect of techniques for reducing the size of BDD-based symbolic representations does not carry over to quantitative symbolic data structures. More precisely, the size of MTBDDs depends not only on the number of variables but also on the number of distinct terminals they have (which limits sharing), and the latter is not reduced by these techniques.

David Deininger, Rayna Dimitrova, Rupak Majumdar

Program Analysis

Frontmatter
A Sketching-Based Approach for Debugging Using Test Cases

Manually locating and removing bugs in faulty code is often tedious and error-prone. Despite much progress in automated debugging, developing effective debugging techniques remains a challenge. This paper introduces a novel approach that uses a well-known program synthesis technique to automate debugging. As inputs, our approach takes a program and a test suite (with some passing and some failing tests), similar to various other recent techniques. Our key insight is to reduce the problem of finding a fix to the problem of program sketching. We translate the faulty program into a sketch of the correct program, and use off-the-shelf sketching technology to create a program that is correct with respect to the given test cases. An experimental evaluation using a suite of small yet complex programs shows that a prototype implementation of our approach is more effective than the previous state of the art.
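
A toy rendering of the reduction (ours; the paper targets Java programs via the Sketch synthesizer rather than Python): the suspicious expression of the faulty program is replaced by a hole, and filling the hole so that all tests pass yields the repair.

    # Faulty program: def abs_val(x): return x if x < 0 else x
    # Sketch: the suspicious branch becomes a hole over candidate repairs.
    tests = [(-3, 3), (0, 0), (5, 5)]            # (input, expected) pairs
    candidates = [lambda x: x, lambda x: -x,     # hole instantiations
                  lambda x: x + 1, lambda x: 0]

    def sketched_abs(hole):
        return lambda x: hole(x) if x < 0 else x

    repairs = [h for h in candidates
               if all(sketched_abs(h)(i) == o for (i, o) in tests)]
    # Only the negation candidate survives, i.e. the correct fix.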

Jinru Hua, Sarfraz Khurshid
Polynomial Invariants by Linear Algebra

We present in this paper a new technique for generating polynomial invariants, divided into two independent parts: a procedure that, under certain hypotheses, reduces the analysis of loops composed of polynomial assignments to the analysis of linear loops, and a procedure for generating inductive invariants for linear loops. Both of these techniques have polynomial complexity for a bounded number of variables, and we guarantee completeness of the technique for a bounded degree. We have successfully implemented the technique for the verification of C programs.
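
The first procedure can be illustrated on a toy loop (our example of the general monomial-lifting idea): for the simultaneous polynomial update (x, s) := (x + 1, s + x²), introducing the auxiliary variable u = x² makes every update linear, since u' = (x + 1)² = u + 2x + 1:

$$ \begin{pmatrix} x' \\ u' \\ s' \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} x \\ u \\ s \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}. $$

Invariant generation for the lifted linear loop then yields polynomial relations among x, u = x², and s in the original loop.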

Steven de Oliveira, Saddek Bensalem, Virgile Prevosto
Certified Symbolic Execution

We propose a certification approach for checking the analysis results produced by symbolic execution. Given a program P under test, an analysis producer performs symbolic execution on P and creates a certificate C that represents the results of symbolic execution. The analysis consumer checks the validity of C with respect to P using efficient symbolic re-execution of P. The certificates are simple to create and easy to validate. Each certificate is a list of witnesses that include: test inputs that validate path feasibility without requiring any constraint solving; and infeasibility summaries that provide hints on how to efficiently establish path infeasibility. To account for incompleteness in symbolic execution (due to incompleteness of the backend solver), the certificate also contains an incompleteness summary. Our approach deploys constraint slicing and other heuristics as performance optimizations. Experimental results using a prototype certification tool based on Symbolic PathFinder for Java show that certification can be 3X to 370X (on average, 75X) faster than traditional symbolic execution. We also show the benefits of the approach for the reliability assessment of a software component under different probabilistic environment conditions.

Rui Qiu, Corina S. Păsăreanu, Sarfraz Khurshid
Tighter Loop Bound Analysis

We present a new algorithm for computing upper bounds on the number of executions of each program instruction during any single program run. The upper bounds are expressed as functions of program input values. The algorithm is primarily designed to produce bounds that are relatively tight, i.e. not unnecessarily blown up. The upper bounds for instructions allow us to infer loop bounds, i.e. upper bounds on the number of loop iterations. Experimental results show that the algorithm implemented in a prototype tool Looperman often produces tighter bounds than current tools for loop bound analysis.
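
A minimal example of such a symbolic bound (ours): for the loop below, the expected output is that the body executes at most ⌈n/2⌉ times, expressed as a function of the input n.

    def skip_by_two(n):
        i = 0
        count = 0          # number of executions of the loop body
        while i < n:       # loop bound: ceil(n / 2)
            count += 1
            i += 2
        return count       # e.g. skip_by_two(5) == 3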

Pavel Čadek, Jan Strejček, Marek Trtík
Backmatter
Metadata
Title
Automated Technology for Verification and Analysis
Edited by
Cyrille Artho
Axel Legay
Doron Peled
Copyright Year
2016
Electronic ISBN
978-3-319-46520-3
Print ISBN
978-3-319-46519-7
DOI
https://doi.org/10.1007/978-3-319-46520-3