
2011 | Book

Tools and Algorithms for the Construction and Analysis of Systems

17th International Conference, TACAS 2011, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2011, Saarbrücken, Germany, March 26–April 3, 2011. Proceedings

Edited by: Parosh Aziz Abdulla, K. Rustan M. Leino

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 17th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2011, held in Saarbrücken, Germany, March 26–April 3, 2011, as part of ETAPS 2011, the European Joint Conferences on Theory and Practice of Software. The 32 revised full papers presented were carefully reviewed and selected from 112 submissions. The papers are organized in topical sections on memory models and consistency, invariants and termination, timed and probabilistic systems, interpolations and SAT-solvers, learning, model checking, games and automata, verification, and probabilistic systems.

Table of Contents

Frontmatter

Reliable Software Development: Analysis-Aware Design

Reliable Software Development: Analysis-Aware Design

The application of formal methods in software development does not have to be an all-or-nothing proposition. Progress can be made with the introduction of relatively unobtrusive techniques that simplify analysis. This approach is meant to replace traditional analysis-agnostic coding with an analysis-aware style of software development.

Gerard J. Holzmann
Transition Invariants and Transition Predicate Abstraction for Program Termination

Originally, the concepts of transition invariants and transition predicate abstraction were used to formulate a proof rule and an abstraction-based algorithm for the verification of liveness properties of concurrent programs under fairness assumptions. This note applies the two concepts for proving termination of sequential programs. We believe that the specialized setting exhibits the underlying principles in a more direct way.

Andreas Podelski, Andrey Rybalchenko
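The specialized setting can be illustrated with a toy, finite-domain check (an assumed example, not the authors' proof rule in full generality): a transition invariant is any superset of the transitive closure of the program's transition relation, and termination follows when it is covered by a finite union of well-founded relations.

```python
# Toy sketch (assumed example): for the loop "while x > 0: x = x - 1"
# restricted to a finite domain, compute the transitive closure of the
# transition relation and check that it is covered by the well-founded
# relation x' < x. Covering the closure by (a union of) well-founded
# relations is exactly what a transition invariant provides.

DOMAIN = range(10)

def step(x):
    # one loop iteration; no successor once the guard x > 0 fails
    return [x - 1] if x > 0 else []

trans = {(x, y) for x in DOMAIN for y in step(x)}

# transitive closure by fixpoint iteration
closure = set(trans)
while True:
    new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
    if new <= closure:
        break
    closure |= new

# the single disjunct "x' < x" covers the closure and is well-founded
# on the naturals, so the loop terminates
assert all(y < x for (x, y) in closure)
print(len(closure))  # number of pairs (x, y) with 0 <= y < x <= 9
```

With one disjunct the check is trivial; the interest of the proof rule lies in splitting the closure of a more complex loop into several relations, each well-founded on its own.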

Memory Models and Consistency

Sound and Complete Monitoring of Sequential Consistency for Relaxed Memory Models

We present a technique for verifying that a program has no executions violating sequential consistency (SC) when run under the relaxed memory models Total Store Order (TSO) and Partial Store Order (PSO). The technique works by monitoring sequentially consistent executions of a program to detect if similar program executions could fail to be sequentially consistent under TSO or PSO. We propose novel monitoring algorithms that are sound and complete for TSO and PSO—if a program can exhibit an SC violation under TSO or PSO, then the corresponding monitor can detect this on some SC execution. The monitoring algorithms arise naturally from the operational definitions of these relaxed memory models, highlighting an advantage of viewing relaxed memory models operationally rather than axiomatically. We apply our technique to several concurrent data structures and synchronization primitives, detecting a number of violations of sequential consistency.

Jacob Burnim, Koushik Sen, Christos Stergiou
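The operational view of TSO that the monitors build on can be sketched with a toy interleaving enumerator (an assumed illustration, not the paper's monitoring algorithm): modelling each store as a buffered issue plus a later flush makes the classic store-buffering outcome (0, 0) appear, while atomic stores (SC) exclude it.

```python
from itertools import permutations

# Store-buffering litmus test (toy model, assumed for illustration):
#   Thread 0: x := 1 ; r0 := y        Thread 1: y := 1 ; r1 := x
# Under TSO each store is issued to a per-thread buffer ("S") and flushed
# to memory later ("F"); the flush may happen at any point after the store.
OPS_TSO = [(0, "S"), (0, "L"), (0, "F"), (1, "S"), (1, "L"), (1, "F")]
OPS_SC = [(0, "S"), (0, "L"), (1, "S"), (1, "L")]

def valid(order, tso):
    pos = {op: i for i, op in enumerate(order)}
    for t in (0, 1):
        if pos[(t, "S")] > pos[(t, "L")]:      # program order: store, then load
            return False
        if tso and pos[(t, "S")] > pos[(t, "F")]:  # flush only after its store
            return False
    return True

def execute(order, tso):
    mem = {"x": 0, "y": 0}
    buf = {}                       # thread -> buffered (var, val)
    regs = {}
    var_of = {0: "x", 1: "y"}      # thread t stores var_of[t], loads the other,
                                   # so own-buffer forwarding never triggers here
    for t, kind in order:
        if kind == "S":
            if tso:
                buf[t] = (var_of[t], 1)
            else:
                mem[var_of[t]] = 1
        elif kind == "F":
            var, val = buf.pop(t)
            mem[var] = val
        else:
            regs[t] = mem[var_of[1 - t]]
    return regs[0], regs[1]

def outcomes(tso):
    ops = OPS_TSO if tso else OPS_SC
    return {execute(o, tso) for o in permutations(ops) if valid(o, tso)}

print(outcomes(tso=False))  # SC: (0, 0) is impossible
print(outcomes(tso=True))   # TSO: (0, 0) appears
```

The monitor described in the paper detects such violations on SC executions alone; this sketch merely shows why the split store gives the relaxed model its extra behaviour.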
Compositionality Entails Sequentializability

We show that any concurrent program that is amenable to compositional reasoning can be effectively translated to a sequential program. More precisely, we give a reduction from the verification problem for concurrent programs against safety specifications to the verification of sequential programs against safety specifications, where the reduction is parameterized by a set of auxiliary variables A, such that the concurrent program compositionally satisfies its specification using auxiliary variables A iff the sequentialization satisfies its specification. Existing sequentializations for concurrent programs work only for underapproximations like bounded context-switching, while our sequentialization has the salient feature that it can prove concurrent programs entirely correct, as long as it has a compositional proof. The sequentialization allows us to use sequential verification tools (including deductive verification tools and predicate abstraction tools) to analyze and prove concurrent programs correct. We also report on our experience in the deductive verification of concurrent programs by proving their sequential counterparts using the program verifier Boogie.

Pranav Garg, P. Madhusudan
Litmus: Running Tests against Hardware

Shared memory multiprocessors typically expose subtle, poorly understood and poorly specified relaxed-memory semantics to programmers. To understand them, and to develop formal models to use in program verification, we find it essential to take an empirical approach, testing what results parallel programs can actually produce when executed on the hardware. We describe a key ingredient of our approach, our litmus tool, which takes small ‘litmus test’ programs and runs them for many iterations to find interesting behaviour. It embodies various techniques for making such interesting behaviour appear more frequently.

Jade Alglave, Luc Maranget, Susmit Sarkar, Peter Sewell

Invariants and Termination

Canonized Rewriting and Ground AC Completion Modulo Shostak Theories

AC-completion efficiently handles equality modulo associative and commutative function symbols. When the input is ground, the procedure terminates and provides a decision algorithm for the word problem. In this paper, we present a modular extension of ground AC-completion for deciding formulas in the combination of the theory of equality with user-defined AC symbols, uninterpreted symbols, and an arbitrary signature-disjoint Shostak theory X. Our algorithm, called AC(X), is obtained by augmenting ground AC-completion, in a modular way, with the canonizer and solver present for the theory X. This integration rests on canonized rewriting, a new relation reminiscent of normalized rewriting, which integrates canonizers in rewriting steps. AC(X) is proved sound, complete, and terminating, and is implemented to extend the core of the Alt-Ergo theorem prover.

Sylvain Conchon, Evelyne Contejean, Mohamed Iguernelala
Invariant Generation in Vampire

This paper describes a loop invariant generator implemented in the theorem prover Vampire. It is based on the symbol elimination method proposed by two authors of this paper. The generator accepts a program written in a subset of C, finds loops in it, analyses the loops, generates and outputs invariants. It also uses a special consequence removal mode added to Vampire to remove invariants implied by other invariants. The generator is implemented as a standalone tool, thus no knowledge of theorem proving is required from its users.

Kryštof Hoder, Laura Kovács, Andrei Voronkov
Enforcing Structural Invariants Using Dynamic Frames

The theory of dynamic frames is a promising approach to handle the so-called framing problem, that is, giving a precise characterization of the locations in the heap that a procedure may modify.

In this paper, we show that the machinery used for dynamic frames may be exploited even further. In particular, we use it to check that implementations of abstract data types maintain certain structural invariants that are very hard to express with usual means, including being acyclic (like non-circular linked lists and trees) and having a unique path between nodes (like in a tree).

The idea is that regions in this formalism over-approximate the set of reachable objects. We can then maintain these structural invariants by including special preconditions in assignments, of the kind that can be verified by state-of-the-art SMT-based tools. To test this approach we modified the verifier for the Dafny programming language in a suitable way and were able to enforce these invariants in non-trivial examples.

Diego Garbervetsky, Daniel Gorín, Ariel Neisen
Loop Summarization and Termination Analysis

We present a technique for program termination analysis based on loop summarization. The algorithm relies on a library of abstract domains to discover well-founded transition invariants. In contrast to state-of-the-art methods it aims to construct a complete ranking argument for all paths through a loop at once, thus avoiding expensive enumeration of individual paths. Compositionality is used as a completeness criterion for the discovered transition invariants. The practical efficiency of the approach is evaluated using a set of Windows device drivers.

Aliaksei Tsitovich, Natasha Sharygina, Christoph M. Wintersteiger, Daniel Kroening

Timed and Probabilistic Systems

Off-Line Test Selection with Test Purposes for Non-deterministic Timed Automata

This paper proposes novel off-line test generation techniques for non-deterministic timed automata with inputs and outputs (TAIOs) in the formal framework of the tioco conformance theory. In this context, a first problem is the determinization of TAIOs, which is necessary to foresee next enabled actions, but is in general impossible. This problem is solved here thanks to an approximate determinization using a game approach, which preserves tioco and guarantees the soundness of generated test cases. A second problem is test selection, for which a precise description of timed behaviors to be tested is carried out by expressive test purposes modeled by a generalization of TAIOs. Finally, using a symbolic co-reachability analysis guided by the test purpose, test cases are generated in the form of TAIOs equipped with verdicts.

Nathalie Bertrand, Thierry Jéron, Amélie Stainer, Moez Krichen
Quantitative Multi-objective Verification for Probabilistic Systems

We present a verification framework for analysing multiple quantitative objectives of systems that exhibit both nondeterministic and stochastic behaviour. These systems are modelled as probabilistic automata, enriched with cost or reward structures that capture, for example, energy usage or performance metrics. Quantitative properties of these models are expressed in a specification language that incorporates probabilistic safety and liveness properties, expected total cost or reward, and supports multiple objectives of these types. We propose and implement an efficient verification framework for such properties and then present two distinct applications of it: firstly, controller synthesis subject to multiple quantitative objectives; and, secondly, quantitative compositional verification. The practical applicability of both approaches is illustrated with experimental results from several large case studies.

Vojtěch Forejt, Marta Kwiatkowska, Gethin Norman, David Parker, Hongyang Qu
Efficient CTMC Model Checking of Linear Real-Time Objectives

This paper makes verifying continuous-time Markov chains (CTMCs) against deterministic timed automata (DTA) objectives practical. We show that verifying 1-clock DTA can be done by analyzing subgraphs of the product of the CTMC $\mathcal{C}$ and the region graph of the DTA $\mathcal{A}$. This improves upon earlier results and allows us to use only standard analysis algorithms. Our graph decomposition approach naturally enables bisimulation minimization as well as parallelization. Experiments with various examples confirm that these optimizations lead to significant speed-ups. We also report on experiments with multiple-clock DTA objectives. The objectives and the size of the problem instances that can be checked with our prototypical tool go (far) beyond what could be checked so far.

Benoît Barbot, Taolue Chen, Tingting Han, Joost-Pieter Katoen, Alexandru Mereacre

Interpolations and SAT-Solvers

Efficient Interpolant Generation in Satisfiability Modulo Linear Integer Arithmetic

The problem of computing Craig interpolants in SAT and SMT has recently received a lot of interest, mainly for its applications in formal verification. Efficient algorithms for interpolant generation have been presented for some theories of interest, including that of equality and uninterpreted functions ($\mathcal{EUF}$), linear arithmetic over the rationals ($\mathcal{LA}(\mathbb{Q})$), and their combination, and they are successfully used within model checking tools. For the theory of linear arithmetic over the integers ($\mathcal{LA}(\mathbb{Z})$), however, the problem of finding an interpolant is more challenging, and the task of developing efficient interpolant generators for the full theory $\mathcal{LA}(\mathbb{Z})$ is still the objective of ongoing research.

In this paper we try to close this gap. We build on previous work and present a novel interpolation algorithm for SMT($\mathcal{LA}(\mathbb{Z})$), which exploits the full power of current state-of-the-art SMT($\mathcal{LA}(\mathbb{Z})$) solvers. We demonstrate the potential of our approach with an extensive experimental evaluation of our implementation of the proposed algorithm in the MathSAT SMT solver.

Alberto Griggio, Thi Thieu Hoa Le, Roberto Sebastiani
Generalized Craig Interpolation for Stochastic Boolean Satisfiability Problems

The stochastic Boolean satisfiability (SSAT) problem was introduced by Papadimitriou in 1985 by adding a probabilistic model of uncertainty to propositional satisfiability through randomized quantification. SSAT has many applications, among them bounded model checking (BMC) of symbolically represented Markov decision processes. This paper identifies a notion of Craig interpolant for the SSAT framework and develops an algorithm for computing such interpolants based on SSAT resolution. As a potential application, we address the use of interpolation in SSAT-based BMC, turning the falsification procedure into a verification approach for probabilistic safety properties.

Tino Teige, Martin Fränzle
Specification-Based Program Repair Using SAT

Removing bugs in programs – even when the location of faulty statements is known – is tedious and error-prone, particularly because of the increased likelihood of introducing new bugs as a result of fixing known bugs. We present an automated approach for generating likely bug fixes using behavioral specifications. Our key insight is to replace a faulty statement that has deterministic behavior with one that has nondeterministic behavior, and to use the specification constraints to prune the ensuing nondeterminism and repair the faulty statement. As an enabling technology, we use the SAT-based Alloy tool-set to describe specification constraints as well as to solve them. Initial experiments show the effectiveness of our approach in repairing programs that manipulate structurally complex data. We believe specification-based automated debugging using SAT holds much promise.

Divya Gopinath, Muhammad Zubair Malik, Sarfraz Khurshid
Optimal Base Encodings for Pseudo-Boolean Constraints

This paper formalizes the optimal base problem, presents an algorithm to solve it, and describes its application to the encoding of Pseudo-Boolean constraints to SAT. We demonstrate the impact of integrating our algorithm within the Pseudo-Boolean constraint solver MiniSat+. Experimentation indicates that our algorithm scales to bases involving numbers up to 1,000,000, improving on the restriction in MiniSat+ to prime numbers up to 17. We show that, while for many examples primes up to 17 do suffice, encoding with respect to optimal bases reduces the CNF sizes and improves the subsequent SAT solving time for many examples.

Michael Codish, Yoav Fekete, Carsten Fuhs, Peter Schneider-Kamp
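The role a base plays in such encodings can be sketched in a few lines (an assumed toy cost model: the digit sum below is a simplified stand-in for the CNF-size measure the paper actually optimizes).

```python
# Sketch (assumed cost model): decompose Pseudo-Boolean coefficients with
# respect to a mixed-radix base. The "cost" of a base is taken here to be
# the total sum of digits -- a simplified proxy for encoding size; the
# optimal base problem asks for the base minimizing the real measure.

def digits(n, base):
    """Mixed-radix digits of n for base = [b1, b2, ...]; the final entry
    is whatever remains after dividing out all radices."""
    ds = []
    for b in base:
        ds.append(n % b)
        n //= b
    ds.append(n)
    return ds

def cost(coeffs, base):
    return sum(sum(digits(c, base)) for c in coeffs)

coeffs = [12, 18, 30, 42]
print(cost(coeffs, [2, 3]))  # binary-then-ternary decomposition
print(cost(coeffs, [3, 5]))  # a better base for these coefficients
print(cost(coeffs, []))      # empty base = unary: cost is the plain sum
```

Searching over candidate bases for the one of minimal cost is exactly the optimal base problem, here in miniature.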

Learning

Predicate Generation for Learning-Based Quantifier-Free Loop Invariant Inference

We address the predicate generation problem in the context of loop invariant inference. Motivated by the interpolation-based abstraction refinement technique, we apply the interpolation theorem to synthesize predicates implicitly implied by program texts. Our technique is able to improve the effectiveness and efficiency of the learning-based loop invariant inference algorithm in [14]. Experiments on examples excerpted from Linux, SPEC2000, and Tar source code are reported.

Yungbum Jung, Wonchan Lee, Bow-Yaw Wang, Kwangkeun Yi
Next Generation LearnLib

The Next Generation LearnLib (NGLL) is a framework for model-based construction of dedicated learning solutions on the basis of extensible component libraries, which comprise various methods and tools to deal with realistic systems including test harnesses, reset mechanisms and abstraction/refinement techniques. Its construction style allows application experts to control, adapt, and evaluate complex learning processes with minimal programming expertise.

Maik Merten, Bernhard Steffen, Falk Howar, Tiziana Margaria

Model Checking

Applying CEGAR to the Petri Net State Equation

We propose a reachability verification technique that combines the Petri net state equation (a linear algebraic overapproximation of the set of reachable states) with the concept of counterexample guided abstraction refinement. In essence, we replace the search through the set of reachable states by a search through the space of solutions of the state equation. We demonstrate the excellent performance of the technique on several real-world examples. The technique is particularly useful in those cases where the reachability query yields a negative result: While state space based techniques need to fully expand the state space in this case, our technique often terminates promptly. In addition, we can derive some diagnostic information in case of unreachability while state space methods can only provide witness paths in the case of reachability.

Harro Wimmel, Karsten Wolf
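The over-approximation at the heart of this approach is easy to see on a minimal net (an assumed two-place example, not one of the paper's benchmarks): if the state equation has no non-negative integer solution, the target marking is unreachable without any state-space search.

```python
# Toy sketch (assumed net): the Petri net state equation
#   m' = m0 + C x,  with x a non-negative integer firing-count vector,
# over-approximates reachability. Net: places p0, p1; one transition t
# that moves a token from p0 to p1.

C = [[-1],   # t consumes a token from p0
     [1]]    # ... and produces one on p1
m0 = [1, 0]

def equation_solvable(target, bound=10):
    # brute-force search over firing counts stands in for an ILP solver
    for x in range(bound + 1):
        if all(m0[i] + C[i][0] * x == target[i] for i in range(len(m0))):
            return True
    return False

assert equation_solvable([0, 1])      # reachable: fire t once
assert not equation_solvable([2, 0])  # no solution, hence unreachable
```

In the paper, spurious solutions of the equation (markings that solve it but are not actually reachable) drive the CEGAR loop that refines the search.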
Biased Model Checking Using Flows

We describe two new state exploration algorithms, called biased-dfs and biased-bfs, that bias the search towards regions more likely to have error states, using high-level hints supplied by the user. These hints are in the form of priorities or markings describing which transitions are important and which are not. We then describe a natural way to mark the transitions using flows, or partial orders on system events. Apart from being easy to understand, flows express succinctly the basic organization of a system. An advantage of this approach is that assigning priorities does not involve low-level details of the system. Using flow-derived priorities we study the performance of the biased algorithms in the context of cache coherence protocols by comparing them against standard bfs, dfs and directed model checking. Preliminary results are encouraging, with biased-bfs finding bugs about 3 times faster on average than standard bfs while almost always returning shortest counterexamples. Biased-dfs, on the other hand, is a couple of orders of magnitude faster than bfs and slightly faster than even standard dfs, while being more robust.

Muralidhar Talupur, Hyojung Han
S-TaLiRo: A Tool for Temporal Logic Falsification for Hybrid Systems

S-TaLiRo is a Matlab (TM) toolbox that searches for trajectories of minimal robustness in Simulink/Stateflow diagrams. It can analyze arbitrary Simulink models or user-defined functions that model the system. At the heart of the tool, we use randomized testing based on stochastic optimization techniques including Monte-Carlo methods and Ant-Colony Optimization. Among the advantages of the toolbox is the seamless integration inside the Matlab environment, which is widely used in the industry for model-based development of control software. We present the architecture of S-TaLiRo and demonstrate its operation on an application example.

Yashwanth Annpureddy, Che Liu, Georgios Fainekos, Sriram Sankaranarayanan

Games and Automata

GAVS+: An Open Platform for the Research of Algorithmic Game Solving

This paper presents a major revision of the tool GAVS. First, the number of supported games has been greatly extended and now encompasses many additional classes important for the design and analysis of programs; for example, the user can now explore concurrent, probabilistic, and distributed games, as well as games played on pushdown graphs. Second, among newly introduced utility functions, GAVS+ includes features that let the user process and synthesize planning (game) problems described in the established Strips/PDDL language, via a slight extension which allows a second player to be specified. This lets researchers in verification profit from the rich collection of examples coming from the AI community.

Chih-Hong Cheng, Alois Knoll, Michael Luttenberger, Christian Buckl
Büchi Store: An Open Repository of Büchi Automata

We introduce the Büchi Store, an open repository of Büchi automata for model-checking practice, research, and education. The repository contains Büchi automata and their complements for common specification patterns and numerous temporal formulae. These automata are made as small as possible by various construction techniques, in view that smaller automata are easier to understand and often help in speeding up the model-checking process. The repository is open, allowing the user to add new automata or smaller ones that are equivalent to some existing automaton. Such a collection of Büchi automata is also useful as a benchmark for evaluating complementation or translation algorithms and as examples for studying Büchi automata and temporal logic.

Yih-Kuen Tsay, Ming-Hsien Tsai, Jinn-Shu Chang, Yi-Wen Chang
QUASY: Quantitative Synthesis Tool

We present the tool Quasy, a quantitative synthesis tool. Quasy takes qualitative and quantitative specifications and automatically constructs a system that satisfies the qualitative specification and optimizes the quantitative specification, if such a system exists. The user can choose between a system that satisfies and optimizes the specifications (a) under all possible environment behaviors or (b) under the most-likely environment behaviors given as a probability distribution on the possible input sequences. Quasy solves these two quantitative synthesis problems by reduction to instances of 2-player games and Markov Decision Processes (MDPs) with quantitative winning objectives. Quasy can also be seen as a game solver for quantitative games. Most notably, it can solve lexicographic mean-payoff games with 2 players, MDPs with mean-payoff objectives, and ergodic MDPs with mean-payoff parity objectives.

Krishnendu Chatterjee, Thomas A. Henzinger, Barbara Jobstmann, Rohit Singh
Unbeast: Symbolic Bounded Synthesis

We present Unbeast v.0.6, a tool for synthesising finite-state systems from specifications written in linear-time temporal logic (LTL). We combine bounded synthesis, specification splitting and symbolic game solving with binary decision diagrams (BDDs), which allows tackling specifications that previous tools were typically unable to handle. In case of realizability of a given specification, our tool computes a prototype implementation in a fully symbolic way, which is especially beneficial for settings with many input and output bits.

Rüdiger Ehlers

Verification (I)

Abstractions and Pattern Databases: The Quest for Succinctness and Accuracy

Directed model checking is a well-established technique for detecting error states in concurrent systems efficiently. As error traces are important for debugging purposes, it is preferable to find error traces that are as short as possible. A widespread method to find provably shortest error traces is to apply the A* search algorithm with distance heuristics that never overestimate the real error distance. An important class of such distance estimators is the class of pattern database heuristics, which are built on abstractions of the system under consideration. In this paper, we propose a systematic approach for the construction of pattern database heuristics. We formally define a concept to measure the accuracy of abstractions. Based on this technique, we address the challenge of finding abstractions that are succinct on the one hand, and accurate enough to produce informed pattern databases on the other hand. We evaluate our approach on large and complex industrial problems. The experiments show that the resulting distance heuristic impressively advances the state of the art.

Sebastian Kupferschmid, Martin Wehrle
The ACL2 Sedan Theorem Proving System

The ACL2 Sedan theorem prover (ACL2s) is an Eclipse plug-in that provides a modern integrated development environment, supports several modes of interaction, provides a powerful termination analysis engine, and includes fully automatic bug-finding methods based on a synergistic combination of theorem proving and random testing. ACL2s is publicly available and open source. It has also been used in several sections of a required freshman course at Northeastern University to teach over 200 undergraduate students how to reason about programs.

Harsh Raju Chamarthi, Peter Dillinger, Panagiotis Manolios, Daron Vroon

Probabilistic Systems

On Probabilistic Parallel Programs with Process Creation and Synchronisation

We initiate the study of probabilistic parallel programs with dynamic process creation and synchronisation. To this end, we introduce probabilistic split-join systems (pSJSs), a model for parallel programs, generalising both probabilistic pushdown systems (a model for sequential probabilistic procedural programs which is equivalent to recursive Markov chains) and stochastic branching processes (a classical mathematical model with applications in various areas such as biology, physics, and language processing). Our pSJS model allows for a possibly recursive spawning of parallel processes; the spawned processes can synchronise and return values. We study the basic performance measures of pSJSs, especially the distribution and expectation of space, work and time. Our results extend and improve previously known results on the subsumed models. We also show how to do performance analysis in practice, and present two case studies illustrating the modelling power of pSJSs.

Stefan Kiefer, Dominik Wojtczak
Confluence Reduction for Probabilistic Systems

This paper presents a novel technique for state space reduction of probabilistic specifications, based on a newly developed notion of confluence for probabilistic automata. We prove that this reduction preserves branching probabilistic bisimulation and can be applied on-the-fly. To support the technique, we introduce a method for detecting confluent transitions in the context of a probabilistic process algebra with data, facilitated by an earlier defined linear format. A case study demonstrates that significant reductions can be obtained.

Mark Timmer, Mariëlle Stoelinga, Jaco van de Pol
Model Repair for Probabilistic Systems

We introduce the problem of Model Repair for Probabilistic Systems as follows. Given a probabilistic system M and a probabilistic temporal logic formula φ such that M fails to satisfy φ, the Model Repair problem is to find an M′ that satisfies φ and differs from M only in the transition flows of those states in M that are deemed controllable. Moreover, the cost associated with modifying M's transition flows to obtain M′ should be minimized. Using a new version of parametric probabilistic model checking, we show how the Model Repair problem can be reduced to a nonlinear optimization problem with a minimal-cost objective function, thereby yielding a solution technique. We demonstrate the practical utility of our approach by applying it to a number of significant case studies, including a DTMC reward model of the Zeroconf protocol for assigning IP addresses, and a CTMC model of the highly publicized Kaminsky DNS cache-poisoning attack.

Ezio Bartocci, Radu Grosu, Panagiotis Katsaros, C. R. Ramakrishnan, Scott A. Smolka
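The reduction to constrained optimization can be seen on a one-parameter toy chain (an assumed example; a coarse grid search stands in for the nonlinear solver used in the paper).

```python
# Toy sketch (assumed example): repair a one-parameter DTMC. From s0 the
# system tries twice to reach "good", each try succeeding with controllable
# probability p, so P(reach good) = p + (1 - p) * p. The original p0 = 0.5
# gives 0.75, violating the requirement P(reach good) >= 0.9. Repair picks
# the feasible p minimizing the modification cost (p - p0)**2.

p0 = 0.5

def reach_good(p):
    return p + (1 - p) * p

# grid search over p in steps of 0.001 stands in for nonlinear optimization
feasible = [k / 1000 for k in range(1001) if reach_good(k / 1000) >= 0.9]
best = min(feasible, key=lambda p: (p - p0) ** 2)
print(best)  # smallest perturbation of p that makes the property hold
```

In the paper the parameters are transition probabilities of the controllable states, the constraint comes from parametric model checking, and the objective is a general cost on the modification; the shape of the problem is the same.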

Verification (II)

Boosting Lazy Abstraction for SystemC with Partial Order Reduction

The SystemC language is a de-facto standard for the description of systems on chip. A promising technique, called ESST, has recently been proposed for the formal verification of SystemC designs. ESST combines Explicit state techniques, to deal with the SystemC Scheduler, with Symbolic techniques, based on lazy abstraction, to deal with the Threads. Despite its relative effectiveness, this approach suffers from the potential explosion of thread interleavings.

In this paper we propose the adoption of partial order reduction (POR) techniques to alleviate the problem. We extend ESST with two complementary POR techniques (persistent sets and sleep sets), and we prove the soundness of the approach in the case of safety properties. The extension is only seemingly trivial: the POR, applied to the scheduler, must be proved not to interfere with the lazy abstraction of the threads.

We implemented the techniques within the software model checker Kratos, and we carried out an experimental evaluation on benchmarks taken from the SystemC distribution and from the literature. The results show a significant improvement in terms of the number of visited abstract states and run times.

Alessandro Cimatti, Iman Narasamdya, Marco Roveri
Modelling and Verification of Web Services Business Activity Protocol

The WS-Business Activity specification defines two coordination protocols in order to ensure a consistent agreement on the outcome of long-running distributed applications. We use the model checker Uppaal to analyse the Business Agreement with Coordination Completion protocol type. Our analyses show that the protocol, as described in the standard specification, violates correct operation by reaching invalid states for all underlying communication media except perfect FIFO. Based on this result, we propose changes to the protocol. A further investigation of the modified protocol suggests that messages should be received in the same order as they are sent so that correct protocol behaviour is preserved. Another important property of communication protocols is that all parties always reach their final states. Based on the verification with different communication models, we prove that our enhanced protocol satisfies this property for asynchronous, unreliable, order-preserving communication whereas the original protocol does not.

Anders P. Ravn, Jiří Srba, Saleem Vighio
CADP 2010: A Toolbox for the Construction and Analysis of Distributed Processes

Cadp (Construction and Analysis of Distributed Processes) is a comprehensive software toolbox that implements the results of concurrency theory. Started in the mid-80s, Cadp has been continuously developed by adding new tools and enhancing existing ones. Today, Cadp benefits from a worldwide user community, both in academia and industry. This paper presents the latest release, Cadp 2010, which is the result of a considerable development effort spanning the last four years. The paper first describes the theoretical principles and the modular architecture of Cadp, which has inspired several other recent model checkers. The paper then reviews the main features of Cadp 2010, including compilers for various formal specification languages, equivalence checkers, model checkers, performance evaluation tools, and parallel verification tools running on clusters and grids.

Hubert Garavel, Frédéric Lang, Radu Mateescu, Wendelin Serwe
GameTime: A Toolkit for Timing Analysis of Software

Timing analysis is a key step in the design of dependable real-time embedded systems. In this paper, we present GameTime, a toolkit for execution time analysis of software. GameTime is based on a combination of game-theoretic online learning and systematic testing using satisfiability modulo theories (SMT) solvers. In contrast with many existing tools for timing analysis, GameTime can be used for a range of tasks, including estimating worst-case execution time, predicting the distribution of execution times of a task, and finding timing-related bugs in programs. We describe key implementation details of GameTime and illustrate its usage through examples.

Sanjit A. Seshia, Jonathan Kotker
Backmatter
Metadata
Title
Tools and Algorithms for the Construction and Analysis of Systems
Edited by
Parosh Aziz Abdulla
K. Rustan M. Leino
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-19835-9
Print ISBN
978-3-642-19834-2
DOI
https://doi.org/10.1007/978-3-642-19835-9
