
2011 | Book

NASA Formal Methods

Third International Symposium, NFM 2011, Pasadena, CA, USA, April 18-20, 2011. Proceedings

Editors: Mihaela Bobaru, Klaus Havelund, Gerard J. Holzmann, Rajeev Joshi

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the Third International Symposium on NASA Formal Methods, NFM 2011, held in Pasadena, CA, USA, in April 2011.

The 26 revised full papers presented together with 12 tool papers, 3 invited talks, and 2 invited tutorials were carefully reviewed and selected from 141 submissions. The topics covered by NFM 2011 included but were not limited to: theorem proving, logic model checking, automated testing and simulation, model-based engineering, real-time and stochastic systems, SAT and SMT solvers, symbolic execution, abstraction and abstraction refinement, compositional verification techniques, static and dynamic analysis techniques, fault protection, cyber security, specification formalisms, requirements analysis, and applications of formal techniques.

Table of Contents

Frontmatter

Invited Talks

From Retrospective Verification to Forward-Looking Development

One obstacle in applying program verification is coming up with specifications. That is, if you want to verify a program, you need to write down what it means for the program to be correct. But doesn’t that seem terribly wrong? Why don’t we see it as “one obstacle in program design is coming up with code”? That is, if you want to realize a specification, you need to write down how the machine is supposed to do it. Phrased this way, we may want to change our efforts of verification into efforts of what is known as correct-by-construction or stepwise refinement. But the choice is not so clear, and there are plenty of obstacles on both sides. For example, many programs are developed from specifications, but the specifications are not in a form suitable for refinement tools. For other programs, the clearest specifications may be given by pseudo-code, but such specifications may not be suitable for some verification tools. In this talk, I will discuss verification tools and refinement-based tools, considering how they may be combined.

K. Rustan M. Leino
Specifications for Free

Recent advances in software validation and verification make it possible to widely automate checking whether a specification is satisfied. This progress is hampered, though, by the persistent difficulty of writing specifications. Are we facing a “specification crisis”? By mining specifications from existing systems, we can alleviate this burden, reusing and extending the knowledge of 60 years of programming, and bridging the gap between formal methods and real-world software. In this NFM 2011 invited keynote, I present the state of the art in specification mining, its challenges, and its potential, up to a vision of seamless integration of specification and programming.

Andreas Zeller
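To make the idea of specification mining concrete, here is a minimal Daikon-style sketch in Python. The candidate set and names are invented for this example and do not reflect Zeller's tooling: a fixed menu of candidate properties is tested against observed (argument, return) pairs, and those that never fail are reported as likely invariants.

```python
# A Daikon-style invariant miner in miniature.  The candidate set below
# is invented for this example; real miners generate candidates from
# templates over the observed variables.

CANDIDATES = {
    "arg >= 0":       lambda arg, ret: arg >= 0,
    "ret >= 0":       lambda arg, ret: ret >= 0,
    "ret >= arg":     lambda arg, ret: ret >= arg,
    "ret == arg * 2": lambda arg, ret: ret == arg * 2,
}

def mine_invariants(observations):
    """Return the candidates satisfied by every observation, sorted."""
    return sorted(
        name for name, check in CANDIDATES.items()
        if all(check(a, r) for a, r in observations)
    )

# Observations of a doubling function support all four candidates.
likely = mine_invariants([(0, 0), (3, 6), (7, 14)])
```

Mined invariants are only "likely": they hold on the observed runs, not necessarily on all runs, which is exactly the gap between mining and formal specification the keynote discusses.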

Invited Tutorials

The Theory and Practice of SALT

Salt is a general-purpose specification and assertion language developed for creating concise temporal specifications to be used in industrial verification environments. It incorporates ideas of existing approaches, such as PSL or Specification Patterns, in that it provides operators to express scopes and exceptions, as well as support for a subset of regular expressions. On the one hand, Salt exceeds specific features of these approaches, for example, in that it allows the nesting of scopes and supports the specification of real-time properties. On the other hand, Salt is fully translatable to LTL, if no real-time operators are used, and to TLTL (also known as state-clock logic), if real-time operators appear in a specification. The latter is needed in particular for verification tasks to do with reactive systems imposing strict execution times and deadlines. Salt’s semantics is defined in terms of a translation to temporal (real-time) logic, and a compiler is freely available from the project web site, including an interactive web interface to test drive the compiler. This tutorial paper details the theoretical foundations of Salt as well as its practical use in applications such as model checking and runtime verification.

Andreas Bauer, Martin Leucker
VeriFast: A Powerful, Sound, Predictable, Fast Verifier for C and Java

VeriFast is a prototype verification tool for single-threaded and multithreaded C and Java programs. In this paper, we first describe the basic symbolic execution approach in some formal detail. Then we zoom in on two technical aspects: the approach to permission accounting, including fractional permissions, precise predicates, and counting permissions; and the approach to lemma function termination in the presence of dynamically-bound lemma function calls. Finally, we describe three ongoing efforts: application to JavaCard programs, integration of shape analysis, and application to Linux device drivers.

Bart Jacobs, Jan Smans, Pieter Philippaerts, Frédéric Vogels, Willem Penninckx, Frank Piessens
Verifying Functional Correctness of C Programs with VCC

VCC [2] is an industrial-strength verification environment for low-level concurrent systems code written in C. VCC takes a program (annotated with function contracts, state assertions, and type invariants) and attempts to prove the correctness of these annotations. VCC’s verification methodology [4] allows global two-state invariants that restrict updates of shared state and enforces simple, semantic conditions sufficient for checking those global invariants modularly. VCC works by translating C, via the Boogie [1] intermediate verification language, to verification conditions handled by the Z3 [5] SMT solver.

The environment includes tools for monitoring proof attempts and constructing partial counterexample executions for failed proofs. It has been used to verify the functional correctness of tens of thousands of lines of Microsoft’s Hyper-V virtualization platform and of SYSGO’s embedded real-time operating system PikeOS.

In this talk, I am going to showcase various tools that come with VCC: the verifier itself, the VCC Visual Studio plugin, and the Boogie Verification Debugger. I am going to cover the basics of VCC’s verification methodology on various examples: concurrency primitives, lock-free data structures, and recursive data structures.

The sources and binaries of VCC are available for non-commercial use at http://vcc.codeplex.com/. A tutorial [3] is also provided. VCC can also be tried online at http://rise4fun.com/Vcc.

Michał Moskal

Regular Papers

Bakar Kiasan: Flexible Contract Checking for Critical Systems Using Symbolic Execution

Spark, a subset of Ada for engineering safety- and security-critical systems, is designed for verification and includes a software contract language for specifying functional properties of procedures. Even though Spark and its static analysis components are beneficial and easy to use, its contract language is almost never used due to the burdens the associated tool support imposes on developers. In this paper, we present: (a) symbolic execution (SymExe) techniques for checking software contracts in embedded critical systems, and (b) Bakar Kiasan, a tool that implements these techniques in an integrated development environment for Spark. We describe a methodology for using Bakar Kiasan that provides significant increases in automation, usability, and functionality over existing Spark tools, and we present results from experiments on its application to industrial examples.

Jason Belt, John Hatcliff, Robby, Patrice Chalin, David Hardin, Xianghua Deng
Approximate Quantifier Elimination for Propositional Boolean Formulae

This paper describes an approximate quantifier elimination procedure for propositional Boolean formulae. The method is based on computing prime implicants using SAT and successively refining over-approximations of a given formula. This construction naturally leads to an anytime algorithm, that is, one that can be interrupted at any time without compromising soundness. This contrasts with classical monolithic (all-or-nothing) approaches based on resolution or model enumeration.

Jörg Brauer, Andy King
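For readers new to quantifier elimination, the classical monolithic baseline that the paper contrasts itself with can be sketched by Shannon expansion, ∃x.F ≡ F[x:=0] ∨ F[x:=1]. This is a naive enumeration over truth values, not the authors' prime-implicant procedure:

```python
from itertools import product

# Boolean formulas as Python predicates over an environment dict.
# Classical monolithic elimination by Shannon expansion:
#   ∃x. F  ≡  F[x := 0] ∨ F[x := 1]

def exists(var, formula):
    """Eliminate var existentially by trying both truth values."""
    return lambda env: (formula({**env, var: False}) or
                        formula({**env, var: True}))

def equivalent(f, g, variables):
    """Exhaustive equivalence check over the remaining free variables."""
    return all(
        f(dict(zip(variables, vals))) == g(dict(zip(variables, vals)))
        for vals in product([False, True], repeat=len(variables))
    )

# F = (x and y) or (not x and z); eliminating x should leave y or z.
F = lambda e: (e["x"] and e["y"]) or ((not e["x"]) and e["z"])
G = exists("x", F)
```

Each elimination doubles the formula, which is why the paper's anytime over-approximation by prime implicants is attractive in practice.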
Towards Flight Control Verification Using Automated Theorem Proving

To ensure that an aircraft is safe to fly, a complex, lengthy, and costly process must be undertaken. Current aircraft control systems verification methodologies are based on conducting extensive simulations in an attempt to cover all worst-case scenarios. A Nichols plot is a technique that can be used to conclusively determine if a control system is stable. However, guaranteeing stability within a certain margin of uncertainty requires an informal visual inspection of many plots. To address this verification bottleneck, we present in this paper a method for performing a formal Nichols plot analysis using the MetiTarski automated theorem prover. First, the transfer function for the flight control system is extracted from a MATLAB/Simulink design. Next, using the conditions for a stable dynamical system, an exclusion region of the Nichols plot is defined. MetiTarski is then used to prove that the exclusion region is never entered. We present a case study of the proposed approach applied to the lateral autopilot of a Model 24 Learjet.

William Denman, Mohamed H. Zaki, Sofiène Tahar, Luis Rodrigues
Generalized Rabin(1) Synthesis with Applications to Robust System Synthesis

Synthesis of finite-state machines from linear-time temporal logic (LTL) formulas is an important formal specification debugging technique for reactive systems and can quickly generate prototype implementations for realizable specifications.

It has been observed, however, that automatically generated implementations typically do not share the robustness of manually constructed solutions with respect to assumption violations, i.e., they typically do not degrade gracefully when the assumptions in the specification are violated. As a remedy, robust synthesis methods have been proposed. Unfortunately, previous such techniques introduced obstacles to efficient implementation in practice and typically do not scale well.

In this paper, we introduce generalized Rabin(1) synthesis as a solution to this problem. Our approach inherits the good algorithmic properties of generalized reactivity(1) synthesis but extends it to also allow co-Büchi-type assumptions and guarantees, which makes it usable for the synthesis of robust systems.

Rüdiger Ehlers
Integrating an Automated Theorem Prover into Agda

Agda is a dependently typed functional programming language and a proof assistant in which developing programs and proving their correctness is one activity. We show how this process can be enhanced by integrating external automated theorem provers, provide a prototypical integration of the equational theorem prover Waldmeister, and give examples of how this proof automation works in practice.

Simon Foster, Georg Struth
Efficient Predicate Abstraction of Program Summaries

Predicate abstraction is an effective technique for scaling Software Model Checking to real programs. Traditionally, predicate abstraction abstracts each basic block of a program $\mathcal{P}$ to construct a small finite abstract model, a Boolean program BP, whose state-transition relation is over some chosen (finite) set of predicates. This is called Small-Block Encoding (SBE). A recent advancement is Large-Block Encoding (LBE), where abstraction is applied to a “summarized” program so that the abstract transitions of BP correspond to loop-free fragments of $\mathcal{P}$. In this paper, we expand on the original notion of LBE to promote flexibility. We explore and describe efficient ways to perform CEGAR bottleneck operations: generating and solving predicate abstraction queries (PAQs). We make the following contributions. First, we define a general notion of program summarization based on loop cutsets. Second, we give a linear-time algorithm to construct PAQs for a loop-free fragment of a program. Third, we compare two approaches to solving PAQs: a classical AllSAT-based one, and a new one based on Linear Decision Diagrams (LDDs). The approaches are evaluated on a large benchmark from open-source software. Our results show that the new LDD-based approach significantly outperforms (and complements) the AllSAT one.

Arie Gurfinkel, Sagar Chaki, Samir Sapra
Synthesis for PCTL in Parametric Markov Decision Processes

In parametric Markov decision processes (PMDPs), transition probabilities are not fixed but are given as functions over a set of parameters. A PMDP denotes a family of concrete MDPs. This paper studies the synthesis problem for PCTL in PMDPs: given a specification Φ in PCTL, we synthesise the parameter valuations under which Φ is true. First, we divide the possible parameter space into hyper-rectangles. We use existing decision procedures to check whether Φ holds on each of the Markov processes represented by a hyper-rectangle. As it is normally impossible to cover the whole parameter space by hyper-rectangles, we allow a limited area to remain undecided. We also consider an extension of PCTL with reachability rewards. To demonstrate the applicability of the approach, we apply our technique to a case study, using a preliminary implementation.

Ernst Moritz Hahn, Tingting Han, Lijun Zhang
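The hyper-rectangle subdivision idea can be illustrated in one dimension. The chain, threshold, and epsilon below are invented for the example, and the closed-form, monotone reachability function stands in for the decision procedures the paper invokes:

```python
# Interval subdivision over a one-parameter Markov chain (illustrative).
# From the initial state, the goal is reached with probability
#   f(p) = p + (1 - p) * p        (two attempts, then failure),
# which is monotone in p, so evaluating the endpoints bounds f on the
# whole interval.  We synthesise regions where f(p) >= 0.7 holds,
# leaving intervals narrower than eps undecided.

def reach_prob(p):
    return p + (1 - p) * p

def synthesise(lo, hi, threshold=0.7, eps=1e-3):
    """Return (true_regions, false_regions) covering most of [lo, hi]."""
    true_regions, false_regions = [], []
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if reach_prob(a) >= threshold:      # holds on all of [a, b]
            true_regions.append((a, b))
        elif reach_prob(b) < threshold:     # fails on all of [a, b]
            false_regions.append((a, b))
        elif b - a > eps:                   # undecided: split and recurse
            mid = (a + b) / 2
            stack += [(a, mid), (mid, b)]
    return true_regions, false_regions

true_r, false_r = synthesise(0.0, 1.0)
```

Real PMDPs have many parameters and no closed-form solution, so each rectangle is checked with a decision procedure rather than two endpoint evaluations, but the decided/undecided bookkeeping is the same.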
Formalizing Probabilistic Safety Claims

A safety claim for a system is a statement that the system, which is subject to hazardous conditions, satisfies a given set of properties. Following work by John Rushby and Bev Littlewood, this paper presents a mathematical framework that can be used to state and formally prove probabilistic safety claims. It also enables hazardous conditions, their uncertainties, and their interactions to be integrated into the safety claim. This framework provides a formal description of the probabilistic composition of an arbitrary number of hazardous conditions and their effects on system behavior. An example is given of a probabilistic safety claim for a conflict detection algorithm for aircraft in a 2D airspace. The motivation for developing this mathematical framework is that it can be used in an automated theorem prover to formally verify safety claims.

Heber Herencia-Zapana, George Hagen, Anthony Narkawicz
The OpenTheory Standard Theory Library

Interactive theorem proving is tackling ever larger formalization and verification projects, and there is a critical need for theory engineering techniques to support these efforts. One such technique is cross-prover package management, which has the potential to simplify the development of logical theories and effectively share theories between different theorem prover implementations. The OpenTheory project has developed standards for packaging theories of the higher order logic implemented by the HOL family of theorem provers. What is currently missing is a standard theory library that can serve as a published contract of interoperability and contain proofs of basic properties that would otherwise appear in many theory packages. The core contribution of this paper is the presentation of a standard theory library for higher order logic represented as an OpenTheory package. We identify the core theory set of the HOL family of theorem provers, and describe the process of instrumenting the HOL Light theorem prover to extract a standardized version of its core theory development. We profile the axioms and theorems of our standard theory library and investigate the performance cost of separating the standard theory library into coherent hierarchical theory packages.

Joe Hurd
Instantiation-Based Invariant Discovery

We present a general scheme for automated instantiation-based invariant discovery. Given a transition system, the scheme produces k-inductive invariants from templates representing decidable predicates over the system’s data types. The proposed scheme relies on efficient reasoning engines such as SAT and SMT solvers, and capitalizes on their ability to quickly generate counter-models of non-invariant conjectures. We discuss in detail two practical specializations of the general scheme in which templates represent partial orders. Our experimental results show that both specializations are able to quickly produce invariants from a variety of synchronous systems which prove quite useful in proving safety properties for these systems.

Temesghen Kahsai, Yeting Ge, Cesare Tinelli
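A toy version of the underlying k-induction check, on a deterministic modular counter invented for the example. Real implementations discharge both the base and step obligations with SAT/SMT solvers rather than enumerating states:

```python
# Toy k-induction check on a tiny deterministic system (illustrative).
# System: states 0..9, initial state 0, transition x -> (x + 2) mod 10.

INIT = {0}

def step(x):
    """Successor states of x (deterministic here: a singleton set)."""
    return {(x + 2) % 10}

def is_k_inductive(p, k):
    # Base case: p holds on every state reachable in fewer than k steps.
    frontier, depth = set(INIT), 0
    while depth < k:
        if not all(p(x) for x in frontier):
            return False
        frontier = {y for x in frontier for y in step(x)}
        depth += 1
    # Inductive step: k consecutive p-states force p on the next state.
    for start in range(10):
        path = [start]
        for _ in range(k):
            path.append(next(iter(step(path[-1]))))
        if all(p(x) for x in path[:-1]) and not p(path[-1]):
            return False
    return True

even = lambda x: x % 2 == 0
```

Here `even` is 1-inductive, while `x < 8` is not inductive at all: state 6 satisfies it but its successor 8 does not, which is exactly the kind of counter-model a solver would return.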
Stuttering Mostly Speeds Up Solving Parity Games

We study the process theoretic notion of stuttering equivalence in the setting of parity games. We demonstrate that stuttering equivalent vertices have the same winner in the parity game. This means that solving a parity game can be accelerated by minimising the game graph with respect to stuttering equivalence. While, at the outset, it might not be clear that this strategy should pay off, our experiments using typical verification problems illustrate that stuttering equivalence speeds up solving parity games in many cases.

Sjoerd Cranen, Jeroen J. A. Keiren, Tim A. C. Willemse
Counterexample-Based Error Localization of Behavior Models

Behavior models are often used to describe the behavior of the system-to-be during the requirements analysis or design phases. The correctness of the specified model can be formally verified by model checking techniques. Model checkers provide counterexamples if the model does not satisfy the given property. However, analyzing counterexamples and identifying the model errors require manual labor, because counterexamples do not directly indicate where and why the errors exist; moreover, when liveness properties are checked, counterexamples have infinite trace length, which makes it harder to automate the analysis. In this paper, we propose a novel automated approach to find errors in a behavior model using an infinite counterexample. We find witnesses similar to the counterexample and then compare them to elicit errors. Our approach reduces the problem to a single-source shortest path search on directed graphs and is applicable to liveness properties.

Tsutomu Kumazawa, Tetsuo Tamai
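The single-source shortest path search that the approach reduces to can be sketched with plain BFS on an unweighted directed graph; the reduction from counterexamples and witnesses to such graphs is not reproduced here, and the example graph is invented:

```python
from collections import deque

# Single-source shortest path by BFS on an unweighted directed graph.

def shortest_path(graph, source, target):
    """Return one shortest source -> target path, or None if unreachable."""
    parents = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:            # reconstruct the path backwards
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for succ in graph.get(node, ()):
            if succ not in parents:   # first visit = shortest distance
                parents[succ] = node
                queue.append(succ)
    return None

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
```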
Call Invariants

Program verifiers based on first-order theorem provers model the program heap as a collection of mutable maps. In such verifiers, preserving unmodified facts about the heap across procedure calls is difficult because of scoping and the modification of a possibly unbounded set of heap locations. Existing approaches to this problem are either too imprecise, require introducing untrusted assumptions into the verifier, or resort to unpredictable reasoning using quantifiers. In this work, we propose a new approach to solve this problem. The centerpiece of our approach is the call invariant, a new annotation for procedure calls. A call invariant allows the user to specify, at a call site, an assertion that is inductively preserved across an arbitrary update to a heap location modified in the call. Our approach allows us to leverage existing techniques for reasoning about call-free programs to precisely and predictably reason about programs with procedure calls. We have implemented the approach and applied it to the verification of examples containing dynamic memory allocation, linked lists, and arrays. We observe that most call invariants have a fairly simple shape and discuss ways to reduce the annotation overhead.

Shuvendu K. Lahiri, Shaz Qadeer
Symmetry for the Analysis of Dynamic Systems

Graph Transformation Systems (GTSs) provide visual and explicit semantics for dynamically evolving multi-process systems such as network programs and communication protocols. Existing symmetry reduction techniques that generate a reduced, bisimilar model for alleviating state explosion in model checking are not applicable to dynamic models such as those given by GTSs. We develop symmetry reduction techniques applicable to evolving GTS models and the programs that generate them. We also provide an on-the-fly algorithm for generating a symmetry-reduced quotient model directly from a set of graph transformation rules. The generated quotient model is GTS-bisimilar to the model under verification and may be exponentially smaller than that model. Thus, analysis of the system model can be performed by checking the smaller GTS-bisimilar model.

Zarrin Langari, Richard Trefler
Implementing Cryptographic Primitives in the Symbolic Model

When discussing protocol properties in the symbolic (Dolev-Yao; term-based) model of cryptography, the set of cryptographic primitives is defined by the constructors of the term algebra and by the equational theory on top of it. The set of considered primitives is not easily modifiable during the discussion. In particular, it is unclear what it means to define a new primitive from the existing ones, or why a primitive in the considered set may be unnecessary because it can be modeled using other primitives. This is in stark contrast to the computational model of cryptography where the constructions and relationships between primitives are at the very foundation of the theory. In this paper, we explore how a primitive may be constructed from other primitives in the symbolic model, such that no protocol breaks if an atomic primitive is replaced by the construction. As an example, we show the construction of (symbolic) “randomized” symmetric encryption from (symbolic) one-way functions and exclusive or.

Peeter Laud
Model Checking Using SMT and Theory of Lists

A main idea underlying bounded model checking is to limit the length of the potential counterexamples and then prove properties for the bounded version of the problem. In software model checking, that means that only program traces up to a given length are considered. Additionally, the program’s input space must be made finite by defining bounds for all input parameters. To ensure the finiteness of the program traces, these techniques typically require that all loops are explicitly unrolled some constant number of times. Here, we show how to avoid explicit loop unrolling by using the SMT Theory of Lists to model feasible, potentially unbounded program traces. We argue that this approach is easier to use and, more importantly, increases confidence in verification results over the typical bounded approach. To demonstrate the feasibility of this idea, we implemented a fully automated prototype software model checker and verified several example algorithms. We also applied our technique to a non-software model-checking problem from biology: we used it to analyze and synthesize correct executions from scenario-based requirements in the form of Live Sequence Charts.

Aleksandar Milicevic, Hillel Kugler
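For contrast, the bounded baseline the paper argues against can be sketched as explicit exploration of all traces up to a fixed length; the paper's actual contribution, encoding unbounded traces with the SMT Theory of Lists, is not shown here, and the toy program is invented:

```python
# Explicit-state bounded model checking sketch: explore all traces of
# length <= k and return one ending in a "bad" state, if any.  This is
# the bounded-unrolling baseline, not the paper's list-theory encoding.

def bmc(init_states, successors, bad, k):
    """Breadth-first search over traces, bounded at k steps."""
    frontier = [[s] for s in init_states]
    for _ in range(k + 1):
        next_frontier = []
        for trace in frontier:
            if bad(trace[-1]):
                return trace          # counterexample within the bound
            next_frontier += [trace + [s] for s in successors(trace[-1])]
        frontier = next_frontier
    return None                       # no violation found up to bound k

# Toy program: x starts at 0 and nondeterministically adds 1 or 3;
# the bad states are those where x == 5.
trace = bmc([0], lambda x: [x + 1, x + 3], lambda x: x == 5, k=5)
```

Note the characteristic weakness: a `None` answer only certifies correctness up to the bound k, which is the confidence gap the unbounded list-based encoding is meant to close.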
Automated Test Case Generation with SMT-Solving and Abstract Interpretation

In this paper we describe an approach for automated model-based test case and test data generation based on constraint types well known from bounded model checking. Our main contribution consists of a demonstration showing how this process can be considerably accelerated by using abstract interpretation techniques for preliminary explorations of the model state space. The techniques described support models of concurrent synchronous reactive systems under test with clocks and dense time.

Jan Peleska, Elena Vorobev, Florian Lapschies
Generating Data Race Witnesses by an SMT-Based Analysis

A data race is one of the most dangerous errors in multithreaded programming, and despite intensive studies, it remains a notorious cause of failures in concurrent systems. Detecting data races is already a hard problem, and yet it is even harder for a programmer to decide whether or how a reported data race can appear in the actual program execution. In this paper we propose an algorithm for generating debugging aid information called witnesses, which are concrete thread schedules that can deterministically trigger the data races. More specifically, given a concrete execution trace, e.g., a non-erroneous one which may have triggered a warning in Eraser-style data race detectors, we use a symbolic analysis based on SMT solvers to search for a data race witness among alternative interleavings of the events of that trace. Our symbolic analysis precisely encodes the sequential consistency semantics using a scalable predictive model to ensure that the reported witness is always feasible.

Mahmoud Said, Chao Wang, Zijiang Yang, Karem Sakallah
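The search for a witness schedule can be illustrated by brute-force enumeration of interleavings of two recorded event sequences. The event format and race condition below are simplifications invented for the example (an Eraser-style lockset check on adjacent events), and the paper's SMT encoding is far more scalable:

```python
# Brute-force search for a data race witness: an interleaving of two
# event sequences that places two conflicting accesses (same variable,
# at least one write, no common lock) next to each other.

def conflicting(e1, e2):
    (_, op1, var1, locks1), (_, op2, var2, locks2) = e1, e2
    return var1 == var2 and "w" in (op1, op2) and not (locks1 & locks2)

def interleavings(t1, t2):
    """All merges of t1 and t2 preserving each thread's program order."""
    if not t1:
        yield list(t2)
        return
    if not t2:
        yield list(t1)
        return
    for rest in interleavings(t1[1:], t2):
        yield [t1[0]] + rest
    for rest in interleavings(t1, t2[1:]):
        yield [t2[0]] + rest

def find_witness(t1, t2):
    """A schedule with two adjacent conflicting cross-thread events."""
    for schedule in interleavings(t1, t2):
        for a, b in zip(schedule, schedule[1:]):
            if a[0] != b[0] and conflicting(a, b):
                return schedule
    return None

# Events: (thread id, "r"/"w", variable, locks held).  The accesses to
# x are protected by a common lock; the accesses to y race.
T1 = [(1, "w", "x", frozenset({"L"})), (1, "w", "y", frozenset())]
T2 = [(2, "r", "x", frozenset({"L"})), (2, "r", "y", frozenset())]
witness = find_witness(T1, T2)
```

The number of interleavings grows exponentially with trace length, which is why the paper replaces this enumeration with a symbolic search over an SMT encoding of the trace.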
Applying Atomicity and Model Decomposition to a Space Craft System in Event-B

Event-B is a formal method for modeling and verifying the consistency of systems. In formal methods such as Event-B, refinement is the process of enriching or modifying an abstract model in a step-wise manner in order to manage the development of complex and large systems. To further alleviate the complexity of developing large systems, Event-B refinement can be augmented with two techniques, namely atomicity decomposition and model decomposition. Our main objective in this paper is to investigate and evaluate the application of these techniques when used in a refinement-based development. These techniques have been applied to the formal development of a spacecraft system. The outcomes of this experimental work are presented as assessment results. The experience and assessment can form the basis for guidelines on applying these techniques in future cases.

Asieh Salehi Fathabadi, Abdolbaghi Rezazadeh, Michael Butler
A Theory of Skiplists with Applications to the Verification of Concurrent Datatypes

This paper presents a theory of skiplists with a decidable satisfiability problem, and shows its application to the verification of concurrent skiplist implementations. A skiplist is a data structure used to implement sets by maintaining several ordered singly-linked lists in memory, with performance comparable to that of balanced binary trees. We define a theory capable of expressing the memory layout of a skiplist and show a decision procedure for the satisfiability problem of this theory. We illustrate the application of our decision procedure to the temporal verification of an implementation of concurrent lock-coupling skiplists. Concurrent lock-coupling skiplists are a particular version of skiplists in which every node contains a lock at each possible level, reducing the granularity of mutual exclusion sections.

The first contribution of this paper is the theory TSL_K. TSL_K is a decidable theory capable of reasoning about list reachability, locks, ordered lists, and sublists of ordered lists. The second contribution is a proof that TSL_K enjoys a finite model property and is thus decidable. Finally, we show how to reduce the satisfiability problem of quantifier-free TSL_K formulas to a combination of theories for which a many-sorted version of Nelson-Oppen can be applied.

Alejandro Sánchez, César Sánchez
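A minimal sequential skiplist sketch, showing the layered singly-linked lists the theory reasons about. No locks or concurrency are modeled, and node heights are fixed by hand rather than chosen randomly as in a real implementation:

```python
# A minimal sequential skiplist: several ordered singly-linked lists,
# where each higher level skips over nodes of the level below.  Search
# descends a level whenever the next key on this level would overshoot.

class Node:
    def __init__(self, key, height):
        self.key = key
        self.next = [None] * height   # one forward pointer per level

def build(keys_with_heights, max_level):
    """Link pre-sorted (key, height) pairs into a skiplist; heights are
    fixed by hand here instead of the usual random choice."""
    head = Node(float("-inf"), max_level)
    last = [head] * max_level         # last node seen at each level
    for key, height in sorted(keys_with_heights):
        node = Node(key, height)
        for lvl in range(height):
            last[lvl].next[lvl] = node
            last[lvl] = node
    return head

def contains(head, key):
    node = head
    for lvl in reversed(range(len(head.next))):
        while node.next[lvl] is not None and node.next[lvl].key < key:
            node = node.next[lvl]     # advance along the current level
    nxt = node.next[0]                # candidate on the bottom list
    return nxt is not None and nxt.key == key

sl = build([(1, 3), (4, 1), (6, 2), (9, 1)], max_level=3)
```

The lock-coupling variant verified in the paper would additionally attach a lock per node per level; the memory layout being traversed is the same.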
CORAL: Solving Complex Constraints for Symbolic PathFinder

Symbolic execution is a powerful automated technique for generating test cases. Its goal is to achieve high coverage of software. One major obstacle to adopting the technique in practice is its inability to handle complex mathematical constraints. To address the problem, we have integrated CORAL’s heuristic solvers into NASA Ames’ Symbolic PathFinder symbolic execution tool. CORAL’s solvers have been designed to deal with mathematical constraints, and their heuristics have been improved based on examples from the aerospace domain. This integration significantly broadens the application of Symbolic PathFinder at NASA and in industry.

Matheus Souza, Mateus Borges, Marcelo d’Amorim, Corina S. Păsăreanu
Automated Formal Verification of the TTEthernet Synchronization Quality

Clock synchronization is the foundation of distributed real-time architectures such as the Time-Triggered Architecture. Keeping the local clocks synchronized is particularly important for fault tolerance, as it allows the use of simple and effective fault-tolerance algorithms that have been developed in the synchronous system model.

Clock synchronization algorithms have been extensively studied since the 1980s, and many fundamental results have been established. Traditionally, the correctness of a new clock synchronization algorithm is shown by reduction to these results. Until now, formal proofs of correctness all relied on interactive theorem provers such as PVS or Isabelle/HOL. In this paper, we present an automated proof of the TTEthernet clock-synchronization algorithm that is based on the SAL model checker.

Wilfried Steiner, Bruno Dutertre
Extending the GWV Security Policy and Its Modular Application to a Separation Kernel

Nowadays, formal methods are required for high-assurance security and safety systems. Formal methods allow a precise specification and a deep analysis of system designs. However, the use of formal methods in a certification process can be very expensive. In this context, we analyse the security policy proposed by Greve et al. in the theorem prover Isabelle/HOL. We show how this policy, with some extensions, can be applied in a modular way and hence reduce the number of formal models and artifacts to certify. Thus, we show how the security policy for a separation kernel is derived from the security policy of the micro-kernel that forms the basis of the separation kernel. We apply our approach to an example derived from an industrial real-time operating system.

Sergey Tverdyshev
Combining Partial-Order Reduction and Symbolic Model Checking to Verify LTL Properties

BDD-based symbolic techniques and partial-order reduction (POR) are two fruitful approaches to dealing with the combinatorial explosion of model checking. Unfortunately, past experience has shown that BDD-based techniques do not work well for loosely synchronized models, whereas POR methods allow explicit-state model checkers to deal with large concurrent models. This paper presents an algorithm that combines symbolic model checking and POR to verify linear temporal logic properties without the next operator (LTL\X); the combined algorithm performs better on models featuring asynchronous processes. Our algorithm adapts and combines three methods: Clarke et al.’s tableau-based symbolic LTL model checking, Iwashita et al.’s forward symbolic CTL model checking, and Lerda et al.’s ImProviso symbolic reachability with POR. We present our approach, outline the proof of its correctness, and present a prototype implementation and an evaluation on two examples.

José Vander Meulen, Charles Pecheur
Towards Informed Swarm Verification

In this paper, we propose a new method to perform large-scale grid model checking. A manager distributes the workload over many embarrassingly parallel jobs. Little communication is needed between a worker and the manager, and only once the worker is ready for more work. The novelty here is that the individual jobs together form a so-called cumulatively exhaustive set: even though each job explores only a part of the state space, together the tasks explore all states reachable from the initial state.

Anton Wijs
Scaling Up with Event-B: A Case Study

The ability to scale up from toy examples to real-life problems is a crucial issue for formal methods. While formalizing an algorithm used in vehicle automation (platooning control) with certification in mind, we had the opportunity to study the scaling up when going from a (toy) model in 1D to a (more realistic) model in 2D. The formalism, Event-B, belongs to the family of mathematical state-based methods. The increase was quantitative, 3 times more events and 4 times more proofs, and qualitative: trigonometric functions and integrals are used. Editing and verification of the specification scale up well. The crucial part of the work was the adaptation of the mathematical and physical model through standard heuristics. The validation of temporal properties and behaviors does not scale up so well. Analysis of the difficulties suggests improvements in both tool support and formalism.

Faqing Yang, Jean-Pierre Jacquot

Tool Papers

D-Finder 2: Towards Efficient Correctness of Incremental Design

D-Finder 2 is a new tool for deadlock detection in concurrent systems, based on effective invariant computation to approximate the effects of interactions among modules. It is part of the BIP framework, which provides various tools centered on a component-based language for incremental design. The presented tool shares its theoretical roots with a previous implementation, but was completely rewritten to take advantage of a new version of BIP and various new results on the theory of invariant computation. The improvements are demonstrated by a comparison with the previous implementation and by new results on a practical case study.
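The underlying proof rule can be sketched as follows (an illustrative toy, not D-Finder's implementation): if the conjunction of the computed invariant and the characterization of deadlocked states is unsatisfiable, the system is deadlock-free. Here the state space is small enough to check by enumeration.

```python
from itertools import product

# Toy global states: each of two components either holds the single
# token (1) or not (0). The interaction invariant (illustrative; D-Finder
# computes such invariants from the interaction structure) says the
# token is never lost or duplicated.
def interaction_inv(a, b):
    return a + b == 1

# A state is deadlocked when no interaction is enabled; in this toy
# system every interaction needs the token, so deadlock = nobody holds it.
def deadlocked(a, b):
    return a == 0 and b == 0

# Proof rule: invariant AND deadlock unsatisfiable => deadlock-free.
witnesses = [(a, b) for a, b in product([0, 1], repeat=2)
             if interaction_inv(a, b) and deadlocked(a, b)]
deadlock_free = not witnesses
```

In D-Finder this unsatisfiability check is discharged symbolically rather than by enumeration.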

Saddek Bensalem, Andreas Griesmayer, Axel Legay, Thanh-Hung Nguyen, Joseph Sifakis, Rongjie Yan
Infer: An Automatic Program Verifier for Memory Safety of C Programs

Infer is a new automatic program verification tool aimed at proving memory safety of C programs. It attempts to build a compositional proof of the program at hand by composing proofs of its constituent modules (functions/procedures). Bugs are extracted from failures of proof attempts. We describe the main features of Infer and some of the main ideas behind it.

Cristiano Calcagno, Dino Distefano
Model Construction and Priority Synthesis for Simple Interaction Systems

VissBIP is a software tool for visualizing and automatically orchestrating component-based systems consisting of a set of components and their possible interactions. The graphical interface of VissBIP allows the user to interactively construct BIP models [3], from which executable code (C/C++) is generated. The main contribution of VissBIP is an analysis and synthesis engine for orchestrating components. Given a set of BIP components together with their possible interactions and a safety property, the VissBIP synthesis engine restricts the set of possible interactions in order to rule out unsafe states. The synthesis engine of VissBIP is based on automata-based (game-theoretic) notions. It checks whether the system satisfies a given safety property. If the check fails, the tool automatically generates additional constraints on the interactions that ensure the desired property. The generated constraints define priorities between interactions and are therefore well suited for conflict resolution between components.
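The game-theoretic core of such safety synthesis can be sketched as a backward fixpoint (a hedged illustration, not VissBIP's code): compute the losing region from which unsafe states cannot be avoided, then forbid interactions that would enter it.

```python
# Toy transition system: state -> {interaction: successor}.
trans = {
    "s0": {"a": "s1", "b": "s2"},
    "s1": {"c": "s3"},
    "s2": {"c": "s0"},
    "s3": {},                      # unsafe sink
}
unsafe = {"s3"}

# Losing region: unsafe states, plus states whose every enabled
# interaction leads into the losing region.
losing = set(unsafe)
changed = True
while changed:
    changed = False
    for s, moves in trans.items():
        if s not in losing and moves and all(t in losing for t in moves.values()):
            losing.add(s)
            changed = True

# Synthesized restriction: forbid interactions entering the losing region
# from states that are still winning.
forbidden = {(s, i) for s, moves in trans.items() if s not in losing
             for i, t in moves.items() if t in losing}
```

Here "s1" joins the losing region (its only move reaches "s3"), and the synthesized constraint forbids interaction "a" in "s0"; in VissBIP such constraints are expressed as priorities between interactions.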

Chih-Hong Cheng, Saddek Bensalem, Barbara Jobstmann, Rongjie Yan, Alois Knoll, Harald Ruess
OpenJML: JML for Java 7 by Extending OpenJDK

The Java Modeling Language (JML) is a widely used specification language for Java. However, tool support has not kept pace with advances in the Java language. This paper describes OpenJML, an implementation of JML tools built by extending the OpenJDK Java tool set. OpenJDK has a readily extensible architecture, though its details could be revised to further facilitate extension. The result is a suite of JML tools for Java 7 that provides static analysis, specification documentation, and runtime checking; offers an API that can be used by other tools; uses Eclipse as an IDE; and can be extended for further research. In addition, OpenJML can leverage the community effort devoted to OpenJDK.

David R. Cok
jSMTLIB: Tutorial, Validation and Adapter Tools for SMT-LIBv2

The SMT-LIB standard defines an input format and response requirements for Satisfiability-Modulo-Theories automated reasoning tools. The standard has been an incentive to improving and comparing the increasing supply of SMT solvers. It could also be more widely used in applications, providing a uniform interface and portability across different SMT tools. This tool paper describes a tutorial and accompanying software package, jSMTLIB, that will help users of SMT solvers understand and apply the newly revised SMT-LIB format; the tutorial also describes fine points of the SMT-LIB format which, along with a compliance suite, will be useful to SMT implementors. Finally, the tool suite includes adapters that allow using some older solvers, such as Simplify, as SMT-LIB compliant tools.
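For orientation, a minimal SMT-LIB v2 script looks like the following (held here as a Python string, with a deliberately naive well-formedness check; jSMTLIB's actual parser and compliance suite are far more thorough):

```python
# A minimal SMT-LIB v2 script: set a logic, declare a symbol,
# assert a constraint, and ask for satisfiability.
script = """
(set-logic QF_LIA)
(declare-fun x () Int)
(assert (> x 0))
(check-sat)
(exit)
"""

def balanced(text):
    """Naive syntactic check: parentheses form balanced S-expressions."""
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:        # a closer with no matching opener
                return False
    return depth == 0

assert balanced(script)
```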

David R. Cok
opaal: A Lattice Model Checker

We present a new open source model checker, opaal, for automatic verification of models using lattice automata. Lattice automata allow the users to incorporate abstractions of a model into the model itself. This provides an efficient verification procedure, while giving the user fine-grained control of the level of abstraction by using a method similar to Counter-Example Guided Abstraction Refinement. The opaal engine supports a subset of the UPPAAL timed automata language extended with lattice features. We report on the status of the first public release of opaal, and demonstrate how opaal can be used for efficient verification on examples from domains such as database programs, lossy communication protocols and cache analysis.

Andreas Engelbredt Dalsgaard, René Rydhof Hansen, Kenneth Yrke Jørgensen, Kim Gulstrand Larsen, Mads Chr. Olesen, Petur Olsen, Jiří Srba
A Tabular Expression Toolbox for Matlab/Simulink

Tabular expressions have been used successfully in developing safety-critical systems; however, insufficient tool support has hampered their wider adoption. To address this shortfall we have developed the Tabular Expression Toolbox for Matlab/Simulink. An intuitive user interface allows users to easily create, modify, and check the completeness and disjointness of tabular expressions using the theorem prover PVS or the SMT solver CVC3. The tabular expressions are translated to m-functions, allowing their seamless use with Matlab's simulation and code generation.
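The two checks mentioned above can be illustrated on a toy table (a sketch by brute-force sampling; the toolbox instead proves these conditions for all inputs with PVS or CVC3): completeness means some row applies to every input, disjointness means no two rows overlap.

```python
# A tabular expression for sign(x), as a list of (condition, result) rows.
table = [
    (lambda x: x < 0,  -1),
    (lambda x: x == 0,  0),
    (lambda x: x > 0,   1),
]

def check(table, samples):
    """Brute-force completeness/disjointness check over sample inputs."""
    complete = all(any(cond(x) for cond, _ in table) for x in samples)
    disjoint = all(sum(cond(x) for cond, _ in table) <= 1 for x in samples)
    return complete, disjoint

complete, disjoint = check(table, range(-5, 6))
```

Replacing the middle row's condition with, say, `x <= 0` would make the table fail the disjointness check at `x = 0`.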

Colin Eles, Mark Lawford
LLVM2CSP: Extracting CSP Models from Concurrent Programs

In this paper, we present the llvm2csp tool, which extracts CSP models from the LLVM compiler intermediate representation of concurrent programs. The generation of CSP models is controlled by user annotations and designed to create models at different levels of abstraction for subsequent analysis with standard CSP tools.

Moritz Kleine, Björn Bartels, Thomas Göthel, Steffen Helke, Dirk Prenzel
Multi-Core LTSmin: Marrying Modularity and Scalability

The LTSmin toolset provides multiple generation and on-the-fly analysis algorithms for large graphs (state spaces), typically generated from concise behavioral specifications (models) of systems. LTSmin supports a variety of input languages, but its key feature is modularity: language frontends, optimization layers, and algorithmic backends are completely decoupled, without sacrificing performance. To complement our existing symbolic and distributed model checking algorithms, we added a multi-core backend for checking safety properties, with several new features to improve efficiency and memory usage: low-overhead load balancing, incremental hashing and scalable state compression.
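Incremental hashing of state vectors can be sketched Zobrist-style (an illustration of the general idea, not LTSmin's exact scheme): the hash is the XOR of a random code per (slot, value) pair, so changing one slot updates the hash in O(1) instead of rehashing the whole vector.

```python
import random

# One 64-bit random code per (slot, value) pair; assumes 4 slots with
# values in range(16) for this toy example.
random.seed(42)
CODES = [[random.getrandbits(64) for _ in range(16)] for _ in range(4)]

def full_hash(state):
    """Hash a whole state vector: XOR of the per-slot codes."""
    h = 0
    for slot, value in enumerate(state):
        h ^= CODES[slot][value]
    return h

def update_hash(h, slot, old, new):
    """O(1) update after one slot changes: XOR out old, XOR in new."""
    return h ^ CODES[slot][old] ^ CODES[slot][new]

state = [3, 0, 7, 1]
h = full_hash(state)
state[2] = 9                       # successor differs in one slot
h = update_hash(h, 2, 7, 9)
assert h == full_hash(state)       # incremental == recomputed
```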

Alfons Laarman, Jaco van de Pol, Michael Weber
GiNaCRA: A C++ Library for Real Algebraic Computations

We present the growing C++ library GiNaCRA, which provides efficient and easy-to-integrate data structures and methods for real algebra. It is based on the C++ library GiNaC, supporting the symbolic representation and manipulation of polynomials. In contrast to other similar tools, our open source library aids exact, real algebraic computations based on an appropriate data type representing real zeros of polynomials. The only non-standard library GiNaCRA depends on is GiNaC, which makes the installation and usage of our library simple. Our long-term goal is to integrate decision procedures for real algebra within the Satisfiability-Modulo-Theories (SMT) context and thereby provide tool support for many applied formal methods.
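The data type alluded to above represents a real algebraic number exactly as a defining polynomial plus an isolating interval with rational endpoints. A minimal sketch of the idea (real implementations such as GiNaCRA isolate roots with Sturm sequences rather than being handed an interval):

```python
from fractions import Fraction

# The positive root of p(x) = x^2 - 2, i.e. sqrt(2), represented by
# its polynomial and the isolating interval (1, 2).
def p(x):
    return x * x - 2

lo, hi = Fraction(1), Fraction(2)          # invariant: p(lo) < 0 < p(hi)
while hi - lo > Fraction(1, 10**12):       # refine by exact bisection
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid

approx = float((lo + hi) / 2)              # arbitrarily close to sqrt(2)
```

All interval arithmetic is exact rational arithmetic; only the final conversion to `float` is approximate, and the interval can be refined to any desired precision.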

Ulrich Loup, Erika Ábrahám
Kopitiam: Modular Incremental Interactive Full Functional Static Verification of Java Code

We are developing Kopitiam, a tool for interactively proving full functional correctness of Java programs using separation logic, built on the interactive theorem prover Coq. Kopitiam is an Eclipse plugin, enabling seamless integration into a developer's workflow. Kopitiam enables a user to develop proofs side-by-side with Java programs in Eclipse.
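The modular, per-procedure reasoning that separation logic affords rests on the frame rule (a standard rule of separation logic, not specific to Kopitiam), which preserves a proof under extension with a disjoint heap portion:

```latex
% frame rule: a specification survives composition with a disjoint heap R
\frac{\{P\}\; C \;\{Q\}}
     {\{P \ast R\}\; C \;\{Q \ast R\}}
\qquad (\text{no variable modified by } C \text{ occurs free in } R)
```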

Hannes Mehnert
Milestones: A Model Checker Combining Symbolic Model Checking and Partial Order Reduction

Symbolic techniques and partial order reduction (POR) are two fruitful approaches to dealing with the combinatorial explosion in model checking. Unfortunately, past experience has shown that symbolic techniques do not work well for loosely synchronized models, whereas POR methods allow explicit-state model checkers to deal with large concurrent models. This paper presents the Milestones model checker, which combines symbolic techniques and POR. Its goal is to verify temporal properties of concurrent systems: Milestones can check the absence of deadlock, LTL properties, and CTL properties. To allow comparison with other approaches, Milestones can translate a model into an equivalent Spin model [7] or NuSMV model [4]. We briefly present the theoretical foundation on which Milestones is based, then present the Milestones model checker and an evaluation based on an example.

José Vander Meulen, Charles Pecheur
Backmatter
Metadata
Title
NASA Formal Methods
Editors
Mihaela Bobaru
Klaus Havelund
Gerard J. Holzmann
Rajeev Joshi
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-20398-5
Print ISBN
978-3-642-20397-8
DOI
https://doi.org/10.1007/978-3-642-20398-5
