2008 | Book

FM 2008: Formal Methods

15th International Symposium on Formal Methods, Turku, Finland, May 26-30, 2008 Proceedings

Edited by: Jorge Cuellar, Tom Maibaum, Kaisa Sere

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

This book presents the refereed proceedings of the 15th International Symposium on Formal Methods, FM 2008, held in Turku, Finland in May 2008. The 23 revised full papers presented together with 4 invited contributions and extended abstracts of 5 invited industrial presentations were carefully reviewed and selected from 106 submissions. The papers are organized in topical sections on programming language analysis, verification, real-time and concurrency, grand challenge problems, FM practice, runtime monitoring and analysis, communication, constraint analysis, and design.

Table of Contents

Frontmatter

Session 1. Invited Talks

Aspects and Formal Methods

Aspects are now commonly used to add functionality that otherwise would cut across the structure of object systems. In this survey, both directions in the connection between aspects and formal methods are examined. On the one hand, the use of aspects to facilitate (general) software verification, and especially model checking, is demonstrated. On the other hand, the new challenges to formal specification and verification posed by aspects are defined, and several existing solutions are described.

Shmuel Katz
Getting Formal Verification into Design Flow

The ultimate goal of formal methods is to provide assurances about the quality, performance, security, etc. of systems. While formal tools have advanced greatly over the past two decades, widespread proliferation has not yet occurred, and the full impact of formal methods is still to be realized. This paper presents some ideas on how to catalyze the growth of formal techniques in day-to-day engineering practice. We draw on our experience as hardware engineers who want to use, and have tried to use, formal methods in our own designs. The points we make have probably been made before; however, we illustrate each one with concrete designs. Our examples support three major themes: (1) correctness depends highly on the application, and even a collection of formal methods cannot handle the whole problem; (2) high-level design languages can facilitate the interaction between design and formal methods; and (3) formal method tools should be presented as integrated debugging aids, as opposed to tools that require mastering a foreign language or esoteric concepts.

Arvind, Nirav Dave, Michael Katelman
Lessons in the Weird and Unexpected: Some Experiences from Checking Large Real Systems

This talk will draw on our efforts in using static analysis, model checking, and symbolic execution to find bugs in real code, both in academic and commercial settings. The unifying religion driving all these efforts has been: results matter more than anything. That which works is good, that which does not is not. While this worldview is simple, reality is not. I will discuss some of what we learned in struggling with this mismatch.

Dawson Engler
Simulation, Orchestration and Logical Clocks

A language in which discrete event simulations can be coded needs to support features to (1) describe the behavior of a single physical process, (2) describe concurrent activities of multiple physical processes, including communication, synchronization and interruption, (3) account for the passage of time, and (4) record system state at appropriate points and create statistical summaries. Orc, a recent language for orchestration of distributed services, combines these features so that complex simulations can be expressed very succinctly. This talk describes the relevant features of Orc for simulation and illustrates them using a number of realistic examples. Additionally, we show that certain combinatorial problems, such as shortest paths in graphs and many problems in computational geometry, can be cast as simulation problems and solved very simply in Orc.
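
For readers unfamiliar with discrete event simulation, the following minimal Java sketch (our own generic illustration, not Orc and not taken from the talk) shows features (3) and (4) above: a clock advanced by a time-ordered event queue and state recorded at each step. All names are hypothetical.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Minimal discrete-event loop: events are (time, action) pairs ordered by time.
    final class MiniSim {
        record Event(double time, Runnable action) {}

        private final PriorityQueue<Event> queue =
                new PriorityQueue<>(Comparator.comparingDouble(Event::time));
        private double clock = 0.0;                      // feature (3): passage of time

        void schedule(double delay, Runnable action) {
            queue.add(new Event(clock + delay, action)); // processes contribute actions
        }

        void run() {
            while (!queue.isEmpty()) {
                Event e = queue.poll();
                clock = e.time();
                e.action().run();
                System.out.println("t = " + clock);      // feature (4): record system state
            }
        }
    }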

David Kitchen, Evan Powell, Jayadev Misra

Session 2. Programming Language Analysis

CoVaC: Compiler Validation by Program Analysis of the Cross-Product

The paper presents a deductive framework for proving program equivalence and its application to automatic verification of transformations performed by optimizing compilers. To leverage existing program analysis techniques, we reduce the equivalence checking problem to analysis of one system – a cross-product of the two input programs. We show how the approach can be effectively used for checking equivalence of consonant (i.e., structurally similar) programs. Finally, we report on the prototype tool that applies the developed methodology to verify that a compiler optimization run preserves the program semantics. Unlike existing frameworks, CoVaC accommodates absence of compiler annotations and handles most of the classical intraprocedural optimizations such as constant folding, reassociation, common subexpression elimination, code motion, dead code elimination, branch optimizations, and others.
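
To make the notion of consonant programs concrete, here is a hypothetical Java pair of the kind such an equivalence checker would relate (our illustration, not from the paper): the second version is what a compiler might emit after constant folding and dead code elimination, and the two share the same branch and loop structure.

    // Two "consonant" (structurally similar) versions of the same computation.
    // A CoVaC-style checker would analyze the cross-product of the two programs
    // to establish their equivalence.
    final class ScaleSum {
        static int original(int[] a) {
            int factor = 2 * 3;        // constant expression
            int unused = factor + 1;   // dead code
            int sum = 0;
            for (int i = 0; i < a.length; i++) {
                sum += factor * a[i];
            }
            return sum;
        }

        static int optimized(int[] a) { // after constant folding (2*3 -> 6) and DCE
            int sum = 0;
            for (int i = 0; i < a.length; i++) {
                sum += 6 * a[i];
            }
            return sum;
        }
    }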

Anna Zaks, Amir Pnueli
Lazy Behavioral Subtyping

Late binding allows flexible code reuse but complicates formal reasoning significantly, as a method call’s receiver class is not statically known. This is especially true when programs are incrementally developed by extending class hierarchies. This paper develops a novel method to reason about late bound method calls. In contrast to traditional behavioral subtyping, reverification is avoided without restricting method overriding to fully behavior-preserving redefinition. The approach ensures that when analyzing the methods of a class, it suffices to consider that class and its superclasses. Thus, the full class hierarchy is not needed, and incremental reasoning is supported. We formalize this approach as a calculus which lazily imposes context-dependent subtyping constraints on method definitions. The calculus ensures that all method specifications required by late bound calls remain satisfied when new classes extend a class hierarchy. The calculus does not depend on a specific program logic, but the examples in the paper use a Hoare-style proof system. We show soundness of the analysis method.
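
As a small illustration of the problem being addressed (our own hypothetical Java example, not taken from the paper): the call to size() below is late bound, so a proof about isEmpty() done against Base cannot simply be reused once a subclass overrides size().

    class Base {
        // Informal contract assumed by callers: size() returns a non-negative value.
        int size() { return 0; }

        // Late-bound call: which size() runs depends on the dynamic class of the receiver.
        boolean isEmpty() { return size() == 0; }
    }

    class Sub extends Base {
        @Override
        int size() { return -1; }   // a redefinition that is not behavior-preserving
    }

Roughly, the calculus described here imposes on later overrides of size() only the constraints that verified callers such as isEmpty() actually rely on, rather than the full original specification, which is what avoids reverification without requiring fully behavior-preserving redefinition.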

Johan Dovland, Einar Broch Johnsen, Olaf Owe, Martin Steffen
Checking Well-Formedness of Pure-Method Specifications

Contract languages such as JML and Spec# specify invariants and pre- and postconditions using side-effect free expressions of the programming language, in particular, pure methods. For such contracts to be meaningful, they must be well-formed: First, they must respect the partiality of operations, for instance, the preconditions of pure methods used in the contract. Second, they must enable a consistent encoding of pure methods in a program logic, which requires that their specifications are satisfiable and that recursive specifications are well-founded.

This paper presents a technique to check well-formedness of contracts. We give proof obligations that are sufficient to guarantee the existence of a model for the specification of pure methods. We improve over earlier work by providing a systematic solution including a soundness result and by supporting more forms of recursive specifications. Our technique has been implemented in the Spec# programming system.
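
For readers unfamiliar with such contracts, the following hypothetical JML-style fragment (ours, not from the paper) shows the two well-formedness concerns: a contract must respect the partiality of the pure methods it calls, and a recursive specification must be satisfiable and well-founded.

    class Node {
        Node child;

        // Recursive specification: the postcondition of depth() mentions depth() itself,
        // so its encoding must be satisfiable and the recursion well-founded.
        //@ ensures \result == (child == null ? 0 : child.depth() + 1);
        /*@ pure @*/ int depth() {
            return child == null ? 0 : child.depth() + 1;
        }

        // Respecting partiality: child.depth() is only meaningful when child != null,
        // so the precondition guards its use in the postcondition.
        //@ requires child != null;
        //@ ensures \result == child.depth() + 1;
        int childDepthPlusOne() {
            return child.depth() + 1;
        }
    }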

Arsenii Rudich, Ádám Darvas, Peter Müller

Session 3. Verification

Verifying Dynamic Pointer-Manipulating Threads

We present a novel approach to the verification of concurrent pointer-manipulating programs with dynamic thread creation and memory allocation as well as destructive updates operating on arbitrary (possibly cyclic) singly-linked data structures. Correctness properties of such programs are expressed by combining a simple pointer logic for specifying heap properties with linear-time (LTL) operators for reasoning about system executions. To automatically solve the corresponding model-checking problem, which is undecidable in general, we abstract from non-interrupted sublists in the heap, resulting in a finite-state representation of the data space. We also show that the control flow of a concurrent program with unbounded thread creation can be characterized by a Petri net, making LTL model checking decidable (though not feasible in practice). In a second abstraction step we also derive a finite-state representation of the control flow, which then allows us to employ standard LTL model checking techniques.

Thomas Noll, Stefan Rieger
Proofs and Refutations for Probabilistic Refinement

We consider the issue of finding and presenting counterexamples to a claim “this spec is implemented by that imp”, that is $\textit{spec} \sqsubseteq \textit{imp}$ (refinement), in the context of probabilistic systems: using a geometric interpretation of the probabilistic/demonic semantic domain we are able to encode both refinement success and refinement failure as linear satisfaction problems, which can then be analysed automatically by an SMT solver. This allows the automatic discovery of certificates for counterexamples in independently and efficiently checkable form. In many cases the counterexamples can subsequently be converted into “source level” hints for the verifier.

A. K. McIver, C. C. Morgan, C. Gonzalia
Assume-Guarantee Verification for Interface Automata

Interface automata provide a formalism capturing the high level interactions between software components. Checking compatibility, and other safety properties, in an automata-based system suffers from the scalability issues inherent in exhaustive techniques such as model checking. This work develops a theoretical framework and automated algorithms for modular verification of interface automata. We propose sound and complete assume-guarantee rules for interface automata, and learning-based algorithms to automate assumption generation. Our algorithms have been implemented in a practical model-checking tool and have been applied to a realistic NASA case study.
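
For orientation, the classical asymmetric assume-guarantee rule from earlier work on labeled transition systems, which learning-based frameworks of this kind build on and which this paper adapts to interface automata, has the following shape, where $A$ is the assumption discovered automatically by a learning algorithm (the notation here is ours, not the paper’s):

$\dfrac{\langle A \rangle\, M_1\, \langle P \rangle \qquad \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}{\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}$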

Michael Emmi, Dimitra Giannakopoulou, Corina S. Păsăreanu

Session 4. Real-Time and Concurrency

Automated Verification of Dense-Time MTL Specifications Via Discrete-Time Approximation

This paper presents a verification technique for dense-time MTL based on discretization. The technique reduces the validity problem of MTL formulas from dense to discrete time, through the notion of sampling invariance, introduced in previous work [13]. Since the reduction is from an undecidable problem to a decidable one, the technique is necessarily incomplete, so it fails to provide conclusive answers for some formulas. The paper discusses this shortcoming and hints at how it can be mitigated in practice. The verification technique has been implemented on top of the ℤot tool [19] for discrete-time bounded validity checking; the paper also reports on in-the-small experiments with the tool, which show some results that are promising in terms of performance.

Carlo A. Furia, Matteo Pradella, Matteo Rossi
A Model Checking Language for Concurrent Value-Passing Systems

Modal μ-calculus is an expressive specification formalism for temporal properties of concurrent programs represented as Labeled Transition Systems (LTSs). However, its practical use is hampered by the complexity of the formulas, which makes the specification task difficult and error-prone. In this paper, we propose MCL (Model Checking Language), an enhancement of modal μ-calculus with high-level operators aimed at improving expressiveness and conciseness of formulas. The main MCL ingredients are parameterized fixed points, action patterns extracting data values from LTS actions, modalities on transition sequences described using extended regular expressions and programming language constructs, and an infinite looping operator specifying fairness. We also present a method for on-the-fly model checking of MCL formulas on finite LTSs, based on the local resolution of boolean equation systems, which has a linear-time complexity for alternation-free and fairness formulas. MCL is supported by the EVALUATOR 4.0 model checker developed within the CADP verification toolbox.

Radu Mateescu, Damien Thivolle

Session 5. Grand Challenge Problems

Verification of Mondex Electronic Purses with KIV: From a Security Protocol to Verified Code

We present a verified JavaCard implementation for the Mondex Verification Challenge. This completes a series of verification efforts that we made to verify the Mondex case study starting at abstract transaction specifications, continuing with an introduction of a security protocol and now finally the refinement of this protocol to running source code. We show that current verification techniques and tool support are not only suitable to verify the original case study as stated in the Grand Challenge but also can cope with extensions of it resulting in verified and running code. The Mondex verification presented in this paper is the first one that carries security properties proven on an abstract level to an implementation level using refinement.

Holger Grandy, Markus Bischof, Kurt Stenzel, Gerhard Schellhorn, Wolfgang Reif
Incremental Development of a Distributed Real-Time Model of a Cardiac Pacing System Using VDM

The construction of formal models of real-time distributed systems is a considerable practical challenge. We propose and illustrate a pragmatic incremental approach in which detail is progressively added to abstract system-level specifications of functional and timing properties via intermediate models that express system architecture, concurrency and timing behaviour. The approach is illustrated by developing a new formal model of the cardiac pacemaker system proposed as a “grand challenge” problem in 2007. The models are expressed using the Vienna Development Method (VDM) and are validated primarily by scenario-based tests, including the analysis of timed traces. We argue that the insight gained using this staged modelling approach will be valuable in the subsequent development of implementations, and in detecting potential bottlenecks within suggested implementation architectures.

Hugo Daniel Macedo, Peter Gorm Larsen, John Fitzgerald

Session 6. FM Practice

Industrial Use of Formal Methods for a High-Level Security Evaluation

This paper presents an effective use of formal methods for the development and for the security certification of smart card software. The approach is based on the Common Criteria’s methodology that requires the use of formal methods to prove that a product implements the claimed security level. This work led to the world-first certification of a commercial Java Card™ product involving all formal assurances needed to reach the highest security level. For this certification, formal methods have been used for the design and the implementation of the security functions of the Java Card system embedded in the product. We describe the refinement scheme used to meet the Common Criteria’s requirements on formal models and proofs. In particular, we show how to build the proof that the implementation ensures the security objectives claimed in the security specification. We also provide some lessons learned from this important application of formal methods to the smart cards industry.

Boutheina Chetali, Quang-Huy Nguyen
Secret Ninja Formal Methods

The use of formal methods can significantly improve software quality. However, many instructors and students consider formal methods to be too difficult, impractical, and esoteric for use in undergraduate classes. This paper describes a method, used successfully at several universities, that combines ninja stealth with the latest advances in formal methods tools and technologies to integrate applied formal methods into software engineering courses.

Joseph R. Kiniry, Daniel M. Zimmerman
Specification and Checking of Software Contracts for Conditional Information Flow

Information assurance applications providing Multi-Level Secure (MLS) solutions must often implement information flow policies that are conditional in the sense that data is allowed to flow between system components only when the system satisfies certain state predicates. However, existing specification and verification environments, such as SPARK, used to develop such applications, are capable of capturing only unconditional information flows. Motivated by the need to better formally specify and certify MLS applications in industrial contexts, we present an enhancement of the SPARK system that enables specification, inference, and compositional checking of conditional information flow contracts. We report on the use of this framework for a collection of SPARK examples.

Torben Amtoft, John Hatcliff, Edwin Rodríguez, Robby, Jonathan Hoag, David Greve

Session 7. Runtime Monitoring and Analysis

JML Runtime Assertion Checking: Improved Error Reporting and Efficiency Using Strong Validity

The Java Modeling Language (JML) recently switched to an assertion semantics based on “strong validity”, in which an assertion is taken to be valid precisely when it is defined and true. Elsewhere we have shared our positive experiences with the realization and use of this new semantics in the context of ESC/Java2. In this paper, we describe the challenges faced by and the redesign required for the implementation of the new semantics in the JML Runtime Assertion Checker (RAC) compiler. Not only is the new semantics effective at helping developers identify formerly undetected errors in specifications, but we also demonstrate how the realization of the new semantics is more efficient, resulting in more compact instrumented code that runs slightly faster. More importantly, under the new semantics, the JML RAC can now compile sizeable JML-annotated Java programs (such as ESC/Java2) that it was unable to compile before.
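
A hypothetical JML-annotated fragment (ours, not from the paper) illustrating why the semantics matters: the precondition below is neither true nor false when items is null, and under strong validity such an undefined assertion is reported as a problem in the specification itself rather than being silently coerced to a boolean.

    class Cart {
        int[] items;

        // When items is null this precondition is undefined, not merely false; under
        // the "strong validity" semantics the runtime assertion checker can flag the
        // specification itself instead of reporting an ordinary precondition violation.
        //@ requires items[0] > 0;
        int firstItem() {
            return items[0];
        }
    }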

Patrice Chalin, Frédéric Rioux
Provably Correct Runtime Monitoring
(Extended Abstract)

Runtime monitoring is an established technique for enforcing a wide range of program safety and security properties. We present a formalization of monitoring and monitor inlining, for the Java Virtual Machine. Monitors are security automata given in a special-purpose monitor specification language, ConSpec. The automata operate on finite or infinite strings of calls to a fixed API, allowing local dependencies on parameter values and heap content. We use a two-level class file annotation scheme to characterize two key properties: (i) that the program is correct with respect to the monitor as a constraint on allowed program behavior, and (ii) that the program has an instance of the given monitor embedded into it, which yields state changes at prescribed points according to the monitor’s transition function. As our main application of these results we describe a concrete inliner, and use the annotation scheme to characterize its correctness. For this inliner, correctness of the level II annotations can be decided efficiently by a weakest precondition annotation checker, thus allowing on-device checking of inlining correctness in a proof-carrying code setting.
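
The following hypothetical Java sketch (ConSpec itself is a separate specification language; this is only our illustration of the inlining idea) shows a monitor embedded for a policy such as “connect may only be called after permission has been granted”: the inliner adds a state field and a guard around the monitored API call, and the annotations described above would characterize exactly these state changes.

    class InlinedClient {
        // Monitor state added by the inliner; it mirrors the security automaton's state.
        private static boolean permissionGranted = false;

        static void grantPermission() {
            permissionGranted = true;   // monitor transition for the "grant" event
        }

        static void connect(String host) {
            // Guard inserted before the monitored API call: if the automaton has no
            // transition for this call in the current state, the policy is violated
            // and execution is stopped.
            if (!permissionGranted) {
                throw new SecurityException("policy violation: connect before grant");
            }
            // ... the actual call to the monitored API would follow here ...
        }
    }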

Irem Aktug, Mads Dam, Dilian Gurov

Session 8. Communication

A Schedulerless Semantics of TLM Models Written in SystemC Via Translation into LOTOS

TLM (Transaction-Level Modeling) was introduced to cope with the increasing complexity of Systems-on-Chip designs by raising the modeling level. Currently, TLM is primarily used for system-level functional testing and simulation using the SystemC C++ API widely accepted in industry. Nevertheless, TLM requires a careful handling of asynchronous concurrency. In this paper, we give a semantics to TLM models written in SystemC via a translation into the process algebra LOTOS, enabling the verification of the models with the CADP toolbox dedicated to asynchronous systems. Contrary to other works on formal verification of TLM models written in SystemC, our approach targets fully asynchronous TLM without the restrictions imposed by the SystemC simulation semantics. We argue that this approach leads to more dependable models.

Olivier Ponsini, Wendelin Serwe
A Rigorous Approach to Networking: TCP, from Implementation to Protocol to Service

Despite more than 30 years of research on protocol specification, the major protocols deployed in the Internet, such as TCP, are described only in informal prose RFCs and executable code. In part this is because the scale and complexity of these protocols make them challenging targets for formalization.

In this paper we show how these difficulties can be addressed. We develop a high-level specification for TCP and the Sockets API, expressed in the HOL proof assistant, describing the byte-stream service that TCP provides to users. This complements our previous low-level specification of the protocol internals, and makes it possible for the first time to state what it means for TCP to be correct: that the protocol implements the service. We define a precise abstraction function between the models and validate it by testing, using verified testing infrastructure within HOL. This is a pragmatic alternative to full proof, providing reasonable confidence at a relatively low entry cost.

Together with our previous validation of the low-level model, this shows how one can rigorously tie together concrete implementations, low-level protocol models, and specifications of the services they claim to provide, dealing with the complexity of real-world protocols throughout.

Tom Ridge, Michael Norrish, Peter Sewell

Session 9. Constraint Analysis

Constraint Prioritization for Efficient Analysis of Declarative Models

The declarative modeling language Alloy and its automatic analyzer provide an effective tool-set for building designs of systems and checking their properties. The Alloy Analyzer performs bounded exhaustive analysis using off-the-shelf SAT solvers. The analyzer’s performance hinges on the complexity of the models and so far, its feasibility has been shown only within limited bounds. We present a novel optimization technique that defines program slicing for declarative models and enables efficient analyses exploiting partial solutions. We present an algorithm that computes transient slices for Alloy models by partitioning them into a base and a derived slice. A satisfying solution to the base slice is systematically extended to generate a solution for the entire model, while unsatisfiability of the base implies unsatisfiability of the entire model.

By generating slices, our approach enables constraint prioritization, where the base slice assumes higher priority than the derived slice. Compared to the complete model, base and derived slices represent smaller and, ideally, simpler sub-problems, which, in turn, enables efficient analyses for the underlying SAT solvers. Our approach analyzes the structure of a given model and constructs a set of candidate slicing criteria. Our prototype tool, Kato, performs a small-scope analysis for each criterion to determine whether declarative slicing optimization provides any performance gain and, if so, to select a criterion that is likely to provide an optimal performance enhancement. The experimental results show that, with declarative slicing, it is possible to achieve significant improvements compared to the Alloy Analyzer.

Engin Uzuncaova, Sarfraz Khurshid
Finding Minimal Unsatisfiable Cores of Declarative Specifications

Declarative specifications exhibit a variety of problems, such as inadvertently overconstrained axioms and underconstrained conjectures, that are hard to diagnose with model checking and theorem proving alone. Recycling core extraction is a new coverage analysis that pinpoints an irreducible unsatisfiable core of a declarative specification. It is based on resolution refutation proofs generated by resolution engines, such as SAT solvers and resolution theorem provers. The extraction algorithm is described, and proved correct, for a generalized specification language with a regular translation to the input logic of a resolution engine. It has been implemented for the Alloy language and evaluated on a variety of specifications, with promising results.

Emina Torlak, Felix Sheng-Ho Chang, Daniel Jackson
Precise Interval Analysis vs. Parity Games

In [8], a practical algorithm for precise interval analysis is provided for which, however, no non-trivial upper complexity bound is known. Here, we present a lower bound by showing that precise interval analysis is at least as hard as computing the sets of winning positions in parity games. Our lower-bound proof relies on an encoding of parity games into systems of particular integer equations. Moreover, we present a simplification of the algorithm for integer systems from [8]. For the given encoding of parity games, the new algorithm provides another algorithm for parity games which is almost as efficient as the discrete strategy improvement algorithm by Vöge and Jurdziński [17].

Thomas Gawlitza, Helmut Seidl

Session 10. Design

Introducing Objects through Refinement

We present a strategy for using the existing theory of class refinement in Object-Z to introduce an arbitrary number of object instances into a specification. Since class refinement applies only to a single class, the key part of the strategy is the use of references to objects of the class being refined. Once object instances have been introduced through local class refinements in this way, they can be turned into foreign class instantiations through the application of straight-forward equivalence preserving transformations. We introduce a set of logical classifiers to allow for the precise determination of which parts of the specification logic must be moved into the foreign class.

Tim McComb, Graeme Smith
Masking Faults While Providing Bounded-Time Phased Recovery

We focus on synthesis techniques for transforming existing fault-intolerant real-time programs to fault-tolerant programs that provide phased recovery. A fault-tolerant program is one that satisfies its safety and liveness specifications as well as timing constraints in the presence of faults. We argue that in many commonly considered programs (especially in mission-critical systems), when faults occur, simple recovery to the program’s normal behavior is necessary, but not sufficient. For such programs, it is necessary that recovery is accomplished in a sequence of phases, each ensuring that the program satisfies certain properties. In this paper, we show that, in general, synthesizing fault-tolerant real-time programs that provide bounded-time phased recovery is NP-complete. We also characterize a sufficient condition for cases where synthesizing fault-tolerant real-time programs that provide bounded-time phased recovery can be accomplished in polynomial time in the size of the input program’s region graph.

Borzoo Bonakdarpour, Sandeep S. Kulkarni
Towards Consistent Specifications of Product Families

Addressing the challenges faced today during the development of multi-functional system families, we suggest a service-oriented approach to formally specifying the functionality and, in particular, the functional variability already in the requirements engineering phase. In this paper, we precisely define the underlying concepts, such as the notion of individual services, the combination of services, inter-service dependencies, and variability. Thereby, we especially focus on establishing the consistency of the overall specification. To that end, we formally define conflicts between requirements and describe how they can be detected and resolved based on the introduced formal concepts.

Alexander Harhurin, Judith Hartmann

Session 11. Industry Day

Formal Methods for Trustworthy Skies: Building Confidence in the Security of Aircraft Assets Distribution

A recent application in commercial aviation is the electronic distribution of loadable software parts and data. Its safe and beneficial use, however, warrants that information security vulnerabilities are analyzed and mitigated at an adequate assurance level. In our prior work, we have identified security threats and assurance requirements for a generic aircraft asset distribution system (AADS). In this paper, we focus on supporting analytical processes to address security vulnerabilities, and we describe our experiences in applying formal methods to AADS.

Scott Lintelman, Richard Robinson, Mingyan Li, Krishna Sampigethaya
An Industrial Case: Pitfalls and Benefits of Applying Formal Methods to the Development of a Network-Centric RTOS

This paper describes a project to develop a network-centric RTOS from scratch using formal methods. The (initial) purpose of the project was to get acquainted with the use of formal methods for software engineering and to obtain a trustworthy RTOS as a component for building networked embedded systems. The work was done by a small, distributed team that had no prior experience with formal methods and had a small budget. The outcome is that the use of formal methods is most useful as an architectural design method, perhaps more than as a means of formally verifying software code. The resulting software has many properties that were not anticipated at the beginning and would likely not have been achieved without the use of formal methods.

Eric Verhulst, Gjalt de Jong, Vitaliy Mezhuyev
Software Engineering with Formal Methods: Experiences with the Development of a Storm Surge Barrier Control System

This paper revisits the experiences with the use of formal methods in the development of the control system for the Maeslant Kering. The Maeslant Kering is the movable barrier which has to protect Rotterdam from flooding while, at almost the same time, not restricting shipping traffic to the port of Rotterdam. The control system, called BOS, completely autonomously decides about closing and opening of the barrier and, when necessary, also performs these tasks without human intervention. BOS is a safety-critical software system of the highest Safety Integrity Level according to the IEC 61508 standard. One of the reliability increasing techniques used during its development is formal methods. This paper revisits the earlier published experiences with the project now that the system has been in operation for ten years and has performed its first autonomous barrier operation on November 11th, 2007.

Klaas Wijbrans, Franc Buve, Robin Rijkers, Wouter Geurts
Application of a Formal Specification Language in the Development of the “Mobile FeliCa” IC Chip Firmware for Embedding in Mobile Phone

We have adopted a formal specification language in the development of the firmware of the “Mobile FeliCa” IC chip, and have achieved successful results and confirmed its effectiveness.

Taro Kurita, Miki Chiba, Yasumasa Nakatsugawa
Safe and Reliable Metro Platform Screen Doors Control/Command Systems

In this article we present some recent applications of the B formal method to the development of safety-critical systems. These SIL3/SIL4 compliant systems have their functional specification based on a formal model. This model has been proved, guaranteeing correct-by-construction behaviour of the system in the absence of failure of its components. The constructive process used during system specification and design leads to a high-quality system, which has been qualified by the French authorities.

Thierry Lecomte
Backmatter
Metadata
Title
FM 2008: Formal Methods
Edited by
Jorge Cuellar
Tom Maibaum
Kaisa Sere
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-68237-0
Print ISBN
978-3-540-68235-6
DOI
https://doi.org/10.1007/978-3-540-68237-0
