2011 | Book

FM 2011: Formal Methods

17th International Symposium on Formal Methods, Limerick, Ireland, June 20-24, 2011. Proceedings

Edited by: Michael Butler, Wolfram Schulte

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 17th International Symposium on Formal Methods, FM 2011, held in Limerick, Ireland, in June 2011.

The 29 revised full papers presented together with 3 invited talks were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on cyber-physical systems, runtime analysis, case studies/tools, experience, program compilation and transformation, security, process algebra, education, concurrency, dynamic structures, and model checking.

Table of Contents

Frontmatter

Invited Talks

Model Integration and Cyber Physical Systems: A Semantics Perspective

Recent attention to Cyber Physical Systems (CPS) is driven by the need for deeper integration of design disciplines that dominate physical and computational domains. Consequently, heterogeneity is the norm as well as the main challenge in CPS design: components and systems are modeled using multiple physical, logical, functional and non-functional modeling aspects. The scope of relevant design domains includes (1) physical domains, such as structure, mechanical dynamics, thermal, propulsion, fluid, electrical, acoustics/vibration, and (2) computational/networking domains, such as system control, sensors, health management, mission management, communication. However, the practice of multi-modeling – using established domain-specific modeling languages and tools independently in the design process – is insufficient. Modeling and analyzing cross-domain interactions among physical and computational/networking domains and understanding the effects of heterogeneous abstraction layers in the design flow are a fundamental part of CPS design theories. I will cast this challenge as a model integration problem and discuss solutions for capturing interdependencies across the modeling domains using constructs for meta-model composition and integration.

Janos Sztipanovits
Some Thoughts on Behavioral Programming

The talk starts from a dream/vision paper I published in 2008, whose title is a play on that of John Backus’ famous Turing Award Lecture (and paper). I will propose that — or rather ask whether — programming can be made a lot closer to the way we humans think about dynamics, and the way we somehow manage to get others (e.g., our children, our employees, etc.) to do what we have in mind. Technically, the question is whether we can liberate programming from its three main straitjackets: (1) having to directly produce a precise artifact in some language; (2) having actually to produce two separate artifacts (the program and the requirements) and having then to pit one against the other; (3) having to program each piece/part/object of the system separately. The talk will then get a little more technical, providing some evidence of feasibility of the dream, via LSCs and the play-in/play-out approach to scenario-based programming, and its more recent Java variant. The entire body of work around these ideas can be framed as a paradigm, which we call behavioral programming.

David Harel
The Only Way Is Up

We draw an analogy between biology and computer hardware systems and argue for the need of a tower of abstractions to tame the complexity of living systems. Much as in hardware design, where engineers use a tower of abstractions to produce the most complex man-made systems, we stress that in reverse engineering of biological systems, only by using a tower of abstractions will we be able to understand the “program of life”.

Jasmin Fisher, Nir Piterman, Moshe Y. Vardi

Cyber-Physical Systems

Does It Pay to Extend the Perimeter of a World Model?

Will the cost for observing additional real-world phenomena in a world model be recovered by the resulting increase in the quality of the implementations based on the model? We address the quest for optimal models in light of industrial practices in systems engineering, where the development of control strategies is based on combined models of a system and its environment. We introduce the notion of remorsefree dominance between strategies, where one strategy is preferred over another if it outperforms the other strategy in comparable situations, even if neither strategy is guaranteed to achieve all objectives. We call a world model optimal if it is sufficiently precise to allow for a remorsefree dominating strategy that is guaranteed to remain dominant even if the world model is refined. We present algorithms for the automatic verification and synthesis of dominant strategies, based on tree automata constructions from reactive synthesis.

Werner Damm, Bernd Finkbeiner
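
To make the notion concrete, here is one plausible formalization of remorsefree dominance, stated in our own notation (strategies s and t, environment behaviours e, objective φ); it illustrates the idea rather than reproducing the paper's exact definitions.

```latex
% Hedged sketch: one way to formalize remorsefree dominance.
% Strategies s, t resolve the system's choices; e ranges over
% environment behaviours; \varphi is the objective.
\[
  s \succeq t \;\iff\; \forall e.\;
    \big( (t \parallel e) \models \varphi \;\Rightarrow\; (s \parallel e) \models \varphi \big)
\]
% A world model would then be "optimal" when it admits a strategy that is
% remorsefree dominant and stays dominant under refinements of the model.
```
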
System Verification through Program Verification

We present an automatable approach to verify that a system satisfies its requirements by verification of the program that controls the system. The approach can be applied if the interaction of the program with the system hardware can be faithfully described by a table relating domain phenomena and program variables. We show the applicability of the approach with a case study based on a real-world system.

Daniel Dietsch, Bernd Westphal, Andreas Podelski
Adaptive Cruise Control: Hybrid, Distributed, and Now Formally Verified

Car safety measures can be most effective when the cars on a street coordinate their control actions using distributed cooperative control. While each car optimizes its navigation planning locally to ensure the driver reaches his destination, all cars coordinate their actions in a distributed way in order to minimize the risk of safety hazards and collisions. These systems control the physical aspects of car movement using cyber technologies like local and remote sensor data and distributed V2V and V2I communication. They are thus cyber-physical systems. In this paper, we consider a distributed car control system that is inspired by the ambitions of the California PATH project, the CICAS system, SAFESPOT and PReVENT initiatives. We develop a formal model of a distributed car control system in which every car is controlled by adaptive cruise control. One of the major technical difficulties is that faithful models of distributed car control have both distributed systems and hybrid systems dynamics. They form distributed hybrid systems, which makes them very challenging for verification. In a formal proof system, we verify that the control model satisfies its main safety objective and guarantees collision freedom for arbitrarily many cars driving on a street, even if new cars enter the lane from on-ramps or multi-lane streets. The system we present is in many ways one of the most complicated cyber-physical systems that has ever been fully verified formally.

Sarah M. Loos, André Platzer, Ligia Nistor
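
To give a flavour of the safety constraints such a verification must establish, the condition below is a generic sufficient condition for collision freedom derived from elementary kinematics, not the paper's exact invariant: a follower at position x_f with velocity v_f stays safe behind a leader at x_l with velocity v_l, assuming maximum acceleration A, braking rate b for both cars, and controller reaction time ε.

```latex
% Hedged sketch (textbook kinematics, not the paper's invariant): worst case,
% the leader brakes at rate b immediately, while the follower may accelerate
% at A for up to \varepsilon seconds before it starts braking at rate b.
\[
  x_l - x_f + \frac{v_l^2}{2b}
  \;>\;
  v_f\,\varepsilon + \frac{A\,\varepsilon^2}{2}
  + \frac{(v_f + A\,\varepsilon)^2}{2b}
\]
```

The right-hand side bounds the follower's worst-case stopping distance; the left-hand side is the current gap plus the leader's guaranteed braking distance. A controller that re-establishes a condition of this shape at every control cycle maintains safety invariantly.
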

Runtime Analysis

TraceContract: A Scala DSL for Trace Analysis

In this paper we describe TraceContract, an API for trace analysis, implemented in the Scala programming language. We argue that for certain forms of trace analysis the best weapon is a high-level programming language augmented with constructs for temporal reasoning. A trace is a sequence of events, which may for example be generated by a running program, instrumented appropriately to generate events. The API supports writing properties in a notation that combines an advanced form of data-parameterized state machines with temporal logic. The implementation utilizes Scala's support for defining internal Domain Specific Languages (DSLs). Furthermore, Scala's combination of object-oriented and functional programming features, including partial functions and pattern matching, makes it an ideal host language for such an API.

Howard Barringer, Klaus Havelund
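
As a concrete illustration of the style of DSL the abstract describes, here is a minimal, self-contained Scala sketch of a data-parameterized trace monitor built from partial functions and pattern matching. The names (Event, State, Monitor) and combinators are our own illustrative choices, not TraceContract's actual API.

```scala
// Minimal sketch of an internal Scala DSL for trace analysis (illustrative,
// not the TraceContract API). A state reacts to events via a partial
// function; "hot" states are pending obligations that must be discharged
// before the end of the trace.
object TraceDsl {
  sealed trait Event
  final case class Open(file: String) extends Event
  final case class Close(file: String) extends Event

  abstract class State(val hot: Boolean = false) {
    def step: PartialFunction[Event, Set[State]]
  }

  class Monitor(initial: State) {
    private var states: Set[State] = Set(initial)
    def verify(e: Event): Unit =
      states = states.flatMap(s =>
        if (s.step.isDefinedAt(e)) s.step(e) else Set(s))  // unmatched events leave a state unchanged
    def endOk: Boolean = states.forall(!_.hot)             // no pending obligations at end of trace
  }

  // Data-parameterized obligation: Open(f) must eventually be followed by Close(f).
  def awaitClose(f: String): State = new State(hot = true) {
    def step: PartialFunction[Event, Set[State]] = { case Close(`f`) => Set.empty }
  }
  val everyOpenClosed: State = new State() {
    def step: PartialFunction[Event, Set[State]] =
      { case Open(f) => Set(everyOpenClosed, awaitClose(f)) }
  }

  def main(args: Array[String]): Unit = {
    val m = new Monitor(everyOpenClosed)
    Seq(Open("a.txt"), Open("b.txt"), Close("a.txt")).foreach(m.verify)
    println(s"trace accepted: ${m.endOk}")  // false: b.txt is never closed
  }
}
```
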
Using Debuggers to Understand Failed Verification Attempts

Automatic program verification allows programmers to detect program errors at compile time. When an attempt to automatically verify a program fails, the reason for the failure is often difficult to understand. Many program verifiers provide a counterexample of the failed attempt. These counterexamples are usually very complex and therefore not amenable to manual inspection. Moreover, the counterexample may be invalid, possibly misleading the programmer. We present a new approach to help the programmer understand failed verification attempts by generating an executable program that reproduces the failed verification attempt described by the counterexample. The generated program (1) can be executed within the program debugger to systematically explore the counterexample, (2) encodes the program semantics used by the verifier, which allows us to detect errors in specifications as well as in programs, and (3) contains runtime checks for all specifications, which allows us to detect spurious errors. Our approach is implemented within the Spec# programming system.

Peter Müller, Joseph N. Ruskiewicz
Sampling-Based Runtime Verification

The literature on runtime verification mostly focuses on event-triggered solutions, where a monitor is invoked by every change in the state of the system and evaluates properties of the system. This constant invocation introduces two major drawbacks to the system under scrutiny at run time: (1) significant overhead and (2) unpredictability. To circumvent the latter drawback, in this paper, we introduce a time-triggered approach, where the monitor frequently takes samples from the system to analyze the system's health. We propose formal semantics of sampling-based monitoring and discuss how to optimize the sampling period using minimum auxiliary memory. We show that such optimization is NP-complete and consequently introduce a mapping to Integer Linear Programming. Experiments on benchmark applications show that our approach introduces bounded overhead and effectively reduces involvement of the monitor at run time using negligible auxiliary memory.

Borzoo Bonakdarpour, Samaneh Navabpour, Sebastian Fischmeister
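
A minimal sketch of the time-triggered idea, under our own assumptions (the monitored variable, the checked invariant, and the 10 ms period are all illustrative): instead of instrumenting every state change, a separate monitor samples the program's state at a fixed rate.

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Illustrative time-triggered monitor: samples shared state periodically
// rather than being invoked on every state change.
object SamplingMonitor {
  @volatile var x: Int = 0  // state of the program under scrutiny

  def main(args: Array[String]): Unit = {
    val sampler = Executors.newSingleThreadScheduledExecutor()
    val check: Runnable = () =>
      if (x > 100) println(s"violation sampled: x = $x")  // property: x <= 100
    sampler.scheduleAtFixedRate(check, 0, 10, TimeUnit.MILLISECONDS)

    for (i <- 1 to 200) { x = i; Thread.sleep(1) }  // the monitored computation
    sampler.shutdown()
  }
}
```

The obvious caveat is that a sample can miss a short-lived violation between two sampling points; choosing the longest sound sampling period, with a bounded amount of auxiliary history, is precisely the optimization problem the abstract describes as NP-complete.
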

Case Studies / Tools

Specifying and Verifying the SYNERGY Reconfiguration Protocol with LOTOS NT and CADP

Dynamic software systems that provide the ability to reconfigure themselves seem to be reaching a complexity that suggests the use of formal methods in the design process, helping system designers master that complexity, better understand their systems, find and correct bugs rapidly, and ultimately build strong confidence in the correctness of their systems. As an illustration of this trend, this paper reports on our experience with the co-design and specification of the reconfiguration protocol of a component-based platform, intended as the foundation for building robust dynamic systems. We wrote the specification in LOTOS NT, whose evolution from the E-LOTOS standard proved especially suited to this work. We extensively verified the protocol using the CADP toolbox. This formal analysis helped to detect several issues which enabled us to correct various parts of the protocol. The protocol is implemented in the SYNERGY virtual machine, the prototype of an ongoing research programme on reconfigurable and robust component-aware virtual machines.

Fabienne Boyer, Olivier Gruber, Gwen Salaün
Formal Development of a Tool for Automated Modelling and Verification of Relay Interlocking Systems

This paper describes a tool for formal modelling of relay interlocking systems and explains how it has been formally developed in a stepwise manner using the RAISE method. The developed tool takes the circuit diagrams of a relay interlocking system as input and gives as result a state transition system modelling the dynamic behaviour of the interlocking system, i.e. the dynamic behaviour of the circuits depicted in the diagrams. The resulting state transition system (model) is expressed in the SAL language such that the SAL model checker can be used to model check required properties of this model of the interlocking system. The tool has been applied to the circuit diagrams of Stenstrup station in Denmark, and the resulting formal model has then been model checked to satisfy a number of required safety properties.

Anne E. Haxthausen, Andreas A. Kjær, Marie Le Bliguet
Relational Reasoning via SMT Solving

This paper explores the idea of using a Satisfiability Modulo Theories (SMT) solver for proving properties of relational specifications. The goal is to automatically establish or refute consistency of a set of constraints expressed in a first-order relational logic, namely Alloy, without limiting the analysis to a bounded scope. Existing analysis of relational constraints – as performed by the Alloy Analyzer – is based on SAT solving and thus requires finitizing the set of values that each relation can take. Our technique complements this approach by axiomatizing all relational operators in a first-order SMT logic and taking advantage of the background theories supported by SMT solvers. Consequently, it can potentially prove that a formula is a tautology – a capability completely missing from the Alloy Analyzer – and generate a counterexample when the proof fails. We also report on our experiments of applying this technique to various systems specified in Alloy.

Aboubakr Achraf El Ghazi, Mana Taghdiri
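
To illustrate what axiomatizing relational operators in a first-order SMT logic can look like, the sketch below encodes binary relations as boolean-valued functions; the symbols and the choice of operators (join, transpose, subset) are our own, not the paper's actual encoding.

```latex
% Hedged sketch: relations as uninterpreted boolean-valued functions,
% r : A x B -> Bool and s : B x C -> Bool, with Alloy's operators axiomatized:
\begin{align*}
  \text{join (r.s):}      \quad & \forall a, c.\; (r.s)(a,c) \iff \exists b.\; r(a,b) \land s(b,c) \\
  \text{transpose (~r):}  \quad & \forall a, b.\; ({\sim}r)(b,a) \iff r(a,b) \\
  \text{subset (r in s):} \quad & \forall a, b.\; r(a,b) \Rightarrow s(a,b)
\end{align*}
% With such axioms, an SMT solver can attempt an unbounded proof of an
% Alloy assertion instead of checking it only within a finite scope.
```
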
Building VCL Models and Automatically Generating Z Specifications from Them

VCL is a visual and formal language for abstract specification of software systems. Its novelty lies in its capacity to describe predicates visually. This paper presents work-in-progress on a tool for VCL; the tool version presented here supports the VCL notations of structural and assertion diagrams (a subset of the whole VCL suite), enabling the generation of Z specifications from them.

Nuno Amálio, Christian Glodt, Pierre Kelsen

Experience

The 1st Verified Software Competition: Experience Report

We, the organizers and participants, report our experiences from the 1st Verified Software Competition, held in August 2010 in Edinburgh at the VSTTE 2010 conference.

Vladimir Klebanov, Peter Müller, Natarajan Shankar, Gary T. Leavens, Valentin Wüstholz, Eyad Alkassar, Rob Arthan, Derek Bronish, Rod Chapman, Ernie Cohen, Mark Hillebrand, Bart Jacobs, K. Rustan M. Leino, Rosemary Monahan, Frank Piessens, Nadia Polikarpova, Tom Ridge, Jan Smans, Stephan Tobies, Thomas Tuerk, Mattias Ulbrich, Benjamin Weiß

Program Compilation and Transformation

Validated Compilation through Logic

To reason about programs written in a language, one needs to define its formal semantics, derive a reasoning mechanism (e.g. a program logic), and maximize the proof automation. Unfortunately, a compiler may involve multiple languages and phases; it is tedious and error-prone to do all of this for each language and each phase.

We present an approach based on the use of higher order logic to ease this burden. All the Intermediate Representations (IRs) are special forms of the logic of a prover such that IR programs can be reasoned about directly in the logic. We use this technique to construct and validate an optimizing compiler. New techniques are used to compile-with-proof all the programs into the logic, e.g. a logic specification is derived automatically from the monad interpretation of a piece of assembly code.

Guodong Li
Certification of Safe Polynomial Memory Bounds

In previous works, we have developed several algorithms for inferring upper bounds to heap and stack consumption for a simple functional language called Safe. The bounds inferred for a particular recursive function with n arguments take the form of symbolic n-ary functions from (ℝ⁺)ⁿ to ℝ⁺ relating the input argument sizes to the number of cells or words respectively consumed in the heap and in the stack. Most frequently, these functions are multivariate polynomials of any degree, although exponential and other functions can be inferred in some cases.

Certifying memory bounds is important because the analyses could be unsound or wrongly implemented. But the certifying process need not be tied to the method used to infer those bounds. Although the motivation for the work presented here is certifying the bounds inferred by our compiler, we have developed a certifying method which could equally be applied to bounds computed by hand.

The certification process is divided into two parts: (a) an off-line part consisting of proving the soundness of a set of proof rules. This part is independent of the program being certified, and its correctness is established once and for all by using the proof assistant Isabelle/HOL; and (b) a compile-time, program-specific part in which the proof rules are applied to a particular program and their premises proved correct.

The key idea for the first part is proving an Isabelle/HOL theorem for each syntactic construction of the language, relating the symbolic information asserted by the proof rule to the dynamic properties of heap and stack consumption satisfied at runtime. For the second part, we use a mathematical tool for proving instances of Tarski's decision problem on quantified formulas in real closed fields.

Javier de Dios, Ricardo Peña
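
To see where Tarski's decision problem enters, note that comparing two candidate polynomial bounds over all nonnegative argument sizes is a quantified sentence over the reals; a schematic instance, with p₁ and p₂ standing for arbitrary inferred and claimed bounds:

```latex
% Hedged sketch: certifying that a claimed bound p2 dominates a bound p1
% over nonnegative argument sizes is an instance of Tarski's problem:
\[
  \forall x_1 \ldots x_n \in \mathbb{R}.\;
    \Big( \bigwedge_{i=1}^{n} x_i \ge 0 \Big)
    \;\Rightarrow\; p_2(x_1,\ldots,x_n) - p_1(x_1,\ldots,x_n) \ge 0
\]
% Sentences of this form are decidable by quantifier elimination over real
% closed fields, independently of how the bounds were originally inferred.
```
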
Relational Verification Using Product Programs

Relational program logics are formalisms for specifying and verifying properties about two programs or two runs of the same program. These properties range from correctness of compiler optimizations or equivalence between two implementations of an abstract data type, to properties like non-interference or determinism. Yet the current technology for relational verification remains underdeveloped. We provide a general notion of product program that supports a direct reduction of relational verification to standard verification. We illustrate the benefits of our method with selected examples, including non-interference, standard loop optimizations, and a state-of-the-art optimization for incremental computation. All examples have been verified using the Why tool.

Gilles Barthe, Juan Manuel Crespo, César Kunz
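
A toy illustration of the reduction the abstract describes, with all names and the example property chosen by us: to check non-interference of a program p (its low result must not depend on the high input), one builds a product that runs two copies of p in lockstep, so that an ordinary non-relational verifier can carry the relational invariant acc1 == acc2 across the single shared loop.

```scala
// Illustrative product construction for non-interference (our toy example,
// not the paper's formal definition).
object ProductProgram {
  // Original program: low result, with control flow independent of `high`.
  def p(low: Int, high: Int): Int = {
    var acc = low
    var i = 0
    while (i < 3) { acc += 1; i += 1 }
    acc
  }

  // Product of p with itself: the two copies share one loop, executing in
  // lockstep, so the invariant acc1 == acc2 holds at every iteration and
  // standard (non-relational) verification suffices.
  def pTimesP(low: Int, high1: Int, high2: Int): (Int, Int) = {
    var acc1 = low; var acc2 = low
    var i = 0
    while (i < 3) { acc1 += 1; acc2 += 1; i += 1 }
    (acc1, acc2)
  }

  def main(args: Array[String]): Unit = {
    val (o1, o2) = pTimesP(low = 5, high1 = 0, high2 = 42)
    assert(o1 == o2, "non-interference violated")  // relational postcondition
    println("low outputs agree")
  }
}
```
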

Security

Specifying Confidentiality in Circus

This paper presents an approach for extending the Circus formalism to accommodate information flow security concerns. Working with the semantics of Circus, we introduce a notation for specifying which aspects of Circus processes are confidential and should not be revealed to low-level users. We also describe a novel procedure for verifying that a process satisfies its confidentiality properties.

Michael J. Banks, Jeremy L. Jacob
Formally Verifying Isolation and Availability in an Idealized Model of Virtualization

Hypervisors allow multiple guest operating systems to run on shared hardware, and offer a compelling means of improving the security and the flexibility of software systems. We formalize in the Coq proof assistant an idealized model of a hypervisor, and formally establish that the hypervisor ensures strong isolation properties between the different operating systems, and guarantees that requests from guest operating systems are eventually attended to.

Gilles Barthe, Gustavo Betarte, Juan Diego Campo, Carlos Luna
The Safety-Critical Java Memory Model: A Formal Account

Safety-Critical Java (SCJ) is a version of Java for real-time programming that facilitates certification of implementations of safety-critical systems. It is the result of an international effort involving industry and academia. What we provide here is, as far as we know, the first formalisation of the SCJ model of memory regions. We use the Unifying Theories of Programming (UTP) to enable the integration of our theory with refinement models for object-orientation and concurrency. In developing the SCJ theory, we also make a contribution to the UTP by providing a general theory of invariants (of which the SCJ theory is an instance). Our results are a first essential ingredient to formalise the novel programming paradigm embedded in SCJ, and enable the justification and development of reasoning techniques based on refinement.

Ana Cavalcanti, Andy Wellings, Jim Woodcock

Process Algebra

Failure-Divergence Refinement of Compensating Communicating Processes

Compensating CSP (cCSP) extends CSP for specification and verification of long running transactions. The original cCSP is a modest extension to a subset of CSP that does not consider non-deterministic choice, synchronized composition, and recursion. There are a few further extensions. However, it remains a challenge to develop a fixed-point theory of process refinement in cCSP. This paper provides a complete solution to this problem and develops a theory of cCSP, corresponding to the theory of CSP, so that the verification techniques and their tools, such as FDR, can be extended for compensating processes.

Zhenbang Chen, Zhiming Liu, Ji Wang
Termination without ✓ in CSP

We recast each of the three standard denotational models of CSP, namely Traces, Stable Failures and Failures-Divergences, by replacing the ✓ pseudo-event in each of them by an explicit representation of the termination traces of a process. The resulting recast models have simpler axiomatisations than their respective original counterparts and admit formulations of the compositional semantics of the basic processes and operators of CSP which are arguably clearer and therefore more intuitively appealing than those in the original models. Furthermore, the recast models facilitate the resolution of certain longstanding problematic issues, such as the offering of termination in an external choice alongside other behaviours, without resort to any incongruous special-casing which might compromise their regularity.

Steve Dunne
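
A hedged sketch of the recasting in the simplest (traces) setting, in our own notation rather than the paper's: a process is modelled as a pair of a trace set and a termination-trace set, so no ✓ pseudo-event is needed in the alphabet.

```latex
% Hedged sketch (our notation): model P as (traces(P), term(P)) with
% traces(P) \subseteq \Sigma^* prefix-closed and term(P) \subseteq traces(P)
% the traces after which P may terminate. For example:
\begin{align*}
  \mathit{SKIP} &= (\{\langle\rangle\}, \{\langle\rangle\}) \\
  \mathit{term}(P \,;\, Q) &=
    \{\, s \frown t \mid s \in \mathit{term}(P),\ t \in \mathit{term}(Q) \,\}
\end{align*}
% Since no checkmark pseudo-event appears in \Sigma, termination can be
% offered in an external choice alongside ordinary events without special-casing.
```
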
Timed Migration and Interaction with Access Permissions

We introduce and study a process algebra able to model systems composed of processes (agents) which may migrate within a distributed environment comprising a number of distinct locations. Two processes may communicate if they are present in the same location and, in addition, have appropriate access permissions to communicate over a channel. Access permissions are dynamic, and processes can acquire new access permissions or lose existing permissions while migrating from one location to another. Timing constraints coordinate and control both the communication between processes and migration between locations. We completely characterise those situations in which a process is always guaranteed to possess safe access permissions. The consequences of such a result are twofold. First, we are able to validate systems where one does not need to check (at least partially) access permissions, as they are guaranteed not to be violated, improving the efficiency of implementations. Second, one can design systems in which processes are not blocked (deadlocked) because of the lack of dynamically changing access permissions.

Gabriel Ciobanu, Maciej Koutny

Education

From a Community of Practice to a Body of Knowledge: A Case Study of the Formal Methods Community

A Body of Knowledge (BoK) is an ontology for a particular professional domain. A Community of Practice (CoP) is the collection of people developing such knowledge. In this paper we explore these concepts in the context of the formal methods community in general and the Z notation community, as supported by the Z User Group, in particular. The existing SWEBOK Software Engineering Body of Knowledge is considered with respect to formal methods, and a high-level model for the possible structure of a BoK is provided using the Z notation.

Jonathan P. Bowen, Steve Reeves

Concurrency

Verifying Linearisability with Potential Linearisation Points

Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread-local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time; the latter permits dispensing with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV.

John Derrick, Gerhard Schellhorn, Heike Wehrheim
Refinement-Based Verification of Local Synchronization Algorithms

Synchronization algorithms are mandatory for simulating local computation models of distributed algorithms. Correctness of these algorithms is therefore crucial, because it gives confidence that local computations are simulated as designed and do not behave harmfully. However, these algorithms are considered very complex to prove since they integrate both distributed and probabilistic aspects. We derive proofs of synchronization algorithms relying on the correct-by-construction paradigm, supported by a progressive and incremental process controlled by refinement techniques. We illustrate our approach with examples like the handshake and LC1 algorithms. These algorithms are designed for an asynchronous distributed network of anonymous processes which use message passing as the model of communication.

Dominique Méry, Mohamed Mosbah, Mohamed Tounsi
Simulating Concurrent Behaviors with Worst-Case Cost Bounds

Modern software systems are increasingly being developed for deployment on a range of architectures. For this purpose, it is interesting to capture aspects of low-level deployment concerns in high-level modeling languages. In this paper, an executable object-oriented modeling language is extended with resource-restricted deployment components. To analyze model behavior, a formal methodology is proposed to assess resource consumption, which balances the scalability of the method and the reliability of the obtained results. The approach applies to a general notion of resource, including traditional cost measures (e.g., time, memory) as well as concurrency-related measures (e.g., requests to a server, spawned tasks). The main idea of our approach is to combine reliable (but expensive) worst-case cost analysis of statically predictable parts of the model with fast (but inherently incomplete) simulations of the concurrent aspects in order to avoid the state-space explosion. The approach is illustrated by the analysis of memory consumption.

Elvira Albert, Samir Genaim, Miguel Gómez-Zamalloa, Einar Broch Johnsen, Rudolf Schlatte, S. Lizeth Tapia Tarifa

Dynamic Structures

Automatically Refining Partial Specifications for Program Verification

Automatically verifying heap-manipulating programs is a challenging task, especially when dealing with complex data structures with strong invariants, such as sorted lists and AVL/red-black trees. The verification process can greatly benefit from human assistance through specification annotations, but this process requires intellectual effort from users and is error-prone. In this paper, we propose a new approach to program verification that allows users to provide only partial specification to methods. Our approach will then refine the given annotation into a more complete specification by discovering missing constraints. The discovered constraints may involve both numerical and multi-set properties that could be later confirmed or revised by users. We further augment our approach by requiring only partial specification to be given for primary methods. Specifications for loops and auxiliary methods can then be systematically discovered by our augmented mechanism, with the help of information propagated from the primary methods. Our work is aimed at verifying beyond shape properties, with the eventual goal of analysing full functional properties for pointer-based data structures. Initial experiments have confirmed that we can automatically refine partial specifications with non-trivial constraints, thus making it easier for users to handle specifications with richer properties.

Shengchao Qin, Chenguang Luo, Wei-Ngan Chin, Guanhua He
Structured Specifications for Better Verification of Heap-Manipulating Programs

Conventional specifications typically have a flat structure that is based primarily on the underlying logic. Such specifications lack structures that could have provided better guidance to the verification process. In this work, we propose to add three new structures to a specification framework for separation logic to achieve a more precise and better guided verification for pointer-based programs. The newly introduced structures empower users with more control over the verification process in the following ways: (i) case analysis can be invoked to take advantage of disjointness conditions in the logic; (ii) early, as opposed to late, instantiation can minimise the use of existential quantification; (iii) formulae that are staged provide better reuse of the verification process.

Initial experiments have shown that structured specifications can lead to more precise verification without incurring any performance overhead.

Cristian Gherghina, Cristina David, Shengchao Qin, Wei-Ngan Chin
Verification of Unloadable Modules

Programs in unsafe languages, like C and C++, may dynamically load and unload modules. For example, some operating system kernels support dynamic loading and unloading of device drivers. This causes specific difficulties in the verification of such programs and modules; in particular, it must be verified that no functions or global variables from the module are used after the module is unloaded.

We present the approach we used to add support for loading and unloading modules to our separation-logic-based program verifier VeriFast. Our approach to the specification and verification of function pointer calls, based on parameterizing function types by predicates, is sound in the presence of unloading, but at the same time does not complicate the verification of programs that perform no unloading, and does not require callers to distinguish between function pointers that point into unloadable modules and ones that do not.

We offer a machine-checked formalization and soundness proof and we report on verifying a small kernel-like program using VeriFast. To the best of our knowledge, ours is the first approach for sound modular verification of C programs that load and unload modules.

Bart Jacobs, Jan Smans, Frank Piessens

Model Checking

A Multi-encoding Approach for LTL Symbolic Satisfiability Checking

Formal behavioral specifications written early in the system-design process and communicated across all design phases have been shown to increase the efficiency, consistency, and quality of the system under development. To prevent introducing design or verification errors, it is crucial to test specifications for satisfiability. Our focus here is on specifications expressed in linear temporal logic (LTL).

We introduce a novel encoding of symbolic transition-based Büchi automata and a novel, “sloppy,” transition encoding, both of which result in improved scalability. We also define novel BDD variable orders based on tree decomposition of formula parse trees. We describe and extensively test a new multi-encoding approach utilizing these novel encoding techniques to create 30 encoding variations. We show that our novel encodings translate to significant, sometimes exponential, improvement over the current standard encoding for symbolic LTL satisfiability checking.

Kristin Y. Rozier, Moshe Y. Vardi
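
For context, the standard reduction behind symbolic LTL satisfiability checking (stated here in our own notation) turns satisfiability into a model checking question against a universal model, which is what makes the encoding of symbolic Büchi automata the performance-critical ingredient.

```latex
% Standard reduction: let M_U be the universal model whose computations are
% all infinite sequences over 2^{AP}. Then
\[
  \varphi \text{ is satisfiable} \;\iff\; M_U \not\models \neg\varphi
\]
% Symbolically, one encodes an automaton for \varphi, composes it with M_U,
% and checks language nonemptiness with BDD-based fixpoint computations; the
% paper's 30 encoding variations differ in how the automaton and the check
% are represented.
```
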
On Combining State Space Reductions with Global Fairness Assumptions

Model checking has established itself as an effective system analysis method, as it is capable of proving or disproving properties automatically. Its application to practical systems is, however, limited by state space explosion. Among effective state reduction techniques are symmetry reduction and partial order reduction. Global fairness often plays a vital role in designing self-stabilizing population protocols. It is known that combining fairness and symmetry reduction is nontrivial. In this work, we first show that global fairness, unlike weak/strong fairness, can be combined with symmetry reduction. We extend the PAT model checker with the technique and demonstrate its usability by verifying recently proposed population protocols. Second, we show that partial order reduction is not property-preserving with global fairness.

Shao Jie Zhang, Jun Sun, Jun Pang, Yang Liu, Jin Song Dong
Backmatter
Metadata
Title
FM 2011: Formal Methods
Edited by
Michael Butler
Wolfram Schulte
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-21437-0
Print ISBN
978-3-642-21436-3
DOI
https://doi.org/10.1007/978-3-642-21437-0