
2013 | Book

Integrated Formal Methods

10th International Conference, IFM 2013, Turku, Finland, June 10-14, 2013. Proceedings

Edited by: Einar Broch Johnsen, Luigia Petre

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 10th International Conference on Integrated Formal Methods, IFM 2013, held in Turku, Finland, in June 2013.

The 25 revised full papers presented together with 4 invited papers were carefully reviewed and selected from 84 full paper submissions. The papers cover the spectrum of integrated formal methods, focusing on refinement, integration, translation, verification, reachability and model checking, usability and testing, distributed systems, semantics, and system-level analysis.

Table of Contents

Frontmatter

Invited Paper 1

Refinement, Integration, Translation

Systems Design Guided by Progress Concerns
Abstract
We present Unit-B, a formal method inspired by Event-B and UNITY, for designing systems via step-wise refinement preserving both safety and liveness properties. In particular, we introduce the notion of coarse- and fine-schedules for events, a generalisation of weak- and strong-fairness assumptions. We propose proof rules for reasoning about progress properties related to the schedules. Furthermore, we develop techniques for refining systems by adapting event schedules such that liveness properties are preserved. We illustrate our approach by an example to show that Unit-B developments can be guided by both safety and liveness requirements.
Simon Hudon, Thai Son Hoang
Assume-Guarantee Specifications of State Transition Diagrams for Behavioral Refinement
Abstract
In this paper, we consider extending state transition diagrams (SDs) by new features which add new events, states and transitions. The main goal is to capture when the behavior of a state transition diagram is preserved under such an extension, which we call behavioral refinement. Our behavioral semantics is based on the observable traces of input and output events. We use assume/guarantee specifications to specify both the base SD and the extensions, where assumptions limit the permitted input streams. Based on this, we formalize behavioral refinement and also determine suitable assumptions on the input for the extended SD. We argue that existing techniques for behavioral refinement are limited by only abstracting from newly added events. Instead, we generalize this to new refinement concepts based on the elimination of the added behavior on the trace level. We introduce new refinement relations and show that properties are preserved even when the new features are added.
Christian Prehofer
Translating VDM to Alloy
Abstract
The Vienna Development Method is one of the longest-established formal methods. Initial software design is often best described using implicit specifications, but limited tool support exists to help with the difficult task of validating that such specifications capture their intended meaning. Traditionally, theorem provers are used to prove that specifications are correct, but this process is highly dependent on expert users. Alternatively, model finding has proved useful for the validation of specifications. The Alloy Analyzer is an automated model finder for checking and visualising Alloy specifications. However, to take advantage of the automated analysis of Alloy, model-oriented VDM specifications must be translated into constraint-based Alloy specifications. We describe how a subset of VDM can be translated into Alloy and how assertions can be expressed in VDM and checked by the Alloy Analyzer.
Kenneth Lausdahl
Verification of EB3 Specifications Using CADP
Abstract
EB³ is a specification language for information systems. The core of the EB³ language consists of process algebraic specifications describing the behaviour of the entities in a system, and attribute function definitions describing the entity attributes. The verification of EB³ specifications against temporal properties is of great interest to users of EB³. In this paper, we propose a translation from EB³ to LOTOS NT (LNT for short), a value-passing concurrent language with classical process algebra features. Our translation ensures a one-to-one correspondence between the states and transitions of the labelled transition systems corresponding to the EB³ and LNT specifications. We automated this translation with the EB³2LNT tool, thus equipping the EB³ method with the functional verification features available in the CADP toolbox.
Dimitris Vekris, Frédéric Lang, Catalin Dima, Radu Mateescu
From Z to B and then Event-B: Assigning Proofs to Meaningful Programs
Abstract
The very first paper on Z [1] was published in 1980 (at the time, the name Z was not “invented”), then the book on the B method [2] was published in 1996, and, finally, the book on Event-B [3] was published in 2010. So, 30 years separate Z from Event-B. It is thus clear that I spent a significant time of my scientific professional life working with the same kind of subject in mind, roughly speaking specification languages. I do not know whether this kind of addiction is good or bad, but what I know is that I enjoyed it a lot.
Jean-Raymond Abrial

Invited Paper 2

Verification

Automated Anonymity Verification of the ThreeBallot Voting System
Abstract
In recent years, a large number of secure voting protocols have been proposed in the literature. Often these protocols contain flaws, but because they are complex protocols, rigorous formal analysis has proven hard to come by.
Rivest’s ThreeBallot voting system is important because it aims to provide security (voter anonymity and voter verifiability) without requiring cryptography. In this paper, we construct a CSP model of ThreeBallot, and use it to produce the first automated formal analysis of its anonymity property.
Along the way, we discover that one of the crucial assumptions under which ThreeBallot (and many other voting systems) operates, the Short Ballot Assumption, is highly ambiguous in the literature. We give various plausible precise interpretations, and discover that in each case the interpretation either is unrealistically strong or fails to ensure anonymity. Therefore, we give a version of the Short Ballot Assumption for ThreeBallot that is realistic but still provides a guarantee of anonymity.
Murat Moran, James Heather, Steve Schneider
Compositional Verification of Software Product Lines
Abstract
This paper presents a novel approach to the design verification of Software Product Lines (SPL). The proposed approach assumes that the requirements and designs at the feature level are modeled as finite state machines with variability information. The variability information at the requirement and design levels is expressed differently and at different levels of abstraction. The proposed approach also supports verification of SPLs in which new features and variability may be added incrementally. Given the design and requirements of an SPL, the proposed design verification method ensures that every product at the design level behaviourally conforms to a product at the requirement level. The conformance procedure is compositional in the sense that the verification of an entire SPL consisting of multiple features is reduced to the verification of the individual features. The method has been implemented and demonstrated in a prototype tool, SPLEnD (SPL Engine for Design Verification), on a couple of fairly large case studies.
Jean-Vivien Millo, S. Ramesh, Shankara Narayanan Krishna, Ganesh Khandu Narwane
Deductive Verification of State-Space Algorithms
Abstract
Like any software, model checkers are subject to bugs. They can thus report false negatives or validate a model that they should not. Different methods, such as theorem provers or Proof-Carrying Code, have been used to gain more confidence in the results of model checkers. In this paper, we focus on using a verification condition generator that takes annotated algorithms and ensures their termination and correctness. We study four algorithms (three sequential and one distributed) for state-space construction as a first step towards mechanically-assisted deductive verification of model checkers.
Frédéric Gava, Jean Fortin, Michael Guedj
Inductive Verification of Hybrid Automata with Strongest Postcondition Calculus
Abstract
Safety verification of hybrid systems is a key technique in developing embedded systems that have a strong coupling with the physical environment. We propose an automated logical analytic method for verifying a class of hybrid automata. The problems are more general than those solved by the existing model checkers: our method can verify models with symbolic parameters and nonlinear equations as well. First, we encode the execution trace of a hybrid automaton as an imperative program. Its safety property is then translated into proof obligations by strongest postcondition calculus. Finally, these logic formulas are discharged by state-of-the-art arithmetic solvers (e.g., Mathematica). Our proposed algorithm efficiently performs inductive reasoning by unrolling the execution for some steps and generating loop invariants from verification failures. Our experimental results along with examples taken from the literature show that the proposed approach is feasible.
Daisuke Ishii, Guillaume Melquiond, Shin Nakajima
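For readers unfamiliar with the calculus named in the abstract above, the strongest postcondition rules for assignment, sequencing and guards are standard textbook definitions (a generic sketch, not the paper's specific formulation):

```latex
\begin{align*}
  \mathrm{sp}(P,\; x := e) &= \exists x_0.\; P[x_0/x] \,\land\, x = e[x_0/x] \\
  \mathrm{sp}(P,\; S_1 ; S_2) &= \mathrm{sp}\bigl(\mathrm{sp}(P, S_1),\; S_2\bigr) \\
  \mathrm{sp}(P,\; \mathbf{assume}\ b) &= P \land b
\end{align*}
```

Here sp(P, S) is the strongest assertion holding after executing S from a state satisfying P; safety verification then amounts to showing that each computed postcondition implies the safety property, which is the kind of proof obligation an arithmetic solver can discharge.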
Knowledge for the Distributed Implementation of Constrained Systems
(Extended Abstract)
Abstract
Deriving distributed implementations from global specifications has been extensively studied for different application domains, under different assumptions and constraints. We explore here the knowledge perspective: a process decides to take a local action when it has the knowledge to do so. We discuss typical knowledge atoms that are useful for expressing local enabling conditions with respect to different notions of correctness, as well as different means for obtaining knowledge and for representing it locally in an efficient manner. Our goal is to use such a knowledge-based representation of the distribution problem for either deriving distributed implementations automatically from global specifications on which some constraint is enforced, or for improving the efficiency of existing protocols by exploiting local knowledge. We also argue that such a knowledge-based presentation helps achieve the necessary correctness proofs.
Susanne Graf, Sophie Quinton

Invited Paper 3

Reachability and Model Checking

Improved Reachability Analysis in DTMC via Divide and Conquer
Abstract
Discrete Time Markov Chains (DTMCs) are widely used to model probabilistic systems in many domains, such as biology, networks and communication protocols. There are two main approaches for probabilistic reachability analysis of DTMCs: solving linear equations or using value iteration. However, both approaches have drawbacks. On one hand, solving linear equations generates accurate results, but it can only be applied to relatively small models. On the other hand, value iteration is more scalable, but often suffers from slow convergence. Furthermore, it is unclear how to parallelize either approach (i.e., take advantage of multi-core or distributed computers). In this work, we propose a divide-and-conquer approach to eliminate loops in a DTMC and thereby speed up probabilistic reachability analysis. A DTMC is separated into several partitions according to our proposed cutting criteria. Each partition is then solved by Gauss-Jordan elimination and the state space is reduced afterwards. This divide-and-conquer algorithm continues until no loops remain in the system. Experiments are conducted to demonstrate that our approach can generate accurate results, avoid slow convergence and handle larger models.
Songzheng Song, Lin Gui, Jun Sun, Yang Liu, Jin Song Dong
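The linear-equation approach mentioned in the abstract above can be illustrated on a toy chain. This is a generic sketch of standard DTMC reachability (not the paper's divide-and-conquer algorithm), and the transition matrix is invented for illustration:

```python
import numpy as np

# Toy DTMC: states 0..3. State 3 is the target; state 2 is absorbing
# and can never reach the target. P[i][j] = probability of moving i -> j.
P = np.array([
    [0.0, 0.5, 0.25, 0.25],
    [0.4, 0.0, 0.3,  0.3],
    [0.0, 0.0, 1.0,  0.0],   # absorbing "fail" state
    [0.0, 0.0, 0.0,  1.0],   # absorbing target state
])

target = {3}
fail = {2}
unknown = [s for s in range(len(P)) if s not in target | fail]  # states 0, 1

# Reachability probabilities x_s satisfy the linear system
#   x_s = sum_t P[s][t] * x_t          for s in `unknown`,
# with x_s = 1 on target states and x_s = 0 on fail states.
# Rearranged: (I - A) x = b, where A restricts P to the unknown states
# and b collects the one-step probabilities into the target set.
A = P[np.ix_(unknown, unknown)]
b = np.array([sum(P[s][t] for t in target) for s in unknown])
x = np.linalg.solve(np.eye(len(unknown)) - A, b)
# x[i] is the probability of eventually reaching the target from unknown[i]
```

As the abstract notes, this direct method is exact but scales only to models whose linear systems fit the solver, which motivates cutting the chain into loop-free partitions first.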
Solving Games Using Incremental Induction
Abstract
Recently, IC3 has been presented as a new algorithm for formal verification. Based on incremental induction, it is often much faster than the fixpoint-based model-checking algorithms otherwise used. In this paper, we use the idea of incremental induction for solving two-player concurrent games. While formal verification requires proving that a given system satisfies a given specification, game solving aims at automatically synthesizing a system that satisfies the specification. This involves both universal (player 1) and existential (player 2) quantification over the formulas that represent state transitions. Hence, algorithms for solving games are usually implemented with BDD packages that offer both kinds of quantification. In this paper, we show how to compute a solution of games by using incremental induction.
Andreas Morgenstern, Manuel Gesell, Klaus Schneider
Model-Checking Software Library API Usage Rules
Abstract
Modern software increasingly relies on libraries accessed via Application Programming Interfaces (APIs). Libraries usually impose constraints on how API functions can be used (API usage rules), and programmers have to obey these rules. However, API usage rules are often not well documented, or documented only informally. In this work, we show how to use the SCTPL logic to precisely specify API usage rules in libraries, where SCTPL can be seen as an extension of the branching-time temporal logic CTL with variables, quantifiers, and predicates over the stack. This allows library providers to formally describe API usage rules without knowing how their libraries will be used by programmers. We also propose an approach to automatically check whether programs using libraries violate the corresponding API usage rules. Our approach consists in modeling programs as pushdown systems (PDSs) and checking API usage rules on programs using SCTPL model checking for PDSs. To make the model-checking procedure more efficient, we propose an abstraction that drastically reduces the size of the program model. Moreover, we characterize a sub-logic rSCTPL of SCTPL preserved by the abstraction. rSCTPL is sufficient to precisely specify all the API usage rules we encountered. We implemented our techniques in a tool and applied it to check several API usage rules. Our tool detected several previously unknown errors in well-known programs, such as Nssl, Verbs, Acacia+, Walksat and Getafix. Our experimental results are encouraging.
Fu Song, Tayssir Touili
Formal Modelling and Verification of Population Protocols
Abstract
The population protocols of Angluin, Aspnes et al. [3] provide a theoretical framework for computability reasoning about algorithms for Mobile Ad-Hoc Networks (MANETs) and Wireless Sensor Networks (WSNs). By developing two example protocols and proving convergence results using the Event-B/RODIN [2] and TLA [11] frameworks, we explore the potential for formal analysis of these protocols.
Dominique Méry, Michael Poppleton

Usability and Testing

Detecting Vulnerabilities in Java-Card Bytecode Verifiers Using Model-Based Testing
Abstract
Java Card security is based on different elements among which the bytecode verifier is one of the most important. Finding vulnerabilities is a complex, tedious and error-prone task. In the case of the Java bytecode verifier, vulnerability tests are typically derived by hand. We propose a new approach to generate vulnerability test suites using model-based testing. Each instruction of the Java bytecode language is represented by an event of an Event-B machine, with a guard that denotes security conditions as defined in the virtual machine specification. We generate vulnerability tests by negating guards of events and generating traces with these faulty events using the ProB model checker. This approach has been applied to a subset of twelve instructions of the bytecode language and tested on five Java Card bytecode verifiers. Vulnerabilities have been found for each of them. We have developed a complete tool chain to support the approach and provide a proof of concept.
Aymerick Savary, Marc Frappier, Jean-Louis Lanet
Integrating Formal Predictions of Interactive System Behaviour with User Evaluation
Abstract
It is well known that human error in the use of interactive devices can have severe safety or business consequences. It is important therefore that aspects of the design that compromise the usability of a device can be predicted before deployment. A range of techniques have been developed for identifying potential usability problems including laboratory based experiments with prototypes and paper based evaluation techniques. This paper proposes a framework that integrates experimental techniques with formal models of the device, along with assumptions about how the device will be used. Abstract models of prototype designs and use assumptions are analysed using model checking techniques. As a result of the analysis hypotheses are formulated about how a design will fail in terms of its usability. These hypotheses are then used in an experimental environment with potential users to test the predictions. Formal methods are therefore integrated with laboratory based user evaluation to give increased confidence in the results of the usability evaluation process. The approach is illustrated by exploring the design of an IV infusion pump designed for use in a hospital context.
Rimvydas Rukšėnas, Paul Curzon, Michael D. Harrison
Automatic Inference of Erlang Module Behaviour
Abstract
Previous work has shown the benefits of using grammar inference techniques to infer models of software behaviour for systems whose specifications are not available. However, this inference has required considerable direction from an expert user who needs to have familiarity with the system’s operation, and must be actively involved in the inference process. This paper presents an approach that can be applied automatically to infer a model of the behaviour of Erlang modules with respect to their presented interface. It integrates the automated learning system StateChum with the automated refactoring tool Wrangler to allow both interface discovery and behaviour inference to proceed without human involvement.
Ramsay Taylor, Kirill Bogdanov, John Derrick

Distributed Systems

Integrating Proved State-Based Models for Constructing Correct Distributed Algorithms
Abstract
The verification of distributed algorithms is a challenge for formal techniques supported by tools, such as model checkers and proof assistants. The difficulties lie in the derivation of proofs of required properties, such as safety and eventuality, for distributed algorithms. In this paper, we present a methodology based on the general concept of refinement that is used for developing distributed algorithms satisfying a given list of safety and liveness properties. The methodology is a recipe for reusing the old ingredients of the classical temporal approaches, which are illustrated through a standard example of routing protocols. More precisely, we show how state-based models can be developed for specific problems and how they can be simply reused by controlling the composition of state-based models through the refinement relationship. The service-as-event paradigm is introduced for helping users to describe algorithms as a composition of simple services and/or to decompose them into simple steps. Consequently, we obtain a framework to derive new distributed algorithms by developing existing distributed algorithms using a correct-by-construction approach, which ensures the correctness of the developed distributed algorithms.
Manamiary Bruno Andriamiarina, Dominique Méry, Neeraj Kumar Singh
Quantified Abstractions of Distributed Systems
Abstract
When reasoning about distributed systems, it is essential to have information about the different kinds of nodes which compose the system, how many instances of each kind exist, and how nodes communicate with other nodes. In this paper we present a static-analysis-based approach which is able to provide information about the questions above. In order to cope with an unbounded number of nodes and an unbounded number of calls among them, the analysis performs an abstraction of the system, producing a graph whose nodes may represent (infinitely) many concrete nodes and whose arcs may represent (infinitely) many calls among nodes. The crux of our approach is that the abstraction is enriched with upper bounds, inferred by a resource analysis, which limit the number of concrete instances that the nodes and arcs represent. The combined information provided by our approach has interesting applications such as debugging, optimizing and dimensioning distributed systems.
Elvira Albert, Jesús Correas, Germán Puebla, Guillermo Román-Díez
Priced Timed Automata and Statistical Model Checking
Abstract
The notions of priced timed automata (PTA) and energy games (EG) provide useful modeling formalisms for energy-aware and energy-harvesting embedded systems. We review these formalisms and a range of associated decision problems covering cost-optimal reachability, model checking and cost-bounded infinite strategies. Decidability of several of these problems requires tight bounds on the number of clocks and cost variables. Thus, we turn to statistical model checking (SMC), which has emerged as a highly scalable simulation-based “approximate” validation technique. In a series of recent works we have developed a natural stochastic semantics for PTAs allowing statistical model checking to be performed. The resulting techniques have been implemented in Uppaal-smc, and applied to the performance analysis of a number of systems ranging from real-time scheduling, mixed criticality systems, sensor networks and energy aware systems to systems biology.
Kim Guldstrand Larsen

Invited Paper 4

Semantics

A Compositional Automata-Based Semantics for Property Patterns
Abstract
Dwyer et al. define a language for specifying dynamic properties based on predefined patterns and scopes. To define a property, the user has to choose a pattern and a scope among a limited number of them. Dwyer et al. define the semantics of these properties by translating each composition of a pattern and a scope into usual temporal logics (LTL, CTL, etc.). First, this translational semantics is not compositional and thus not easily extensible to other patterns/scopes. Second, it is not always faithful to the natural semantics of the informal definitions.
In this paper, we propose a compositional automata-based approach, defining the semantics of each pattern and each scope by an automaton. We then propose a composition operation such that the property semantics is defined by composing the automata. Hence, the semantics is compositional and easily extensible, as we show by handling many extensions to Dwyer et al.'s language. We compare our compositional semantics with Dwyer et al.'s translational semantics by checking whether our automata are equivalent to the Büchi automata of the LTL expressions given by Dwyer et al. In some cases, our semantics reveals a lack of homogeneity within Dwyer et al.'s semantics.
Kalou Cabrera Castillos, Frédéric Dadeau, Jacques Julliand, Bilal Kanso, Safouan Taha
A Formal Semantics for Complete UML State Machines with Communications
Abstract
UML is a widely used notation, and formalizing its semantics is an important issue. Here, we concentrate on formalizing UML state machines, used to express the dynamic behaviour of software systems. We propose a formal operational semantics covering all features of the latest version (2.4.1) of UML state machines specification. We use labelled transition systems as the semantic model, so as to use automatic verification techniques like model checking. Furthermore, our proposed semantics includes synchronous and asynchronous communications between state machines. We implement our approach in USM2C, a model checker supporting editing, simulation and automatic verification of UML state machines. Experiments show the effectiveness of our approach.
Shuang Liu, Yang Liu, Étienne André, Christine Choppy, Jun Sun, Bimlesh Wadhwa, Jin Song Dong
From Small-Step Semantics to Big-Step Semantics, Automatically
Abstract
Small-step semantics and big-step semantics are two styles for operationally defining the meaning of programming languages. Small-step semantics are given as a relation between program configurations that denotes one computational step; big-step semantics are given as a relation directly associating to each program configuration the corresponding final configuration. Small-step semantics are useful for making precise reasoning about programs, but reasoning in big-step semantics is easier and more intuitive. When both small-step and big-step semantics are needed for the same language, a proof that the two semantics are equivalent should also be provided in order to trust that they both define the same language. We show that the big-step semantics can be automatically obtained from the small-step semantics when the small-step semantics are given by inference rules satisfying certain assumptions that we identify. The transformation that we propose is very simple, and we show that when the identified assumptions are met, it is sound and complete in the sense that the two semantics are equivalent. For a strict subset of the identified assumptions, we show that the resulting big-step semantics is sound but not necessarily complete. We discuss our transformation on a number of examples.
Ştefan Ciobâcă
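The contrast between the two styles in the abstract above can be sketched on a toy expression language. This is a generic illustration of the standard definitions (not the paper's transformation), with expressions encoded as nested tuples:

```python
# Expressions: ("lit", n) or ("add", e1, e2).

def step(e):
    """Small-step: perform one reduction; return None if e is a value."""
    if e[0] == "lit":
        return None
    _, l, r = e
    if l[0] != "lit":                 # reduce the left operand first
        return ("add", step(l), r)
    if r[0] != "lit":                 # then the right operand
        return ("add", l, step(r))
    return ("lit", l[1] + r[1])       # both operands are values: add them

def small_step_eval(e):
    """Iterate single steps until a value is reached."""
    nxt = step(e)
    while nxt is not None:
        e = nxt
        nxt = step(e)
    return e[1]

def big_step_eval(e):
    """Big-step: relate a configuration directly to its final value."""
    if e[0] == "lit":
        return e[1]
    _, l, r = e
    return big_step_eval(l) + big_step_eval(r)

expr = ("add", ("lit", 1), ("add", ("lit", 2), ("lit", 3)))
```

Proving that `small_step_eval` and `big_step_eval` agree on every expression is exactly the kind of equivalence proof the paper aims to make unnecessary by deriving the big-step rules from the small-step ones.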
Program Equivalence by Circular Reasoning
Abstract
We propose a logic and a deductive system for stating and automatically proving the equivalence of programs in deterministic languages having a rewriting-based operational semantics. The deductive system is circular in nature and is proved sound and weakly complete; together, these results say that, when it terminates, our system correctly solves the program-equivalence problem as we state it. We show that our approach is suitable for proving the equivalence of both terminating and non-terminating programs, and also the equivalence of both concrete and symbolic programs. The latter are programs in which some statements or expressions are symbolic variables. By proving the equivalence between symbolic programs, one proves in one shot the equivalence of (possibly, infinitely) many concrete programs obtained by replacing the variables by concrete statements or expressions. A prototype of the proof system for a particular language was implemented and can be tested on-line.
Dorel Lucanu, Vlad Rusu

System-Level Analysis

Structural Transformations for Data-Enriched Real-Time Systems
Abstract
We investigate structural transformations for easier verification of real-time systems with shared data variables, modelled as networks of extended timed automata (ETA). Our contributions to this end are: (1) An operator for layered composition of ETA that yields communication closed equivalences under certain independence and/or precedence conditions. (2) Two reachability-preserving transformations, separation and flattening, for reducing (under certain cycle conditions) the number of cycles of the ETA. (3) The interplay of the three structural transformations (separation, flattening, and layering), illustrated on an enhanced version of Fischer's real-time mutual exclusion protocol.
Ernst-Rüdiger Olderog, Mani Swaminathan
Deadlock Analysis of Concurrent Objects: Theory and Practice
Abstract
We present a framework for statically detecting deadlocks in a concurrent object language with asynchronous invocations and operations for getting values and releasing the control. Our approach is based on the integration of two static analysis techniques: (i) an inference algorithm to extract abstract descriptions of methods in the form of behavioral types, called contracts, and (ii) an evaluator that computes a fixpoint semantics returning a finite state model of contracts. A potential deadlock is detected when a circular dependency is found in some state of the model. We discuss the theory and the prototype implementation of our framework. Our tool is validated on an industrial case study based on the Fredhopper Access Server (FAS) developed by SDL Fredhopper. In particular, we verify one of the core concurrent components of FAS to be deadlock-free.
Elena Giachino, Carlo A. Grazia, Cosimo Laneve, Michael Lienhardt, Peter Y. H. Wong
Broadcast, Denial-of-Service, and Secure Communication
Abstract
A main challenge in the design of wireless-based Cyber-Physical Systems consists in balancing the need for security and the effect of broadcast communication with the limited capabilities and reliability of sensor nodes. We present a calculus of broadcasting processes that enables reasoning about unsolicited messages and the lack of expected communication. Moreover, standard cryptographic mechanisms can be implemented in the calculus via term rewriting. The modelling framework is complemented by an executable specification of the semantics of the calculus in Maude, thereby facilitating the solution of a number of simple reachability problems.
Roberto Vigo, Flemming Nielson, Hanne Riis Nielson
Characterizing Fault-Tolerant Systems by Means of Simulation Relations
Abstract
In this paper, we study a formal characterization of fault-tolerant behaviors of systems via simulation relations. This formalization makes use of particular notions of simulation and bisimulation in order to compare the executions of a system that exhibit faults with executions where no faults occur. By employing variations of standard (bi)simulation algorithms, our characterization enables us to algorithmically check fault-tolerance, i.e., to verify that a system behaves in an acceptable way even under the occurrence of faults.
Our approach has the benefit of being simple and supporting an efficient automated treatment. We demonstrate the practical application of our formalization through some well-known case studies, which illustrate that the main ideas behind most fault-tolerance mechanisms are naturally captured in our setting.
Ramiro Demasi, Pablo F. Castro, Thomas S. E. Maibaum, Nazareno Aguirre
An Algebraic Theory for Web Service Contracts
Abstract
We study a natural notion of compliance between clients and services in terms of their BPEL (abstract) descriptions. The induced preorder shows interesting connections with the must preorder and has normal form representatives that are parallel-free finite-state activities, called contracts. The preorder also admits the notion of least service contract that is compliant with a client contract, called principal dual contract. Our framework serves as a foundation of Web service technologies for connecting abstract and concrete service definitions and for service discovery.
Cosimo Laneve, Luca Padovani
Backmatter
Metadata
Title
Integrated Formal Methods
Edited by
Einar Broch Johnsen
Luigia Petre
Copyright year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-38613-8
Print ISBN
978-3-642-38612-1
DOI
https://doi.org/10.1007/978-3-642-38613-8
