About this Book

This book constitutes the refereed proceedings of the 11th International Conference on Integrated Formal Methods, IFM 2014, held in Bertinoro, Italy, in September 2014. The 21 revised full papers presented together with 2 invited papers were carefully reviewed and selected from 43 submissions. The papers have been organized in the following topical sections: tool integration; model verification; program development; security analysis; analysis and transformation; and concurrency and control.



Invited Talks


Shape and Content

A Database-Theoretic Perspective on the Analysis of Data Structures
The verification community has studied dynamic data structures primarily in a bottom-up way, by analyzing pointers and the shapes they induce. Recent work in fields such as separation logic has made significant progress in extracting shapes from program source code. Many real-world programs, however, manipulate complex data whose structure and content are most naturally described by formalisms from object-oriented programming and databases. In this paper, we look at the verification of programs with dynamic data structures from the perspective of content representation. Our approach is based on description logic, a widely used knowledge representation paradigm that gives a logical underpinning to diverse modeling frameworks such as UML and ER. Technically, we assume that we have separation logic shape invariants obtained from a shape analysis tool, together with requirements on the program data expressed in description logic. We show that the two-variable fragment of first-order logic with counting and trees can be used as a joint framework to embed suitable fragments of description logic and separation logic.
Diego Calvanese, Tomer Kotek, Mantas Šimkus, Helmut Veith, Florian Zuleger

How to Break the Bank: Semantics of Capability Policies

The object capability model is a de facto industry standard widely adopted for the implementation of secure software. We call capability policies the policies enforced by programs using object capabilities. Such policies restrict which objects may access services, and under which circumstances. In this paper we argue that capability policies should be made explicit and written separately from the code implementing them. We also argue that the specification of capability policies requires concepts that go beyond the features of current specification languages. Moreover, we argue that we need methodologies with which to prove that programs adhere to their capability policies as specified.
To give precise semantics to capability policy specifications, we propose execution observations, which talk about various properties of a program’s execution. We use execution observations to write the formal specification of five out of the six informal policies in the mint example, famous in the object capability literature. In these specifications, not only the conclusions but also the premises may relate to the state before as well as after execution, the code may be existentially or universally quantified, and the interpretation quantifies over all modules extending the current module. In statically typed languages, adherence of code to the capability policies relies heavily on the guarantees provided by type system features such as final and private.
Sophia Drossopoulou, James Noble

Tool Integration


Model-Checking Circus State-Rich Specifications

Over the past decades, two schools have developed formal techniques for correct software development, taking complementary approaches: the model-based approach and the behavioural approach. Combinations of languages from both approaches have also been proposed. The lack of support for the refinement of state-rich reactive systems in a calculational style motivated the creation of Circus, a combination of Z, CSP, and Dijkstra’s guarded-command language. In this paper, we foster the reuse of theoretical results developed for CSP by providing a sound mapping of processes and refinement from Circus to CSP. The mapping is proved sound using an existing link between these languages, established in the Unifying Theories of Programming (UTP). Our results allow Circus specifications to be analysed with techniques and tools, like FDR2 and PAT, originally developed for CSP. We illustrate the overall approach with a running example.
Marcel Vinicius Medeiros Oliveira, Augusto C. A. Sampaio, Madiel S. Conserva Filho

An Interactive Verification Tool Meets an IDE

We present a general approach on how to integrate a semi-automatic verification tool into a state-of-the-art integrated development environment (IDE). The objective and challenge is to keep implementation, specification and proofs in sync. Following a change in one of the specifications or implementations, all proofs that could possibly be affected by that change are rescheduled. To improve performance we look at several optimizations. User feedback about proof results is provided within the IDE using standard markers and views. The approach has been implemented and realizes an integration of the interactive verification system KeY into the Eclipse IDE.
Martin Hentschel, Stefan Käsdorf, Reiner Hähnle, Richard Bubel

An Analysis Pathway for the Quantitative Evaluation of Public Transport Systems

We consider the problem of evaluating quantitative service-level agreements in public services such as transportation systems. We describe the integration of quantitative analysis tools for data fitting, model generation, simulation, and statistical model-checking, creating an analysis pathway leading from system measurement data to verification results. We apply our pathway to the problem of determining whether public bus systems are delivering an appropriate quality of service as required by regulators. We exercise the pathway on service data obtained from Lothian Buses about the arrival and departure times of their buses on key bus routes through the city of Edinburgh. Although we include only that example in the present paper, our methods are sufficiently general to apply to other transport systems and other cities.
Stephen Gilmore, Mirco Tribastone, Andrea Vandin

Modeling UML Template Classes with FoCaLiZe

UML is the de facto standard language for describing systems graphically in an object-oriented way. Once an application has been specified, Model Driven Architecture (MDA) techniques can be applied to generate code from the specification. Because UML lacks a formal basis for analyzing and checking model consistency, it is pertinent to choose a formal target language (in the MDA process) to enable proofs and verification techniques. To achieve this goal, we have associated UML with the FoCaLiZe language, an object-oriented development environment using a proof-based formal approach. This paper focuses on a subset of UML constructs, the template classes, which allow developers to create generic models that can be instantiated into actual models through a binding relationship. Specifically, we propose a formal transformation of UML template classes annotated with OCL constraints into FoCaLiZe specifications. The proposed mapping directly supports most UML template features.
Messaoud Abbas, Choukri-Bey Ben-Yelles, Renaud Rioboo

Integrating Event-B Modelling and Discrete-Event Simulation to Analyse Resilience of Data Stores in the Cloud

Ensuring the resilience of large data stores in the cloud is a challenging engineering issue. It requires the development of techniques that allow designers to predict the main resilience characteristics — fault tolerance and performance — at early design stages. In this paper, we experiment with integrating Event-B modelling with discrete-event simulation. Event-B allows us to reason about correctness and data integrity properties of data stores, while discrete-event simulation in SimPy enables quantitative assessment of performance and reliability. Since testing in a real cloud environment is expensive and time-consuming, the proposed approach offers several benefits in industrial settings.
Linas Laibinis, Benjamin Byholm, Inna Pereverzeva, Elena Troubitsyna, Kuan Eeik Tan, Ivan Porres

Applying an Integrated Modelling Process to Run-time Management of Many-Core Systems

A run-time management system for many-core architectures is aware of application requirements and can save energy by sacrificing performance when doing so has a negligible impact on user experience. This paper outlines the application of a process for developing a run-time management system that integrates a range of modelling, validation, verification and generation tools at appropriate stages. We outline the models, process and tools we used to develop a temperature-aware run-time management system for Dynamic Voltage and Frequency Scaling (DVFS) of a media display application. The Event Refinement Structure (ERS) approach is used to visualise the abstract level of the DVFS control. The Model Decomposition technique is used to tackle the complexity of the model. To model the process-oriented aspects of the system we used iUML-B Statemachines. We use several different visual animation tools, running them synchronously to exploit their different strengths, in order to demonstrate the model to stakeholders. In addition, a continuous model of the physical properties of the cores is simulated in conjunction with discrete simulation of the Event-B run-time management system. Finally, executable code is generated automatically using the Code Generation plug-in. The main contribution of this paper is to demonstrate the complementarity of the tools and the ease of their integrated use through the Rodin platform.
Asieh Salehi Fathabadi, Colin Snook, Michael Butler

Model Verification


Verifying Behavioral UML Systems via CEGAR

This work presents a novel approach for applying abstraction and refinement in the verification of behavioral UML models.
The Unified Modeling Language (UML) is a widely accepted modeling language for embedded and safety-critical systems. As such, the correct behavior of systems represented as UML models is crucial. Model checking is a successful automated verification technique for checking whether a system satisfies a desired property. Nevertheless, its applicability is often impeded by its high time and memory requirements. A successful approach to avoiding this limitation is CounterExample-Guided Abstraction-Refinement (CEGAR). We propose a CEGAR-like approach for UML systems. We present a model-to-model transformation that generates an abstract UML system from a given concrete one, and formally prove that our transformation creates an over-approximation.
The abstract system is often much smaller, and thus easier to model check. Because the abstraction creates an over-approximation, we are guaranteed that if the abstract model satisfies the property then so does the concrete one. If not, we check whether the resulting abstract counterexample is spurious. In case it is, we automatically refine the abstract system in order to obtain a more precise abstraction.
Yael Meller, Orna Grumberg, Karen Yorav

Formal Refinement in SysML

SysML is a UML-based graphical notation for systems engineering that is becoming a de facto standard. Whilst it reuses a number of UML diagrams, it introduces new diagrams, and maintains the loose UML semantics. Refinement is a formal technique that supports the validation and verification of models by capturing a notion of correctness based on observable behaviour. In this paper, we analyse the issue of formal refinement in the context of SysML. First, we identify the requirements for supporting refinement in SysML, next we propose extensions to SysML that satisfy these requirements, and finally we present a few refinement laws and discuss their validity.
Alvaro Miyazawa, Ana Cavalcanti

Verifying Modal Workflow Specifications Using Constraint Solving

Nowadays, workflows are used extensively by companies to improve organizational efficiency and productivity. This paper focuses on the verification of modal workflow specifications using constraint solving as a computational tool. Its main contribution is an innovative formal framework based on constraint systems for modelling executions of workflow Petri nets and their structural properties, as well as for verifying their modal specifications. Finally, an implementation and promising experimental results constitute a practical contribution.
Hadrien Bride, Olga Kouchnarenko, Fabien Peureux

Program Development


Proofs and Refutations in Invariant-Based Programming

Invariant-based programming is a correct-by-construction approach to program development in which the invariants of a program are written before the actual code. Socos is an environment for graphically constructing invariant-based programs (as statechart-like diagrams) and verifying their correctness (by invoking an automatic theorem prover). It borrows the specification language, logical framework and proof tactics from the PVS system. In this paper, we describe an extension to Socos for animating invariant-based programs in the logical framework of PVS. An invariant-based program is represented as an abstract datatype encoding the program state, coupled with a small-step state transition function encoding the operational semantics of the program. Socos visualizes the execution, allowing the user to inspect the current state and identify invalid assertions through test cases. Since programs are executed within the theorem prover framework (rather than translated into another language or compiled to machine code), failed test cases are logically sound refutations of the verification conditions. Invariants that are not executable in general (e.g., those containing unbounded quantification) can be handled for bounded test cases by introducing custom evaluation functions. While such functions provide no correctness guarantees, they can increase the assurance of a correctness condition before doing the actual proof. We illustrate this workflow through a verification exercise with non-trivial verification conditions, arguing that animation of invariant diagrams serves as an important stepping stone towards a fully verified program.
Johannes Eriksson, Masoumeh Parsa, Ralph-Johan Back

Automated Theorem Prover Assisted Program Calculations

The calculational style of programming, while very appealing, presents several practical difficulties when done manually. Due to the large number of proofs involved, derivations can be cumbersome and error-prone. To address these issues, we have developed theorem-prover-assisted program and formula transformation rules which, when coupled with the ability to extract the context of a subformula, help shorten and simplify derivations. We have implemented this approach in a Calculational Assistant for Programming from Specifications (CAPS). With the help of simple examples, we show how the calculational assistant takes the drudgery out of the derivation process while ensuring correctness.
Dipak L. Chaudhari, Om Damani

Managing LTL Properties in Event-B Refinement

Refinement in Event-B supports the development of systems via proof-based, step-wise refinement of events. This refinement approach ensures that safety properties are preserved, but additional reasoning is required in order to establish liveness and fairness properties. In this paper we present results which allow a closer integration of two formal methods, Event-B and linear temporal logic. In particular, we show how a class of temporal logic properties can be carried through a refinement chain of machines. Refinement steps can include the introduction of new events, event renaming and event splitting. We also identify a general liveness property that holds for the events of the initial system of a refinement chain. The approach aids developers by enabling them to verify linear temporal logic properties at early stages of a development, knowing they will be preserved at later stages. We illustrate the results via a simple case study.
Steve Schneider, Helen Treharne, Heike Wehrheim, David M. Williams

Security Analysis


Formal Security Analysis of the MaCAN Protocol

Embedded real-time network protocols such as the CAN bus cannot rely on off-the-shelf schemes for authentication, because of the bandwidth limitations imposed by the network. As a result, both academia and industry have proposed custom protocols that meet such constraints, with solutions that may be deemed insecure if considered out of context. MaCAN is one such compatible authentication protocol, proposed by Volkswagen Research and a strong candidate for adoption by the automotive industry.
In this work we formally analyse MaCAN with ProVerif, an automated protocol verifier. Our formal analysis identifies two flaws in the original protocol: one creates unavailability concerns during key establishment, and the other allows authenticated signals to be reused for different purposes. We propose and analyse a modification that improves the protocol’s behaviour while fitting the constraints of the CAN bus. Although the revised scheme improves the situation, it is still not completely secure. We argue that the modified protocol makes a good compromise between the desire to secure automotive systems and the limitations of CAN networks.
Alessandro Bruni, Michal Sojka, Flemming Nielson, Hanne Riis Nielson

A Probabilistic Framework for Security Scenarios with Dependent Actions

This work addresses the growing need of performing meaningful probabilistic analysis of security. We propose a framework that integrates the graphical security modeling technique of attack–defense trees with probabilistic information expressed in terms of Bayesian networks. This allows us to perform probabilistic evaluation of attack–defense scenarios involving dependent actions. To improve the efficiency of our computations, we make use of inference algorithms from Bayesian networks and encoding techniques from constraint reasoning. We discuss the algebraic theory underlying our framework and point out several generalizations which are possible thanks to the use of semiring theory.
Barbara Kordy, Marc Pouly, Patrick Schweitzer

A Hybrid Analysis for Security Protocols with State

Cryptographic protocols rely on message passing to coordinate activity among principals. Many richly developed tools, based on well-understood foundations, are available for the design and analysis of pure message-passing protocols. However, in many protocols, a principal uses non-local, mutable state to coordinate its local sessions. Cross-session state poses difficulties for protocol analysis tools.
We provide a framework for modeling stateful protocols, and a hybrid analysis method. We leverage theorem proving — specifically, PVS — for reasoning about computations over state. An “enrich-by-need” approach — embodied by CPSA — focuses on the message-passing part. The Envelope Protocol, due to Mark Ryan, furnishes a case study.
John D. Ramsdell, Daniel J. Dougherty, Joshua D. Guttman, Paul D. Rowe

Analysis and Transformation


Towards a Formal Semantics-Based Technique for Interprocedural Slicing

Interprocedural slicing is a technique applied to programs with procedures which relies on how information is passed at procedure call/return sites. Such a technique computes program slices (i.e., program fragments restricted w.r.t. a given criterion). Existing approaches to interprocedural slicing exploit the particularities of the underlying language semantics in order to compute program slices. In this paper we propose a generic technique for interprocedural slicing. More specifically, our approach works with inferred particularities of a language semantics, given as a rewriting-logic specification, and computes program slices using a term-slicing-based algorithm.
Irina Măriuca Asăvoae, Mihail Asăvoae, Adrián Riesco

Integrating Software and Hardware Verification

Verification of hardware and software usually proceeds separately, software analysis relying on the correctness of processors executing instructions. This assumption is valid as long as the software runs on standard CPUs that have been extensively validated and are in wide use. However, for processors exploiting custom instruction set extensions to meet performance and energy constraints the validation might be less extensive, challenging the correctness assumption.
In this paper we present an approach for integrating software analyses with hardware verification, specifically targeting custom instruction set extensions. We propose three different techniques for deriving the properties to be proven for the hardware implementation of a custom instruction in order to support software analyses. The techniques are designed to explore the trade-off between generality and efficiency, ranging from proving functional equivalence, through checking the rules of a particular analysis domain, to verifying actual pre- and postconditions resulting from program analysis. We demonstrate and compare the three techniques on example programs with custom instructions, using state-of-the-art software and hardware verification techniques.
Marie-Christine Jakobs, Marco Platzner, Heike Wehrheim, Tobias Wiersema

Code Generation for Event-B

We present an approach to generating program code from Event-B models that is correct by construction. Correctness is guaranteed by the combined use of well-definedness restrictions, refinement, and assertions. By enforcing the well-definedness of the translated model, we prevent runtime errors that originate from semantic differences between the target language and Event-B, such as different interpretations of the range of integer values. Using refinement, we show that the generated code correctly implements the original Event-B model. We provide a simple yet powerful scheduling language that allows one to specify an execution sequence of the model’s guarded events, where assertions are used to express properties established by the event execution sequence, which are necessary for the well-definedness and refinement proofs.
Andreas Fürst, Thai Son Hoang, David Basin, Krishnaji Desai, Naoto Sato, Kunihiko Miyazaki

Concurrency and Control


Verifying Linearizability on TSO Architectures

Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based approach. Previous approaches to proving linearizability on TSO architectures have required a modification to the algorithm’s natural abstract specification. Our proof method is the first, to our knowledge, for proving correctness without the need for such modification.
John Derrick, Graeme Smith, Brijesh Dongol

A Compositional Proof Method for Linearizability Applied to a Wait-Free Multiset

We introduce a compositional, complete proof method for linearizability that combines temporal logic, rely-guarantee reasoning and possibilities. The basic idea of our proof method is that each process must preserve possibility steps as an additional guarantee condition for linearizability. To illustrate the expressiveness of our method, we apply it to a wait-free multiset implementation with intricate linearization points. Both the soundness of our method as well as its application to our multiset have been mechanized in the interactive verifier KIV.
Bogdan Tofan, Gerhard Schellhorn, Wolfgang Reif

A Separation Principle for Embedded System Interfacing

In designing systems, engineers decompose the problem into smaller, more manageable tasks. A classic example of this is the separation principle from control systems which allows one to decompose the design of an optimal feedback control system into two independent tasks by designing (a) an observer, and (b) a controller. We investigate an analogous result for embedded system interfacing that will allow separation of the design of the input and output hardware interfaces while still guaranteeing the ability of the software to meet the system requirements. We define the notions of observability (controllability) of the system requirements with respect to the input (output) interface. We show that for a system that can be modeled by a functional four-variable model, observability and controllability allow for the separation of the design of the input and output interfaces. We also show that this separation is not always possible for systems that need the general, relational four-variable model. By strengthening either observability or controllability, we restrict the choice of input or output interfaces, but ensure separability of their designs.
Lucian M. Patcas, Mark Lawford, Tom Maibaum
