
2016 | Book

Integrated Formal Methods

12th International Conference, IFM 2016, Reykjavik, Iceland, June 1-5, 2016, Proceedings


About this Book

This book constitutes the refereed proceedings of the 12th International Conference on Integrated Formal Methods, IFM 2016, held in Reykjavik, Iceland, in June 2016.
The 33 papers presented in this volume were carefully reviewed and selected from 99 submissions. They were organized in topical sections named: invited contributions; program verification; probabilistic systems; concurrency; safety and liveness; model learning; SAT and SMT solving; testing; theorem proving and constraint satisfaction; case studies.

Table of Contents

Frontmatter

Invited Contributions

Frontmatter
Can Formal Methods Improve the Efficiency of Code Reviews?

Code reviews are a proven and effective technique for finding defects in source code and for increasing its quality. Industrial software production often relies on code reviews as a standard QA mechanism. Surprisingly, though, tool support for reviewing activities is rare. Existing systems help to keep track of the discussion during the review, but do not support the reviewing activity directly. In this paper we argue that such support can be provided by formal analysis tools. Specifically, we use symbolic execution to improve the program-understanding subtask during a code review. Tool support is realized by an Eclipse extension called the Symbolic Execution Debugger. It allows one to visually explore a symbolic execution tree for the program under inspection. For the evaluation we carefully designed a controlled experiment. We provide statistical evidence that, with the help of symbolic execution, defects are identified more effectively than with a purely code-based view. Our work suggests that there is huge potential for formal methods not only in the production of safety-critical systems, but for any kind of software and as part of a standard development process.

Martin Hentschel, Reiner Hähnle, Richard Bubel
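The core idea the abstract describes — exploring a symbolic execution tree whose leaves a reviewer inspects — can be illustrated with a minimal sketch. The toy program and the string-based path constraints below are our own illustration, not the Symbolic Execution Debugger's actual representation:

```python
# A toy program under review:
#   if x > 0:
#       if x < 10: reach "bug"
#       else:      reach "big"
#   else:          reach "neg"
def symbolic_execute():
    """Enumerate the symbolic execution tree; each leaf pairs a path
    condition (a list of branch constraints) with the reached label."""
    paths = []

    def explore(conds, node):
        if node == "root":
            explore(conds + ["x > 0"], "then")
            explore(conds + ["x <= 0"], "neg")
        elif node == "then":
            explore(conds + ["x < 10"], "bug")
            explore(conds + ["x >= 10"], "big")
        else:
            paths.append((conds, node))

    explore([], "root")
    return paths

tree = symbolic_execute()
for conds, leaf in tree:
    print(" and ".join(conds), "->", leaf)
# -> x > 0 and x < 10 -> bug
#    x > 0 and x >= 10 -> big
#    x <= 0 -> neg
```

Each leaf is one case a reviewer has to convince themselves of, which is the program-understanding subtask the paper targets.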
Symbolic Computation and Automated Reasoning for Program Analysis

This talk describes how a combination of symbolic computation techniques with first-order theorem proving can be used for solving some challenges of automating program analysis, in particular for generating and proving properties about the logically complex parts of software. The talk will first present how computer algebra methods, such as Gröbner basis computation, quantifier elimination and algebraic recurrence solving, help us in inferring properties of program loops with non-trivial arithmetic. Typical properties inferred by our work are loop invariants and expressions bounding the number of loop iterations. The talk will then describe our work to generate first-order properties of programs with unbounded data structures, such as arrays. For doing so, we use saturation-based first-order theorem proving and extend first-order provers with support for program analysis. Since program analysis requires reasoning in the combination of first-order theories of data structures, the talk also discusses new features in first-order theorem proving, such as inductive reasoning and built-in boolean sort. These extensions allow us to express program properties directly in first-order logic and hence use further first-order theorem provers to reason about program properties.

Laura Kovács
Perspectives of Model Transformation Reuse

Model transformations have been called the “heart and soul” of model-driven software development. However, they take a lot of effort to build, verify, analyze, and debug. It is thus imperative to develop good reuse strategies that address issues specific to model transformations. Some of the effective reuse strategies are adopted from other domains, specifically, programming languages. Others are custom developed for models. In this paper, we survey techniques from both categories. Specifically, we present two techniques adopted from the PL world, subtyping and mapping, and then two techniques, lifting and aggregating, that are novel in the modeling world. Subtyping is a way to reuse a transformation for different - but similar - input modelling languages. Mapping a transformation designed for single models reuses it for model collections, such as megamodels. Lifting a transformation reuses it for aggregate representations of models, such as product lines. Aggregating reuses both transformation fragments (during transformation creation) and partial execution results (during transformation execution) across multiple transformations. We then point to potential new directions for research in reuse that draw on the strengths of the programming and the modeling worlds.

Marsha Chechik, Michalis Famelis, Rick Salay, Daniel Strüber

Program Verification

Frontmatter
On Type Checking Delta-Oriented Product Lines

A Software Product Line (SPL) is a set of similar programs generated from a common code base. Delta Oriented Programming (DOP) is a flexible approach to implement SPLs. Efficiently type checking an SPL (i.e., checking that all its programs are well-typed) is challenging. This paper proposes a novel type checking approach for DOP. Intrinsic complexity of SPL type checking is addressed by providing early detection of type errors and by reducing type checking to satisfiability of a propositional formula. The approach is tunable to exploit automatically checkable DOP guidelines for making an SPL more comprehensible and type checking more efficient. The approach and guidelines are formalized by means of a core calculus for DOP of product lines of Java programs.

Ferruccio Damiani, Michael Lienhardt
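The reduction the abstract mentions — checking that every product of an SPL is well-typed by deciding satisfiability of a propositional formula — can be sketched in miniature. The deltas d1, d2 and the typing constraint below are invented for illustration (the paper works with a full core calculus), and we use brute-force enumeration where the paper would query a SAT solver:

```python
from itertools import product

FEATURES = ["d1", "d2"]

def well_typed(v):
    # Hypothetical scenario: delta d1 calls a method that exists only
    # if delta d2 is applied, so well-typedness requires d1 -> d2.
    return (not v["d1"]) or v["d2"]

def ill_typed_products():
    """Brute-force stand-in for a SAT query: find feature selections
    (products) that violate the typing formula."""
    bad = []
    for bits in product([False, True], repeat=len(FEATURES)):
        v = dict(zip(FEATURES, bits))
        if not well_typed(v):
            bad.append(v)
    return bad

print(ill_typed_products())  # -> [{'d1': True, 'd2': False}]
```

A satisfying assignment of the negated typing formula is exactly an ill-typed product, which is how type errors can be detected without generating and checking each program individually.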

Open Access

Modelling and Verifying a Priority Scheduler for an SCJ Runtime Environment

Safety-Critical Java (SCJ) is a version of Java suitable for programming real-time safety-critical systems; it is the result of an international standardisation effort to define a subset of the Real-Time Specification for Java (RTSJ). SCJ programs require the use of specialised virtual machines. We present here the result of our verification of the scheduler of the only publicly available SCJ virtual machine that is up to date with the standard, the icecap HVM. We describe our approach for the analysis of SCJ virtual machines, and illustrate it using the icecap HVM scheduler. Our work is based on a state-rich process algebra that combines Z and CSP, and we take advantage of well-established tools.

Leo Freitas, James Baxter, Ana Cavalcanti, Andy Wellings
Why Just Boogie?
Translating Between Intermediate Verification Languages

The verification systems Boogie and Why3 use their respective intermediate languages to generate verification conditions from high-level programs. Since the two systems support different back-end provers (such as Z3 and Alt-Ergo) and are used to encode different high-level languages (such as C# and Java), being able to translate between their intermediate languages would provide a way to reuse one system’s features to verify programs meant for the other. This paper describes a translation of Boogie into WhyML (Why3’s intermediate language) that preserves semantics, verifiability, and program structure to a large degree. We implemented the translation as a tool and applied it to 194 Boogie-verified programs of various sources and sizes; Why3 verified 83 % of the translated programs with the same outcome as Boogie. These results indicate that the translation is often effective and practically applicable.

Michael Ameri, Carlo A. Furia

Probabilistic Systems

Frontmatter
Statistical Approximation of Optimal Schedulers for Probabilistic Timed Automata

The verification of probabilistic timed automata involves finding schedulers that optimise their nondeterministic choices with respect to the probability of a property. In practice, approaches based on model checking fail due to state-space explosion, while simulation-based techniques like statistical model checking are not applicable due to the nondeterminism. We present a new lightweight on-the-fly algorithm to find near-optimal schedulers for probabilistic timed automata. We make use of the classical region and zone abstractions from timed automata model checking, coupled with a recently developed smart sampling technique for statistical verification of Markov decision processes. Our algorithm provides estimates for both maximum and minimum probabilities. We compare our new approach with alternative techniques, first using tractable examples from the literature, then motivate its scalability using case studies that are intractable to numerical model checking and challenging for existing statistical techniques.

Pedro R. D’Argenio, Arnd Hartmanns, Axel Legay, Sean Sedwards
Probabilistic Formal Analysis of App Usage to Inform Redesign

Evaluation and redesign of user-intensive mobile applications is challenging because users are often heterogeneous, adopting different patterns of activity, at different times. We set out a process of integrating statistical, longitudinal analysis of actual logged behaviours, formal, probabilistic discrete state models of activity patterns, and hypotheses over those models expressed as probabilistic temporal logic properties to inform redesign. We employ formal methods not to the design of the mobile application, but to characterise the different probabilistic patterns of actual use over various time cuts within a population of users. We define the whole process from identifying questions that give us insight into application usage, to event logging, data abstraction from logs, model inference, temporal logic property formulation, visualisation of results, and interpretation in the context of redesign. We illustrate the process through a real-life case study, which results in a new and principled way for selecting content for an extension to the mobile application.

Oana Andrei, Muffy Calder, Matthew Chalmers, Alistair Morrison, Mattias Rost
Extension of PRISM by Synthesis of Optimal Timeouts in Fixed-Delay CTMC

We present a practically appealing extension of the probabilistic model checker PRISM rendering it to handle fixed-delay continuous-time Markov chains (fdCTMCs) with rewards, the equivalent formalism to the deterministic and stochastic Petri nets (DSPNs). fdCTMCs allow transitions with fixed-delays (or timeouts) on top of the traditional transitions with exponential rates. Our extension supports an evaluation of expected reward until reaching a given set of target states. The main contribution is that, considering the fixed-delays as parameters, we implemented a synthesis algorithm that computes the epsilon-optimal values of the fixed-delays minimizing the expected reward. We provide a performance evaluation of the synthesis on practical examples.

Ľuboš Korenčiak, Vojtěch Řehák, Adrian Farmadin

Concurrency

Frontmatter
Monitoring Multi-threaded Component-Based Systems

This paper addresses the monitoring of logic-independent linear-time user-provided properties on multi-threaded component-based systems. We consider intrinsically independent components that can be executed concurrently with a centralized coordination for multiparty interactions. In this context, the problem that arises is that a global state of the system is not available to the monitor. A naive solution to this problem would be to plug in a monitor that forces the system to synchronize in order to obtain the sequence of global states at runtime. Such a solution would defeat the whole purpose of having concurrent components. Instead, we reconstruct the global states on the fly by accumulating the partial states traversed by the system at runtime. We define formal transformations of components that preserve the semantics and the concurrency and, at the same time, make it possible to monitor global-state properties. Moreover, we present RVMT-BIP, a prototype tool implementing the transformations for monitoring multi-threaded systems described in the BIP (Behavior, Interaction, Priority) framework, an expressive framework for the formal construction of heterogeneous systems. Our experiments on several multi-threaded BIP systems show that RVMT-BIP induces only a low runtime overhead.

Hosein Nazarpour, Yliès Falcone, Saddek Bensalem, Marius Bozga, Jacques Combaz
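The on-the-fly reconstruction of global states from accumulated partial states can be sketched as follows. The event format and emission rule below are our own simplified reading of the idea, not RVMT-BIP's actual data model:

```python
def reconstruct(components, events):
    """components: list of component names.
       events: sequence of (component, local_state) updates observed
       at runtime, without any forced global synchronization.
       Emits a reconstructed global state after each update, once
       every component has reported at least once."""
    snapshot = {}
    globals_seen = []
    for comp, state in events:
        snapshot[comp] = state          # accumulate the partial state
        if len(snapshot) == len(components):
            globals_seen.append(dict(snapshot))
    return globals_seen

trace = [("A", 0), ("B", 0), ("A", 1), ("B", 1)]
print(reconstruct(["A", "B"], trace))
# -> [{'A': 0, 'B': 0}, {'A': 1, 'B': 0}, {'A': 1, 'B': 1}]
```

The monitor then evaluates the property over the reconstructed sequence of global states, so the components themselves never have to synchronize for monitoring purposes.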
A Generalised Theory of Interface Automata, Component Compatibility and Error

Interface theories allow systems designers to reason about the composability and compatibility of concurrent system components. Such theories often extend both de Alfaro and Henzinger’s Interface Automata and Larsen’s Modal Transition Systems, which leads, however, to several issues that are undesirable in practice: an unintuitive treatment of specified unwanted behaviour, a binary compatibility concept that does not scale to multi-component assemblies, and compatibility guarantees that are insufficient for software product lines. In this paper we show that communication mismatches are central to all these problems and, thus, the ability to represent such errors semantically is an important feature of an interface theory. Accordingly, we present the error-aware interface theory EMIA, where the above shortcomings are remedied by introducing explicit fatal error states. In addition, we prove via a Galois insertion that EMIA is a conservative generalisation of the established MIA (Modal Interface Automata) theory.

Sascha Fendrich, Gerald Lüttgen
On Implementing a Monitor-Oriented Programming Framework for Actor Systems

We examine the challenges of implementing a framework for automating Monitor-Oriented Programming in the context of actor-based systems. The inherent modularity resulting from delineations induced by actors makes such systems well suited to this style of programming because monitors can surgically target parts of the system without affecting the computation in other parts. At the same time, actor systems pose new challenges for the instrumentation of the respective monitoring observations and actions, due to the intrinsic asynchrony and encapsulation that characterise the actor model. We discuss a prototype implementation that tackles these challenges for the case of Erlang OTP, an industry-strength platform for building actor-based concurrent systems. We also demonstrate the effectiveness of our Monitor-Oriented Programming framework by using it to augment the functionality of third-party software written in Erlang.

Ian Cassar, Adrian Francalanza
Towards a Thread-Local Proof Technique for Starvation Freedom

Today, numerous elaborate algorithms for the effective synchronization of concurrent processes operating on shared memory exist. Of particular importance for the verification of such concurrent algorithms are thread-local proof techniques, which allow one to reason about the sequential program of one process individually. While thread-local verification of safety properties has received a lot of attention in recent years, this is less so for liveness properties, in particular for liveness under the assumption of fairness. In this paper, we propose a new thread-local proof technique for starvation freedom. Starvation freedom states that under a weakly fair schedule every process will eventually make progress. We contrast our new proof technique with existing global proof techniques based on ranking functions, and employ it exemplarily for the proof of starvation freedom of ticket locks, the standard locking algorithm of the Linux kernel.

Gerhard Schellhorn, Oleg Travkin, Heike Wehrheim
Reasoning About Inheritance and Unrestricted Reuse in Object-Oriented Concurrent Systems

Code reuse is a fundamental aspect of object-oriented programs, and in particular, the mechanisms of inheritance and late binding provide great flexibility in code reuse, without semantic limitations other than type-correctness. However, modular reasoning about late binding and inheritance is challenging, and formal reasoning approaches place semantic restrictions on code reuse in order to preserve properties from superclasses. The overall aim of this paper is to develop a formal framework for modular reasoning about classes and inheritance, supporting unrestricted reuse of code, as well as of specifications. The main contribution is a Hoare-style logic supporting free reuse, worked out for a high-level concurrent object-oriented language. We also show results on verification reuse, based on a combination of Hoare-style logic and static checking. An example illustrates the difference from comparable reasoning formalisms.

Olaf Owe
A Formal Model of the Safety-Critical Java Level 2 Paradigm

Safety-Critical Java (SCJ) introduces a new programming paradigm for applications that must be certified. The SCJ specification (JSR 302) is an Open Group Standard, but it does not include verification techniques. Previous work has addressed verification for SCJ Level 1 programs. We support the much more complex SCJ Level 2 programs, which allow the programming of highly concurrent multi-processor applications with Java threads, and wait and notify mechanisms. We present a formal model of SCJ Level 2 that captures the state and behaviour of both SCJ programs and the SCJ API. This is the first formal semantics of the SCJ Level 2 paradigm and is an essential ingredient in the development of refinement-based reasoning techniques for SCJ Level 2 programs. We show how our models can be used to prove properties of the SCJ API and applications.

Matt Luckcuck, Ana Cavalcanti, Andy Wellings

Safety and Liveness

Frontmatter

Open Access

Deciding Monadic Second Order Logic over ω-Words by Specialized Finite Automata

Several automata models are each capable of describing all ω-regular languages. The most well-known such models are Büchi, parity, Rabin, Streett, and Muller automata. We present deeper insights and further enhancements to a lesser-known model. This model was chosen and the enhancements developed with a specific goal: decide monadic second order logic (MSO) over infinite words more efficiently. MSO over various structures is of interest in different applications, mostly in formal verification. Due to its inherent high complexity, most solvers are designed to work only for subsets of MSO. The most notable full implementation of the decision procedure is MONA, which decides MSO formulae over finite words and trees. To obtain a suitable automaton model, we further studied a representation of ω-regular languages by regular languages, which we call loop automata. We developed an efficient algorithm for homomorphisms in this representation, which is essential for deciding MSO. Aside from the algorithm for homomorphisms, all algorithms for deciding MSO with loop automata are simple. Minimization of loop automata is basically the same as minimization of deterministic finite automata. Efficient minimization is an important feature of an efficient decision procedure for MSO. Together, this should theoretically make loop automata a well-suited model for efficiently deciding MSO over ω-words. Our experimental evaluation suggests that loop automata are indeed well suited for deciding MSO over ω-words efficiently.

Stephan Barth
Property Preservation for Extension Patterns of State Transition Diagrams

In this paper, we consider extensions of state machines with additional functionality. We analyze how typical safety or liveness properties are affected when extending or refining the model. We identify several classes of extensions where properties are preserved. The extensions include adding new transitions at a state, refining transitions, as well as adding failure cases and adding additional, new functionality. We propose new concepts for refinements based on elimination of added behavior with context to capture property-preserving extensions in a precise and (mostly) syntactic way.

Christian Prehofer
Symbolic Reachability Analysis of B Through ProB and LTSmin

We present a symbolic reachability analysis approach for B that can provide a significant speedup over traditional explicit state model checking. The symbolic analysis is implemented by linking ProB to LTSmin, a high-performance language-independent model checker. The link is achieved via LTSmin's Pins interface, allowing ProB to benefit from LTSmin's analysis algorithms, while only writing a few hundred lines of glue code, along with a bridge between ProB and C using ØMQ. ProB supports model checking of several formal specification languages such as B, Event-B, Z and TLA+. Our experiments are based on a wide variety of B-Method and Event-B models to demonstrate the efficiency of the new link. Among the tested categories are state space generation and deadlock detection; but action detection and invariant checking are also feasible in principle. In many cases we observe speedups of several orders of magnitude. We also compare the results with other approaches for improving model checking, such as partial order reduction or symmetry reduction. We thus provide a new scalable, symbolic analysis algorithm for the B-Method and Event-B, along with a platform to integrate other model checking improvements via LTSmin in the future.

Jens Bendisposto, Philipp Körner, Michael Leuschel, Jeroen Meijer, Jaco van de Pol, Helen Treharne, Jorden Whitefield

Model Learning

Frontmatter
Enhancing Automata Learning by Log-Based Metrics

We study a general class of distance metrics for deterministic Mealy machines. The metrics are induced by weight functions that specify the relative importance of input sequences. By choosing an appropriate weight function we may fine-tune a metric so that it captures some intuitive notion of quality. In particular, we present a metric that is based on the minimal number of inputs that must be provided to obtain a counterexample, starting from states that can be reached by a given set of logs. For any weight function, we may boost the performance of existing model learning algorithms by introducing an extra component, which we call the Comparator. Preliminary experiments show that use of the Comparator yields a significant reduction of the number of inputs required to learn correct models, compared to current state-of-the-art algorithms. In existing automata learning algorithms, the quality of subsequent hypotheses may decrease. Generalising a result of Smetsers et al., we show that the quality of hypotheses that are generated by the Comparator never decreases.

Petra van den Bos, Rick Smetsers, Frits Vaandrager
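A distance metric of the kind the abstract describes — driven by the minimal number of inputs needed to obtain a counterexample — can be sketched for two toy Mealy machines. The machines, the input alphabet, and the 1/2^n weighting below are our own illustration, not the paper's exact log-based definition:

```python
from collections import deque

# Deterministic Mealy machines over inputs {a, b}:
# (state, input) -> (next_state, output). M2 differs from M1 only
# in the output produced after reading "ab".
M1 = {(0, "a"): (1, "x"), (0, "b"): (0, "y"),
      (1, "a"): (1, "x"), (1, "b"): (0, "y")}
M2 = {(0, "a"): (1, "x"), (0, "b"): (0, "y"),
      (1, "a"): (1, "x"), (1, "b"): (0, "z")}

def distance(m1, m2, inputs=("a", "b")):
    """BFS over the product machine for the shortest input sequence on
    which the outputs differ; the distance is 2^-len of that sequence,
    so machines that disagree quickly are 'far apart'."""
    queue, seen = deque([((0, 0), 0)]), {(0, 0)}
    while queue:
        (s1, s2), depth = queue.popleft()
        for i in inputs:
            (n1, o1), (n2, o2) = m1[(s1, i)], m2[(s2, i)]
            if o1 != o2:
                return 2.0 ** -(depth + 1)
            if (n1, n2) not in seen:
                seen.add((n1, n2))
                queue.append((((n1, n2)), depth + 1))
    return 0.0  # no distinguishing sequence: the machines are equivalent

print(distance(M1, M2))  # shortest distinguishing sequence is "ab" -> 0.25
```

A log-induced weight function would replace the uniform 1/2^n weighting with one that favours input sequences actually observed in the logs.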
Refactoring of Legacy Software Using Model Learning and Equivalence Checking: An Industrial Experience Report

Many companies struggle with large amounts of legacy software that is difficult to maintain and to extend. Refactoring legacy code typically requires large efforts and introduces serious risks because often crucial business assets are hidden in legacy components. We investigate the support of formal techniques for the rejuvenation of legacy embedded software, concentrating on control components. Model learning and equivalence checking are used to improve a new implementation of a legacy control component. Model learning is applied to both the old and the new implementation. The resulting models are compared using an equivalence check of a model checker. We report on our experiences with this approach at Philips. By gradually increasing the set of input stimuli, we obtained implementations of a power control service for which the learned behaviour is equivalent.

Mathijs Schuts, Jozef Hooman, Frits Vaandrager
On Robust Malware Classifiers by Verifying Unwanted Behaviours

Machine-learning-based Android malware classifiers perform badly on the detection of new malware, in particular, when they take API calls and permissions as input features, which are the best performing features known so far. This is mainly because signature-based features are very sensitive to the training data and cannot capture general behaviours of identified malware. To improve the robustness of classifiers, we study the problem of learning and verifying unwanted behaviours abstracted as automata. They are common patterns shared by malware instances but rarely seen in benign applications, e.g., intercepting and forwarding incoming SMS messages. We show that by taking the verification results against unwanted behaviours as input features, the classification performance of detecting new malware is improved dramatically. In particular, the precision and recall are respectively 8 and 51 points better than those using API calls and permissions, measured against industrial datasets collected across several years. Our approach integrates several methods: formal methods, machine learning and text mining techniques. It is the first to automatically generate unwanted behaviours for Android malware detection. We also demonstrate unwanted behaviours constructed for well-known malware families. They compare well to those described in human-authored descriptions of these families.

Wei Chen, David Aspinall, Andrew D. Gordon, Charles Sutton, Igor Muttik

SAT and SMT Solving

Frontmatter
Efficient Deadlock-Freedom Checking Using Local Analysis and SAT Solving

We build upon established techniques of deadlock analysis by formulating a new sound but incomplete framework for deadlock freedom analysis that tackles some sources of imprecision of current incomplete techniques. Our new deadlock candidate criterion is based on constraints derived from the analysis of the state space of pairs of components. This new characterisation represents an improvement in the accuracy of current incomplete techniques; in particular, the so-called non-hereditary deadlock-free systems (i.e. deadlock-free systems that have a deadlocking subsystem), which are neglected by most incomplete techniques, are tackled by our framework. Furthermore, we demonstrate how SAT checkers can be used to efficiently implement our framework in a way that, typically, scales better than current techniques for deadlock analysis. This is demonstrated by a series of practical experiments.

Pedro Antonino, Thomas Gibson-Robinson, A. W. Roscoe

Open Access

SMT Solvers for Validation of B and Event-B Models

We present an integration of the constraint solving kernel of the ProB model checker with the SMT solver Z3. We apply the combined solver to B and Event-B predicates, featuring higher-order datatypes and constructs like set comprehensions. To do so we rely on the finite set logic of Z3 and provide a new translation from B to Z3, better suited for constraint solving. Predicates can then be solved by the two solvers working hand in hand: constraints are set up in both solvers simultaneously and (intermediate) results are transferred. We thus combine a constraint logic programming based solver with a DPLL(T) based solver into a single procedure. The improved constraint solver finds application in many validation tasks, from the animation of implicit specifications, to test case generation, bounded and symbolic model checking, and the disproving of proof obligations. We conclude with an empirical evaluation of our approach focusing on two dimensions: comparing low- and high-level encodings of B, as well as comparing pure ProB to ProB combined with Z3.

Sebastian Krings, Michael Leuschel
Avoiding Medication Conflicts for Patients with Multimorbidities

Clinical pathways are care plans which detail essential steps in the care of patients with a specific clinical problem, usually a chronic disease. A pathway includes recommendations of medications prescribed at different stages of the care plan. For patients with three or more chronic diseases (known as multimorbidities) the multiple pathways have to be applied together. One common problem for such patients is the adverse interaction between medications given for different diseases. This paper proposes a solution for avoiding medication conflicts for patients with multimorbidities through the use of formal methods. We introduce the notion of a pharmaceutical graph to capture the medications associated with different stages of a pathway. We then explore the use of an optimising SMT solver (Z3) to quickly find the set of medications with the minimal number and severity of conflicts, which is assumed to be the safest. We evaluate the approach on a well-known case of an elderly patient with five multimorbidities.

Andrii Kovalov, Juliana Küster Filipe Bowles
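The underlying optimisation problem — pick one medication per pathway so that the summed severity of pairwise conflicts is minimal — can be stated in a few lines. The drug options and severities below are invented for illustration, and we use brute-force enumeration where the paper uses Z3's optimising solver:

```python
from itertools import product

options = {                      # one medication must be chosen per disease
    "diabetes":     ["metformin", "drugA"],
    "hypertension": ["drugB", "drugC"],
}
severity = {                     # hypothetical pairwise conflict severities
    frozenset(["metformin", "drugB"]): 3,
    frozenset(["drugA", "drugC"]): 1,
}

def best_plan(options, severity):
    """Minimise the total severity of conflicts across the chosen set."""
    diseases = list(options)
    best = None
    for choice in product(*(options[d] for d in diseases)):
        cost = sum(severity.get(frozenset([m, n]), 0)
                   for i, m in enumerate(choice)
                   for n in choice[i + 1:])
        if best is None or cost < best[0]:
            best = (cost, dict(zip(diseases, choice)))
    return best

print(best_plan(options, severity))  # a conflict-free plan exists here
```

An SMT encoding would introduce one selection variable per disease and minimise the same cost term symbolically, which scales far beyond what enumeration can handle.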

Testing

Frontmatter
Temporal Random Testing for Spark Streaming

With the rise of Big Data technologies, distributed stream processing systems (SPS) have gained popularity in the last years. Among them, Spark Streaming stands out as a particularly attractive option with a growing adoption in the industry. In this work we explore the combination of temporal logic and property-based testing for testing Spark Streaming programs, by adding temporal logic operators to ScalaCheck generators and properties. This allows us to deal with the time component that complicates the testing of Spark Streaming programs and SPS in general. In particular we propose a discrete-time linear temporal logic for finite words, which allows a timeout to be associated with each temporal operator in order to increase the expressiveness of generators and properties. Finally, we present our prototype with some examples.

Adrián Riesco, Juan Rodríguez-Hortalá
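The "temporal operator with timeout" idea on finite words can be sketched directly: each operator is evaluated only over a bounded prefix of the word. This is our own simplified reading of the logic, not the paper's exact semantics or its ScalaCheck integration:

```python
def eventually_within(t, p, word):
    """F_t p: p holds at some position <= t of the finite word."""
    return any(p(letter) for letter in word[:t + 1])

def always_within(t, p, word):
    """G_t p: p holds at every position <= t of the finite word."""
    return all(p(letter) for letter in word[:t + 1])

# A "word" here is the finite sequence of batches a streaming job saw.
batches = [[], [], [3, 5], [7]]

def nonempty(batch):
    return len(batch) > 0

print(eventually_within(2, nonempty, batches))  # True: batch at index 2
print(always_within(1, nonempty, batches))      # False: batches 0 and 1 empty
```

In a property-based setting, the same timed operators would be applied both to generators (to produce input streams satisfying a timed formula) and to properties (to check output streams against one).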
Combining Static Analysis and Testing for Deadlock Detection

Static deadlock analyzers might be able to verify the absence of deadlock. However, they are usually not able to detect its presence. Moreover, when they detect a potential deadlock cycle, they provide little (or even no) information in their output. Due to the complex flow of concurrent programs, the user might not be able to find the source of the anomalous behaviour from the abstract information computed by static analysis. This paper proposes the combined use of static analysis and testing for effective deadlock detection in asynchronous programs. When the program features a deadlock, our combined use of analysis and testing provides an effective technique to catch deadlock traces. If, instead, the program is deadlock-free but the analyzer inaccurately spotted a potential deadlock, we might be able to prove deadlock freedom.

Elvira Albert, Miguel Gómez-Zamalloa, Miguel Isabel
Fuzzing JavaScript Engine APIs

JavaScript is one of the most widespread programming languages: it drives web applications in browsers, it runs on the server side, and it reaches the embedded world as well. Because of its prevalence, ensuring the correctness of its execution engines is highly important. One of the hardest parts to test in an execution environment is the API exposed by the engine. Thus, in this paper we focus on fuzz testing of JavaScript engine APIs. We formally define a graph representation that is suited to describe type information in an engine, explain how to build such graphs, and describe how to use them for API fuzz testing. Our experimental evaluation of the techniques on a real-life in-use JavaScript engine shows that the introduced approach gives better coverage than available existing fuzzing techniques and could also find valid issues in the tested system.

Renáta Hodován, Ákos Kiss
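The graph-guided fuzzing idea — use type information to chain API calls whose result types match the expected argument types — can be sketched with a toy type graph. The graph structure and the JavaScript function names below are our own illustration, not the paper's formal graph representation:

```python
import random

# A toy type graph: for each type, which API calls produce a value of
# that type, and which API calls consume one.
type_graph = {
    "Number": {"producers": [("Math.random", [])],
               "consumers": ["Math.abs", "Math.sqrt"]},
}

def fuzz_call(graph, rng):
    """Pick a type, build a producer expression for it, then feed the
    result to a randomly chosen consumer of that type, yielding a
    type-correct JavaScript snippet to throw at the engine."""
    t = rng.choice(list(graph))
    producer, args = rng.choice(graph[t]["producers"])
    consumer = rng.choice(graph[t]["consumers"])
    return f"{consumer}({producer}({', '.join(args)}))"

print(fuzz_call(type_graph, random.Random(0)))
```

Because generated snippets are type-correct by construction, they exercise deeper engine behaviour than purely random input, which is where the coverage gains come from.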

Theorem Proving and Constraint Satisfaction

Frontmatter
A Component-Based Approach to Hybrid Systems Safety Verification

We study a component-based approach to simplify the challenges of verifying large-scale hybrid systems. Component-based modeling can be used to split large models into partial models to reduce modeling complexity. Yet, verification results also need to transfer from components to composites. In this paper, we propose a component-based hybrid system verification approach that combines the advantages of component-based modeling (e.g., reduced model complexity) with the advantages of formal verification (e.g., guaranteed contract compliance). Our strategy is to decompose the system into components, verify their local safety individually, and compose them to form an overall system that provably satisfies a global contract, without proving the whole system. We introduce the necessary formalism to define the structure and behavior of components, and a technique for composing components such that safety properties provably emerge from component safety.

Andreas Müller, Stefan Mitsch, Werner Retschitzegger, Wieland Schwinger, André Platzer
Verifying Pointer Programs Using Separation Logic and Invariant Based Programming in Isabelle

In this paper we present a technique, based on invariant-based programming and separation logic, for verifying pointer programs. Invariant-based programs are directed graphs whose nodes are called situations and are labeled by predicates (invariants); the edges are called transitions and are labeled by guarded assignment statements. We represent the situations using Isabelle's locales. A locale is a theory parameterized by constants and types. The constant parameters are used for modeling the state of the program, and the type parameters are used for obtaining generic programs. The invariants are modeled as locale assumptions (axioms), and the transitions are modeled as locale interpretations. For modeling pointer programs we use separation logic. The final result of this construction is a collection of mutually recursive Isabelle functions that implements the program. We apply this technique to Lindstrom's algorithm for traversing a binary tree without additional memory.

Viorel Preoteasa
A Constraint Satisfaction Method for Configuring Non-local Service Interfaces

Modularity and decontextualization are core principles of a service-oriented architecture. However, these principles are often lost in the implementation of services due to rigid service interfaces. This paper focuses on a two-fold problem. On the one hand, the interface description language must be flexible enough to maintain service compatibility in different contexts without modification of the service itself. On the other hand, the composition of interfaces in a distributed environment must be provably consistent. We present a novel approach for automatic interface configuration in distributed services. We introduce a Message Definition Language (MDL), an interface description language with support for subtyping, flow inheritance and polymorphism. The MDL supports configuration variables that link the input and output interfaces of a service and propagate requirements over an application graph. We present an algorithm that solves the interface reconciliation problem using constraint satisfaction, relying on Boolean satisfiability as a subproblem.
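A drastically simplified sketch of the reconciliation idea (not the paper's MDL or algorithm): a producer may optionally emit fields, a consumer requires certain fields, and Boolean configuration variables decide which optional fields are enabled. Reconciliation then reduces to finding a satisfying assignment; here a brute-force search stands in for a SAT solver, and all field names are invented.

```python
from itertools import product

def reconcile(optional_fields, required, provided_always):
    """Find a Boolean assignment to configuration variables such that the
    producer's emitted fields cover the consumer's required fields."""
    vars_ = sorted(optional_fields)
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        emitted = set(provided_always) | {f for f in vars_ if assign[f]}
        if required <= emitted:  # structural subtyping: consumer is satisfied
            return assign
    return None  # constraints are unsatisfiable

cfg = reconcile({"timestamp", "trace_id"}, {"payload", "trace_id"}, {"payload"})
```

The paper's setting is considerably richer (flow inheritance, polymorphism, propagation over an application graph), but the reduction to Boolean satisfiability follows the same pattern.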

Pavel Zaichenkov, Olga Tveretina, Alex Shafarenko

Case Studies

Frontmatter
Rule-Based Consistency Checking of Railway Infrastructure Designs

Railway designs deal with complex, large-scale, safety-critical infrastructures, where formal methods play an important role, especially in verifying the safety of so-called interlockings through model checking. Model checking deals with state change and rather complex properties, usually at considerable computational cost, chiefly in memory (the state-space explosion problem). In contrast, we focus on static infrastructure properties based on design guidelines and heuristics. The purpose is to automate much of the railway engineers' manual work through software that can perform verification on the fly. Accordingly, this paper describes the integration of formal methods into the railway design process by formalizing relevant technical rules and expert knowledge. We employ a variant of Datalog and use the standardized "railway markup language" railML as the basis and exchange format for the formalization. We describe a prototype tool and its (ongoing) integration into industrial railway CAD software developed under the name RailCOMPLETE®. We apply this tool chain in a Norwegian railway project, the upgrade of the Arna railway station.
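The flavor of such rule-based checking can be illustrated with a toy example (the facts, positions, and the rule itself are invented for this sketch; the paper formalizes real design rules in a Datalog variant over railML models):

```python
# Infrastructure facts, as could be extracted from a railML document.
facts = {
    "signal": {("S1", 80.0), ("S2", 460.0)},              # (id, position in metres)
    "switch": {("W1", 100.0), ("W2", 500.0), ("W3", 900.0)},
}

# Datalog-style rule (illustrative):
#   violation(W) :- switch(W, P), not (signal(S, Q), 0 <= P - Q <= 50).
# i.e., every switch must be protected by a signal at most 50 m before it.
def violations(facts, max_dist=50.0):
    return sorted(
        sw_id
        for sw_id, sw_pos in facts["switch"]
        if not any(0 <= sw_pos - sig_pos <= max_dist
                   for _, sig_pos in facts["signal"])
    )
```

Because such rules are purely static queries over the design data, they can be evaluated on the fly inside a CAD tool, without the state-space cost of model checking.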

Bjørnar Luteberget, Christian Johansen, Martin Steffen
Formal Verification of Safety PLC Based Control Software

Programmable Logic Controllers (PLCs) are widely used in industry for various automation tasks. Beyond non-safety applications, PLCs have also become accepted in safety-critical installations, where the cost of failure is high. In these cases not only is the hardware special (so-called fail-safe or safety PLCs), but the software also needs special consideration. Formal verification is a method that can help develop high-quality software for critical tasks. However, such a method must be adapted to the special needs of safety PLCs, which often differ from those of the general PLC development domain. In this paper we propose two complementary solutions for the formal verification of safety-critical PLC programs, based on model checking and on equivalence checking against a formal specification. Furthermore, we present a case study demonstrating our approach.
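A toy stand-in for the equivalence-checking half of such an approach: treat both the formal specification and the PLC program (here hand-translated to a Boolean function) as functions of the inputs, and decide equivalence by exhaustively enumerating all input valuations. The interlock logic below is invented for illustration and is not from the paper.

```python
from itertools import product

def spec(e_stop, guard_closed, start):
    # Safety requirement: the motor may run only if started, the guard is
    # closed, and the emergency stop is not pressed.
    return start and guard_closed and not e_stop

def program(e_stop, guard_closed, start):
    # The implemented PLC logic, translated to a Boolean expression.
    return not e_stop and start and guard_closed

def find_counterexample(f, g, n_inputs=3):
    """Return an input valuation on which f and g disagree, or None."""
    for valuation in product([False, True], repeat=n_inputs):
        if f(*valuation) != g(*valuation):
            return valuation
    return None
```

For realistic safety PLC programs the enumeration would be delegated to a SAT/SMT solver or a model checker, but the correctness criterion, no input on which specification and implementation disagree, is the same.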

Dániel Darvas, István Majzik, Enrique Blanco Viñuela
CloudSDV Enabling Static Driver Verifier Using Microsoft Azure

In this paper we describe our experience of enabling Static Driver Verifier (SDV) to use the Microsoft Azure cloud computing platform. We first describe in detail our architecture and methodology for enabling SDV to operate in the Microsoft Azure cloud. We then present our results of using CloudSDV on single drivers and driver suites under various configurations of the cloud, relative to a local machine. Our experiments show that, using the cloud, we are able to achieve speedups in excess of 20x, which has enabled us to perform mass-scale verification in a matter of hours rather than days. Finally, we present a brief discussion of our results and experiences.

Rahul Kumar, Thomas Ball, Jakob Lichtenberg, Nate Deisinger, Apoorv Upreti, Chetan Bansal
Backmatter
Metadata
Title
Integrated Formal Methods
Edited by
Erika Ábrahám
Marieke Huisman
Copyright Year
2016
Electronic ISBN
978-3-319-33693-0
Print ISBN
978-3-319-33692-3
DOI
https://doi.org/10.1007/978-3-319-33693-0