2019 | Book

Formal Methods – The Next 30 Years

Third World Congress, FM 2019, Porto, Portugal, October 7–11, 2019, Proceedings

About this book

This book constitutes the refereed proceedings of the 23rd Symposium on Formal Methods, FM 2019, held in October 2019 in Porto, Portugal, in the form of the Third World Congress on Formal Methods.

The 44 full papers presented together with 3 invited presentations were carefully reviewed and selected from 129 submissions. The papers are organized in topical sections named: Invited Presentations; Verification; Synthesis Techniques; Concurrency; Model Checking Circus; Model Checking; Analysis Techniques; Specification Languages; Reasoning Techniques; Modelling Languages; Learning-Based Techniques and Applications; Refactoring and Reprogramming; I-Day Presentations.

Table of Contents

Frontmatter

Invited Presentations

Frontmatter
The Human in Formal Methods

Formal methods are invaluable for reasoning about complex systems. As these techniques and tools have improved in expressiveness and scale, their adoption has grown rapidly. Sustaining this growth, however, requires attention to not only the technical but also the human side. In this paper (and accompanying talk), we discuss some of the challenges and opportunities for human factors in formal methods.

Shriram Krishnamurthi, Tim Nelson
Successes in Deployed Verified Software (and Insights on Key Social Factors)

In this talk, we will share our experience in the successful deployment of verified software in a wide range of application domains, and, importantly, our insights on the key factors enabling such successful deployment, in particular the importance of the social aspects of a group working effectively together. Our formally verified microkernel, seL4, is now used across the world in a number of applications that keeps growing. Our experience is that such an uptake is enabled not only by a technical strategy, but also by a tight integration of people from multiple disciplines and with both research and engineering profiles. This requires a strong social culture, with well-designed processes, for working as one unified team. We share our observations on what concrete social structures have been key for us in creating real-world impact from research breakthroughs.

June Andronick

Verification

Frontmatter
Provably Correct Floating-Point Implementation of a Point-in-Polygon Algorithm

The problem of determining whether or not a point lies inside a given polygon occurs in many applications. In air traffic management concepts, a correct solution to the point-in-polygon problem is critical to geofencing systems for Unmanned Aerial Vehicles and in weather avoidance applications. Many mathematical methods can be used to solve the point-in-polygon problem. Unfortunately, a straightforward floating-point implementation of these methods can lead to incorrect results due to round-off errors. In particular, these errors may cause the control flow of the program to diverge with respect to the ideal real-number algorithm. This divergence potentially results in an incorrect point-in-polygon determination even when the point is far from the edges of the polygon. This paper presents a provably correct implementation of a point-in-polygon method that is based on the computation of the winding number. This implementation is mechanically generated from a source-to-source transformation of the ideal real-number specification of the algorithm. The correctness of this implementation is formally verified within the Frama-C analyzer, where the proof obligations are discharged using the Prototype Verification System (PVS).

Mariano M. Moscato, Laura Titolo, Marco A. Feliú, César A. Muñoz
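
To illustrate the kind of computation being verified, the following is a minimal real-number sketch of a winding-number point-in-polygon test, written by us in Python for exposition; it is not the paper's mechanically generated, formally verified implementation. A naive floating-point version of exactly this control flow is what round-off errors can derail near the sign tests.

```python
def side(px, py, ax, ay, bx, by):
    """Sign of the cross product (b - a) x (p - a): > 0 means p is left of a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def winding_number(px, py, polygon):
    """Winding number of (px, py) w.r.t. a closed polygon given as a list of
    (x, y) vertices. A nonzero winding number means the point is inside."""
    wn = 0
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        if ay <= py:
            if by > py and side(px, py, ax, ay, bx, by) > 0:
                wn += 1   # upward crossing with the point left of the edge
        elif by <= py and side(px, py, ax, ay, bx, by) < 0:
            wn -= 1       # downward crossing with the point right of the edge
    return wn

# The centre of the unit square is inside:
assert winding_number(0.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)]) != 0
```
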
Formally Verified Roundoff Errors Using SMT-based Certificates and Subdivisions

When compared to idealized, real-valued arithmetic, finite precision arithmetic introduces unavoidable errors, for which numerous tools compute sound upper bounds. To ensure soundness, providing formal guarantees on these complex tools is highly valuable. In this paper we extend one such formally verified tool, FloVer. First, we extend FloVer with an SMT-based domain using results from an external SMT solver as an oracle. Second, we implement interval subdivision on top of the existing analyses. Our evaluation shows that these extensions allow FloVer to efficiently certify more precise bounds for nonlinear expressions.

Joachim Bard, Heiko Becker, Eva Darulova
Mechanically Verifying the Fundamental Liveness Property of the Chord Protocol

Chord is a protocol providing a scalable distributed hash table over an underlying peer-to-peer network. It is very popular due to its simplicity, performance, and claimed correctness. However, the original version of the Chord maintenance protocol, presented with an informal proof of correctness, has since been shown to be incorrect. It is actually tricky to come up with a provably correct version, as the protocol combines data structures, asynchronous communication, concurrency, and fault tolerance. Additionally, the correctness property amounts to a form of stabilization, a particular kind of liveness property. Previous work only addressed automated proofs of safety, and pen-and-paper or automated but heavily bounded proofs of stabilization. In this article, we report on the first mechanized proof of the liveness property for Chord. Furthermore, our proof addresses the full parameterized version of the protocol, weakens previously devised invariants and operating assumptions, and is essentially automated (requiring limited effort where manual assistance is needed).

Jean-Paul Bodeveix, Julien Brunel, David Chemouil, Mamoun Filali
On the Nature of Symbolic Execution

In this paper, we provide a formal definition of symbolic execution in terms of a symbolic transition system and prove its correctness with respect to an operational semantics which models the execution on concrete values. We first introduce such a formal model for a basic programming language with a statically fixed number of programming variables. This model is extended to a programming language with recursive procedures which are called by a call-by-value parameter mechanism. Finally, we show how to extend this latter model of symbolic execution to arrays and object-oriented languages which feature dynamically allocated variables.

Frank S. de Boer, Marcello Bonsangue
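
As a toy illustration of the idea being formalised (our own sketch, not the paper's transition system), the following Python snippet symbolically executes a one-variable program, threading a symbolic store and a path condition through each branch; symbolic expressions are represented as plain strings over the input symbol `X0`:

```python
def sym_exec(store, pc, prog):
    """Symbolically execute a statement list; fork on branches.
    Yields one (store, path condition) pair per execution path."""
    if not prog:
        yield store, pc
        return
    stmt, rest = prog[0], prog[1:]
    if stmt[0] == "assign":                      # ("assign", var, expr_fn)
        _, var, expr = stmt
        new_store = dict(store)
        new_store[var] = expr(store)             # build a symbolic expression
        yield from sym_exec(new_store, pc, rest)
    elif stmt[0] == "if":                        # ("if", cond_fn, then, else)
        _, cond, then_b, else_b = stmt
        c = cond(store)
        yield from sym_exec(store, pc + [c], then_b + rest)
        yield from sym_exec(store, pc + [f"not({c})"], else_b + rest)

# y := x; if y < 0 then y := -y   -- |x|, executed on the symbolic input X0
prog = [
    ("assign", "y", lambda s: s["x"]),
    ("if", lambda s: f"{s['y']} < 0",
     [("assign", "y", lambda s: f"-({s['y']})")],
     []),
]
for store, pc in sym_exec({"x": "X0"}, [], prog):
    print(store["y"], "under", pc)
# -(X0) under ['X0 < 0']
# X0 under ['not(X0 < 0)']
```
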

Synthesis Techniques

Frontmatter
GR(1)*: GR(1) Specifications Extended with Existential Guarantees

Reactive synthesis is an automated procedure to obtain a correct-by-construction reactive system from its temporal logic specification. GR(1) is an expressive assume-guarantee fragment of LTL that enables efficient synthesis and has been recently used in different contexts and application domains. A common form of providing the system's requirements is through use cases, which are existential in nature. However, GR(1), as a fragment of LTL, is limited to universal properties. In this paper, we introduce GR(1)*, which extends GR(1) with existential guarantees. We show that GR(1)* is strictly more expressive than GR(1) as it enables the expression of guarantees that are inexpressible in LTL. We solve the realizability problem for GR(1)* and present a symbolic strategy construction algorithm for GR(1)* specifications. Importantly, in comparison to GR(1), GR(1)* remains efficient, and induces only a minor additional cost in terms of time complexity, proportional to the extended length of the formula.

Gal Amram, Shahar Maoz, Or Pistiner
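
For orientation, a GR(1) specification is an assume-guarantee implication of the standard shape below (up to the usual strict-realizability subtleties), where $$\theta$$ constrains initial states, $$\rho$$ constrains transitions, and the $$J$$ are justice (fairness) goals; this is textbook GR(1) notation, not an excerpt from the paper:

```latex
\varphi \;=\;
\Bigl(\theta_e \wedge \Box\rho_e \wedge \bigwedge_{i=1}^{m} \Box\Diamond J^e_i\Bigr)
\;\rightarrow\;
\Bigl(\theta_s \wedge \Box\rho_s \wedge \bigwedge_{j=1}^{n} \Box\Diamond J^s_j\Bigr)
```

GR(1)* then adds existential guarantees on the system side, which, as the abstract notes, plain LTL (and hence plain GR(1)) cannot express.
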
Counterexample-Driven Synthesis for Probabilistic Program Sketches

Probabilistic programs are key to dealing with uncertainty in, e.g., controller synthesis. They are typically small but intricate. Their development is complex and error-prone, requiring quantitative reasoning over a myriad of alternative designs. To mitigate this complexity, we adopt counterexample-guided inductive synthesis (CEGIS) to automatically synthesise finite-state probabilistic programs. Our approach leverages efficient model checking, modern SMT solving, and counterexample generation at program level. Experiments on practically relevant case studies show that design spaces with millions of candidate designs can be fully explored using a few thousand verification queries.

Milan Češka, Christian Hensel, Sebastian Junges, Joost-Pieter Katoen
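
The CEGIS loop adapted by the paper can be sketched generically as follows; this is our own pure-Python toy over a finite design space, whereas the paper's verifier is a probabilistic model checker with program-level counterexample generation, and all names here are illustrative:

```python
def cegis(candidates, holds, counterexample):
    """Generic CEGIS loop over a finite candidate space.
    holds(c, x): candidate c behaves correctly on input x.
    counterexample(c): an input violating the spec for c, or None."""
    examples = []                       # counterexamples accumulated so far
    while True:
        # 'Synthesize': pick any candidate consistent with all examples.
        pool = [c for c in candidates
                if all(holds(c, x) for x in examples)]
        if not pool:
            return None                 # design space exhausted
        cand = pool[0]
        # 'Verify': ask the oracle for a counterexample.
        cex = counterexample(cand)
        if cex is None:
            return cand                 # candidate verified
        examples.append(cex)            # refine and iterate

# Toy instance: synthesize a threshold t with (x >= t) <=> (x >= 7) on 0..9.
spec = lambda x: x >= 7
holds = lambda t, x: (x >= t) == spec(x)
cex = lambda t: next((x for x in range(10) if not holds(t, x)), None)
print(cegis(range(10), holds, cex))     # -> 7
```
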
Synthesis of Railway Signaling Layout from Local Capacity Specifications

We present an optimization-based synthesis method for laying out railway signaling components on a given track infrastructure to fulfill capacity specifications. The specifications and the optimization method are designed to be suitable for the scope of signaling construction projects and their associated interlocking systems, but can be adapted to related problems in, e.g., highway, tram, or airport runway designs. The main synthesis algorithm starts from an initial heuristic over-approximation of required signaling components and iterates towards better designs using two main optimization techniques: (1) global simultaneous planning of all operational scenarios using incremental SAT-based optimization to eliminate redundant signaling components, and (2) a derivative-free numerical optimization method using as cost function timing results given by a discrete event simulation engine, applied on all the plans from (1). Synthesizing the entire signaling layout might not always be appropriate in practice, and partial synthesis from an already valid design is a more practical alternative. In consequence, we also focus on the usefulness of the individual optimization steps: SAT-based planning is used to suggest removal of redundant signaling components, whereas numerical optimization of timing results is used to suggest moving signaling components around on the layout, or adding new components. Such changes are suggested to railway engineers using an interactive tool where they can investigate the consequences of applying the various optimizations.

Bjørnar Luteberget, Christian Johansen, Martin Steffen
Pegasus: A Framework for Sound Continuous Invariant Generation

Continuous invariants are an important component in deductive verification of hybrid and continuous systems. Just like discrete invariants are used to reason about correctness in discrete systems without unrolling their loops forever, continuous invariants are used to reason about differential equations without having to solve them. Automatic generation of continuous invariants remains one of the biggest practical challenges to automation of formal proofs of safety in hybrid systems. There are at present many disparate methods available for generating continuous invariants; however, this wealth of diverse techniques presents a number of challenges, with different methods having different strengths and weaknesses. To address some of these challenges, we develop Pegasus: an automatic continuous invariant generator which allows for combinations of various methods, and integrate it with the KeYmaera X theorem prover for hybrid systems. We describe some of the architectural aspects of this integration, comment on its methods and challenges, and present an experimental evaluation on a suite of benchmarks.

Andrew Sogokon, Stefan Mitsch, Yong Kiam Tan, Katherine Cordwell, André Platzer

Concurrency

Frontmatter
A Parametric Rely-Guarantee Reasoning Framework for Concurrent Reactive Systems

Reactive systems are composed of a well-defined set of event handlers by which the system responds to environment stimuli. In concurrent environments, event handlers can interact with the execution of other handlers, such as hardware interruptions in preemptive systems, or other instances of the reactive system in multicore architectures. The rely-guarantee technique is a suitable approach for the specification and verification of reactive systems. However, the languages in existing rely-guarantee implementations are designed only for "pure programs"; simulating reactive systems with them makes the program and the rely-guarantee conditions unnecessarily complicated. In this paper, we decouple the system reactions and programs using a rely-guarantee interface, and develop PiCore, a parametric rely-guarantee framework for concurrent reactive systems. PiCore has a two-level inference system to reason about events and the programs associated with events. The rely-guarantee interface between the two levels allows the reusability of existing languages and their rely-guarantee proof systems for programs. In this work we show how to integrate in PiCore two existing rely-guarantee proof systems. This work has been fully mechanized in Isabelle/HOL. As a case study, we have applied PiCore to the concurrent buddy memory allocation of a real-world OS, providing a verified low-level specification and revealing bugs in the C code.

Yongwang Zhao, David Sanán, Fuyuan Zhang, Yang Liu
Verifying Correctness of Persistent Concurrent Data Structures

Non-volatile memory (NVM), aka persistent memory, is a new memory paradigm that preserves its contents even after power loss. The expected ubiquity of NVM has stimulated interest in the design of persistent concurrent data structures, together with associated notions of correctness. In this paper, we present the first formal proof technique for durable linearizability, which is a correctness criterion that extends linearizability to handle crashes and recovery in the context of NVM. Our proofs are based on refinement of IO-automata representations of concurrent data structures. To this end, we develop a generic procedure for transforming any standard sequential data structure into a durable specification. Since the durable specification only exhibits durably linearizable behaviours, it serves as the abstract specification in our refinement proof. We exemplify our technique on a recently proposed persistent memory queue that builds on Michael and Scott's lock-free queue.

John Derrick, Simon Doherty, Brijesh Dongol, Gerhard Schellhorn, Heike Wehrheim
Compositional Verification of Concurrent Systems by Combining Bisimulations

One approach to verify a property expressed as a modal $$\mu$$-calculus formula on a system with several concurrent processes is to build the underlying state space compositionally (i.e., by minimizing and recomposing the state spaces of individual processes, keeping visible only the relevant actions occurring in the formula), and check the formula on the resulting state space. It was shown previously that, when checking formulas of the $$L_{\mu}^{dsbr}$$ fragment of the $$\mu$$-calculus (consisting of weak modalities only), individual processes can be minimized modulo divergence-preserving branching (divbranching) bisimulation. In this paper, we refine this approach to handle formulas containing both strong and weak modalities, so as to enable a combined use of strong or divbranching bisimulation minimization on concurrent processes, depending on whether or not they contain the actions occurring in the strong modalities of the formula. We extend $$L_{\mu}^{dsbr}$$ with strong modalities and show that the combined minimization approach preserves the truth value of formulas of the extended fragment. We implemented this approach on top of the CADP verification toolbox and demonstrated how it improves the capabilities of compositional verification on realistic examples of concurrent systems.

Frédéric Lang, Radu Mateescu, Franco Mazzanti

Model Checking Circus

Frontmatter
Towards a Model-Checker for Circus

Among several approaches aiming at the correctness of systems, model-checking is one technique to formally assess system models regarding their desired/undesired behavioural properties. We aim at model-checking Circus, the notation that combines Z, CSP, and Morgan's refinement calculus, based on the Unifying Theories of Programming. In this paper, we experiment with approaches for capturing Circus processes in CSP, and for each approach, we evaluate the impact of our decisions on the state space explored as well as the time spent for such a checking using FDR. We also experimented with the consequences of model-checking CSP models that capture both state invariants and preconditions of models.

Artur Oliveira Gomes, Andrew Butterfield
Circus2CSP: A Tool for Model-Checking Circus Using FDR

In this paper, we introduce Circus2CSP, a tool that automatically translates Circus into $$CSP_M$$, with an implementation based on a published manual translation scheme. This scheme includes new and modified translation rules that emerged as a result of experimentation. We addressed issues with FDR state-space explosion, by optimising our models using the Circus Refinement Laws. We briefly describe the usage of Circus2CSP along with a discussion of some experiments comparing our tool with the literature.

Artur Oliveira Gomes, Andrew Butterfield

Model Checking

Frontmatter
How Hard Is Finding Shortest Counter-Example Lassos in Model Checking?

Modern model checkers help system engineers to pinpoint the reason for the faulty behavior of a system by providing counter-example traces. For finite-state systems and $$\omega$$-regular specifications, they come in the form of lassos. Lassos that are unnecessarily long should be avoided, as they make finding the cause for an error in a trace harder. We give the first thorough characterization of the computational complexity of finding the shortest and approximately shortest counter-example lassos in model checking for the full class of $$\omega$$-regular specifications. We show how to build (potentially exponentially larger) tight automata for arbitrary $$\omega$$-regular specifications, which can be used to reduce finding shortest counter-example lassos for some finite-state system to finding a shortest accepting lasso in a (product) Büchi automaton. We then show that even approximating the size of the shortest counter-example lasso is an NP-hard problem for any polynomial approximation function, which demonstrates the hardness of obtaining short counter-examples in practical model checking. Minimizing only the length of the lasso cycle is however possible in polynomial time for a fixed but arbitrary upper limit on the size of strongly connected components in specification automata.

Rüdiger Ehlers
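
For reference, a (counter-example) lasso is an ultimately periodic run consisting of a finite stem followed by a cycle repeated forever; the size being minimized is the combined length of the two parts (standard notation, not quoted from the paper):

```latex
w \;=\; u \cdot v^{\omega},
\qquad u \in \Sigma^{*} \ \text{(stem)},
\quad v \in \Sigma^{+} \ \text{(cycle)},
\qquad \text{size}(w) \;=\; |u| + |v|
```
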
From LTL to Unambiguous Büchi Automata via Disambiguation of Alternating Automata

This paper proposes a new algorithm for the generation of unambiguous Büchi automata (UBA) from LTL formulas. Unlike existing tableau-based LTL-to-UBA translations, our algorithm deals with very weak alternating automata (VWAA) as an intermediate representation. It relies on a new notion of unambiguity for VWAA and a disambiguation procedure for VWAA. We introduce optimizations on the VWAA level and new LTL simplifications targeted at generating small UBA. We report on an implementation of the construction in our tool Duggi and discuss experimental results that compare the automata sizes and computation times of Duggi with the tableau-based LTL-to-UBA translation of the SPOT tool set. Our experiments also cover the analysis of Markov chains under LTL specifications, which is an important application of UBA.

Simon Jantsch, David Müller, Christel Baier, Joachim Klein
Generic Partition Refinement and Weighted Tree Automata

Partition refinement is a method for minimizing automata and transition systems of various types. Recently, we have developed a partition refinement algorithm that is generic in the transition type of the given system and matches the run time of the best known algorithms for many concrete types of systems, e.g. deterministic automata as well as ordinary, weighted, and probabilistic (labelled) transition systems. Genericity is achieved by modelling transition types as functors on sets, and systems as coalgebras. In the present work, we refine the run time analysis of our algorithm to cover additional instances, notably weighted automata and, more generally, weighted tree automata. For weights in a cancellative monoid we match, and for non-cancellative monoids such as (the additive monoid of) the tropical semiring even substantially improve, the asymptotic run time of the best known algorithms. We have implemented our algorithm in a generic tool that is easily instantiated to concrete system types by implementing a simple refinement interface. Moreover, the algorithm and the tool are modular, and partition refiners for new types of systems are obtained easily by composing pre-implemented basic functors. Experiments show that even for complex system types, the tool is able to handle systems with millions of transitions.

Hans-Peter Deifel, Stefan Milius, Lutz Schröder, Thorsten Wißmann
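
The classical special case of such partition refinement is DFA minimisation. A naive quadratic sketch of it in Python (ours, for intuition only; the paper's contribution is the efficient, functor-generic algorithm) splits blocks by successor signatures until a fixed point is reached:

```python
def refine(states, alphabet, delta, accepting):
    """Moore-style partition refinement for a DFA with total transition
    function delta[(q, a)] -> q'. Returns the language-equivalence classes."""
    part = [b for b in (set(accepting), set(states) - set(accepting)) if b]
    while True:
        new_part = []
        for block in part:
            sig = {}
            for q in block:
                # Signature: index of the block each a-successor lands in.
                key = tuple(next(i for i, b in enumerate(part)
                                 if delta[(q, a)] in b) for a in alphabet)
                sig.setdefault(key, set()).add(q)
            new_part.extend(sig.values())
        if len(new_part) == len(part):   # no block was split: fixed point
            return new_part
        part = new_part

# States 1 and 2 accept the same language and end up in one class:
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 2,
         (0, 'b'): 0, (1, 'b'): 0, (2, 'b'): 0}
print(refine({0, 1, 2}, 'ab', delta, accepting={1, 2}))  # [{1, 2}, {0}]
```
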
Equilibria-Based Probabilistic Model Checking for Concurrent Stochastic Games

Probabilistic model checking for stochastic games enables formal verification of systems that comprise competing or collaborating entities operating in a stochastic environment. Despite good progress in the area, existing approaches focus on zero-sum goals and cannot reason about scenarios where entities are endowed with different objectives. In this paper, we propose probabilistic model checking techniques for concurrent stochastic games based on Nash equilibria. We extend the temporal logic rPATL (probabilistic alternating-time temporal logic with rewards) to allow reasoning about players with distinct quantitative goals, which capture either the probability of an event occurring or a reward measure. We present algorithms to synthesise strategies that are subgame perfect social welfare optimal Nash equilibria, i.e., where there is no incentive for any players to unilaterally change their strategy in any state of the game, whilst the combined probabilities or rewards are maximised. We implement our techniques in the PRISM-games tool and apply them to several case studies, including network protocols and robot navigation, showing the benefits compared to existing approaches.

Marta Kwiatkowska, Gethin Norman, David Parker, Gabriel Santos

Analysis Techniques

Frontmatter
Abstract Execution

We propose a new static software analysis principle called Abstract Execution, generalizing Symbolic Execution: While the latter analyzes all possible execution paths of a specific program, Abstract Execution analyzes a partially unspecified program by permitting abstract symbols representing unknown contexts. For each abstract symbol, we faithfully represent each possible concrete execution resulting from its substitution with concrete code. There is a wide range of applications of Abstract Execution, especially for verifying relational properties of schematic programs. We implemented Abstract Execution in a deductive verification framework and proved correctness of eight well-known statement-level refactoring rules, including two with loops. For each refactoring we characterize the preconditions that make it semantics-preserving. Most preconditions are not mentioned in the literature.

Dominic Steinhöfel, Reiner Hähnle
Static Analysis for Detecting High-Level Races in RTOS Kernels

We propose a static analysis based approach for detecting high-level races in RTOS kernels popularly used in safety-critical embedded software. High-level races are indicators of atomicity violations and can lead to erroneous software behaviour with serious consequences. Hitherto, techniques for detecting high-level races have relied on model-checking approaches, which are inefficient and a priori unsound. In contrast, we propose a technique based on static analysis that is both efficient and sound. The technique is based on the notion of disjoint blocks recently introduced in Chopra et al. [5]. We evaluate our technique on three popular RTOS kernels and show that it is effective in detecting races, many of them harmful, with a high rate of precision.

Abhishek Singh, Rekha Pai, Deepak D’Souza, Meenakshi D’Souza
Parallel Composition and Modular Verification of Computer Controlled Systems in Differential Dynamic Logic

Computer-Controlled Systems (CCS) are a subclass of hybrid systems where the periodic relation of control components to time is paramount. Since they additionally are at the heart of many safety-critical devices, it is of primary importance to correctly model such systems and to ensure they function correctly according to safety requirements. Differential dynamic logic $$d\mathcal{L}$$ is a powerful logic to model hybrid systems and to prove their correctness. We contribute a component-based modeling and reasoning framework to $$d\mathcal{L}$$ that separates models into components with timing guarantees, such as reactivity of controllers and controllability of continuous dynamics. Components operate in parallel, with coarse-grained interleaving, periodic execution and communication. We present techniques to automate system safety proofs from isolated, modular, and possibly mechanized proofs of component properties parameterized with timing characteristics.

Simon Lunel, Stefan Mitsch, Benoit Boyer, Jean-Pierre Talpin
An Axiomatic Approach to Liveness for Differential Equations

This paper presents an approach for deductive liveness verification for ordinary differential equations (ODEs) with differential dynamic logic. Numerous subtleties complicate the generalization of well-known discrete liveness verification techniques, such as loop variants, to the continuous setting. For example, ODE solutions may blow up in finite time or their progress towards the goal may converge to zero. Our approach handles these subtleties by successively refining ODE liveness properties using ODE invariance properties which have a well-understood deductive proof theory. This approach is widely applicable: we survey several liveness arguments in the literature and derive them all as special instances of our axiomatic refinement approach. We also correct several soundness errors in the surveyed arguments, which further highlights the subtlety of ODE liveness reasoning and the utility of our deductive approach. The library of common refinement steps identified through our approach enables both the sound development and justification of new ODE liveness proof rules from our axioms.

Yong Kiam Tan, André Platzer
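
In $$d\mathcal{L}$$ notation (standard, not specific to this paper), the properties in question are diamond modalities over ODEs, in contrast to the box modalities used for invariance:

```latex
\underbrace{\langle x' = f(x) \,\&\, Q \rangle\, P}_{\text{liveness: some solution reaches } P}
\qquad\text{vs.}\qquad
\underbrace{[\, x' = f(x) \,\&\, Q \,]\, P}_{\text{invariance: all solutions stay in } P}
```

The refinement approach of the paper derives the former from suitably chosen instances of the latter, whose proof theory is well understood.
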
Local Consistency Check in Synchronous Dataflow Models

Dataflow graphs are typically used to model signal processing applications. Consistency is a necessary condition for the existence of a dataflow graph schedule using bounded memory. Existing methods to check this property are based on a static analysis: at every modification of the dataflow graph, the consistency property has to be checked again on the entire graph, after its construction. In this paper, we argue that for each modification, consistency can be checked on the modified graph elements only, and during the graph's construction. We propose an alternative method that can be applied either on the entire graph, or locally, at each modification of a dataflow graph. For both cases, we analyse our algorithm's advantages and compare its performance to an existing algorithm. For the experimental setup, we generate random graphs with worst-case instances and realistic instances. Our theoretical analysis shows that the proposed algorithm can reduce the number of operations required for the consistency verification, even on entire graphs. The experimental results show that our algorithm outperforms the state-of-the-art algorithm on the considered benchmark.

Dina Irofti, Paul Dubrulle
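
Concretely, consistency of a synchronous dataflow graph amounts to solvability of its balance equations: for every edge on which actor a produces p tokens and actor b consumes c tokens per firing, a repetition vector r of positive rationals must satisfy r(a)·p = r(b)·c. A minimal sketch follows (ours, assuming the edge list grows a single connected component; it is not the paper's incremental algorithm):

```python
from fractions import Fraction

def consistent(edges):
    """SDF consistency check. Edges are (src, prod, dst, cons) tuples.
    Propagates relative firing rates; any contradiction means inconsistency."""
    rate = {}                                   # actor -> relative firing rate
    for src, p, dst, c in edges:
        if src not in rate and dst not in rate:
            rate[src] = Fraction(1)             # anchor a new component
        if src in rate and dst not in rate:
            rate[dst] = rate[src] * p / c       # r(dst) = r(src) * p / c
        elif dst in rate and src not in rate:
            rate[src] = rate[dst] * c / p
        elif rate[src] * p != rate[dst] * c:    # both known: balance must hold
            return False
    return True

print(consistent([("a", 2, "b", 3), ("b", 3, "a", 2)]))  # True, r = (3, 2)
print(consistent([("a", 1, "b", 1), ("a", 2, "b", 1)]))  # False
```
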
Gray-Box Monitoring of Hyperproperties

Many important system properties, particularly in security and privacy, cannot be verified statically. Therefore, runtime verification is an appealing alternative. Logics for hyperproperties, such as HyperLTL, support a rich set of such properties. We first show that black-box monitoring of HyperLTL is in general infeasible, and suggest a gray-box approach. Gray-box monitoring implies performing analysis of the system at run-time, which brings new limitations to monitorability (the feasibility of solving the monitoring problem). Thus, as another contribution of this paper, we refine the classic notions of monitorability, both for trace properties and hyperproperties, taking into account the computability of the monitor. We then apply our approach to monitor a privacy hyperproperty called distributed data minimality, expressed as a HyperLTL property, by using an SMT-based static verifier at runtime.

Sandro Stucki, César Sánchez, Gerardo Schneider, Borzoo Bonakdarpour
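
A canonical example of a hyperproperty is observational determinism, which relates pairs of traces and is therefore not expressible as a property of any single trace. One common HyperLTL formulation (standard in the literature, not quoted from the paper) is:

```latex
\forall \pi.\, \forall \pi'.\;
\Box\,\bigl(I_{\pi} = I_{\pi'}\bigr) \;\rightarrow\; \Box\,\bigl(O_{\pi} = O_{\pi'}\bigr)
```

i.e., any two executions that always agree on (low) inputs must always agree on (low) outputs.
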
Quantitative Verification of Numerical Stability for Kalman Filters

Kalman filters are widely used for estimating the state of a system based on noisy or inaccurate sensor readings, for example in the control and navigation of vehicles or robots. However, numerical instability may lead to divergence of the filter, and establishing robustness against such issues can be challenging. We propose novel formal verification techniques and software to perform a rigorous quantitative analysis of the effectiveness of Kalman filters. We present a general framework for modelling Kalman filter implementations operating on linear discrete-time stochastic systems, and techniques to systematically construct a Markov model of the filter’s operation using truncation and discretisation of the stochastic noise model. Numerical stability properties are then verified using probabilistic model checking. We evaluate the scalability and accuracy of our approach on two distinct probabilistic kinematic models and several implementations of Kalman filters.

Alexandros Evangelidis, David Parker
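
For orientation, the recurrence such implementations realize is the textbook Kalman predict/update cycle; a scalar sketch in Python (ours, not the paper's model or tool):

```python
def kalman_1d(measurements, x0, p0, q, r, a=1.0, h=1.0):
    """Scalar Kalman filter: estimate x with error variance p, for dynamics
    x_{k+1} = a * x_k (process noise q) observed as z = h * x (noise r)."""
    x, p = x0, p0
    for z in measurements:
        x, p = a * x, a * p * a + q       # predict
        k = p * h / (h * p * h + r)       # Kalman gain
        x = x + k * (z - h * x)           # correct with the innovation
        p = (1.0 - k * h) * p             # updated error variance
        yield x, p

# Estimating a constant signal near 5.0 from noisy readings:
for x, p in kalman_1d([4.9, 5.2, 5.0, 4.8], x0=0.0, p0=1.0, q=1e-4, r=0.25):
    print(round(x, 3), round(p, 4))
```

The numerical instability analysed in the paper arises in the matrix analogue of the variance update, where finite precision is well known to be able to make the computed covariance lose positive definiteness.
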
Concolic Testing Heap-Manipulating Programs

Concolic testing is a test generation technique which works effectively by integrating random test generation and symbolic execution. Existing concolic testing engines focus on numeric programs. Heap-manipulating programs make extensive use of complex heap objects like trees and lists. Testing such programs is challenging for multiple reasons. Firstly, test inputs for such programs are required to satisfy non-trivial constraints which must be specified precisely. Secondly, precisely encoding and solving path conditions in such programs are challenging and often expensive. In this work, we propose the first concolic testing engine, called CSF, for heap-manipulating programs based on separation logic. CSF effectively combines specification-based testing and concolic execution for test input generation. It is evaluated on a set of challenging heap-manipulating programs. The results show that CSF generates valid test inputs with high coverage efficiently. Furthermore, we show that CSF can potentially be used in combination with precondition inference tools to reduce the user effort.

Long H. Pham, Quang Loc Le, Quoc-Sang Phan, Jun Sun

Specification Languages

Frontmatter
Formal Semantics Extraction from Natural Language Specifications for ARM

This paper proposes a method to systematically extract the formal semantics of ARM instructions from their natural language specifications. Although ARM is based on a RISC architecture and the number of instructions is relatively small, an abundance of variations exists across various series, including Cortex-A, Cortex-M, and Cortex-R. Thus, the semi-automatic formalisation of the semantics of these rather simple instructions reduces the tedious human effort required for tool development, e.g., for symbolic execution. We concentrate on six variations of the ARM Cortex-M series: M0, M0+, M3, M4, M7, and M33, aiming at covering IoT malware. Our systematic approach consists of semantics interpretation by applying translation rules, augmented by sentence-similarity analysis to recognise the modification of flags. Among 1039 collected specifications, the formal semantics of 662 instructions have been successfully extracted using only 228 manually prepared rules. The extracted semantics are then used to build a preliminary dynamic symbolic execution tool for Cortex-M, called Corana. We experimentally observe that Corana is capable of effectively tracing IoT malware in the presence of obfuscation techniques like indirect jumps, as well as correctly detecting dead conditional branches, which are regarded as opaque predicates.

Anh V. Vu, Mizuhito Ogawa
GOSPEL—Providing OCaml with a Formal Specification Language

This paper introduces GOSPEL, a behavioral specification language for OCaml. It is designed to enable modular verification of data structures and algorithms. GOSPEL is a contract-based, strongly typed language, with a formal semantics defined by means of translation into Separation Logic. Compared with writing specifications directly in Separation Logic, GOSPEL provides a high-level syntax that greatly improves conciseness and makes it accessible to programmers with no familiarity with Separation Logic. Although GOSPEL has been developed for specifying OCaml code, we believe that many aspects of its design could apply to other programming languages. This paper presents the design and semantics of GOSPEL, and reports on its application for the development of a formally verified library of general-purpose OCaml data structures.

Arthur Charguéraud, Jean-Christophe Filliâtre, Cláudio Lourenço, Mário Pereira
Unification in Matching Logic

Matching Logic is a framework for specifying programming language semantics and reasoning about programs. Its formulas are called patterns and are built with variables, symbols, connectives and quantifiers. A pattern is a combination of structural components (term patterns), which must be matched, and constraints (predicate patterns), which must be satisfied. Dealing with more than one structural component in a pattern could be cumbersome because it involves multiple matching operations. A source for getting patterns with many structural components is the conjunction of patterns. Here, we propose a method that uses a syntactic unification algorithm to transform conjunctions of structural patterns into equivalent patterns having only one structural component and some additional constraints. We prove the soundness and the completeness of our approach, and we provide sound strategies to generate certificates for the equivalences.

Andrei Arusoaie, Dorel Lucanu
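
For intuition, the syntactic unification that the method builds on computes a most general substitution making two terms equal; a compact generic sketch (ours, not the paper's matching-logic-specific algorithm):

```python
def walk(u, subst):
    """Follow variable bindings to a representative term."""
    while isinstance(u, str) and u in subst:
        u = subst[u]
    return u

def occurs(v, t, subst):
    """Occurs check: does variable v appear in term t?"""
    t = walk(t, subst)
    return v == t or (isinstance(t, tuple)
                      and any(occurs(v, a, subst) for a in t[1:]))

def unify(s, t, subst=None):
    """Syntactic first-order unification. Variables are strings starting
    with '?'; applications are tuples (symbol, arg1, ..., argn).
    Returns a substitution dict, or None if the terms are not unifiable."""
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str) and s.startswith('?'):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str) and t.startswith('?'):
        return unify(t, s, subst)
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# unify cons(?x, ?y) with cons(1, nil):
print(unify(('cons', '?x', '?y'), ('cons', ('1',), ('nil',))))
# {'?x': ('1',), '?y': ('nil',)}
```
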
Embedding High-Level Formal Specifications into Applications

The common formal methods workflow consists of formalising a model followed by applying model checking and proof techniques. Once an appropriate level of certainty is reached, code generators are used to obtain executable code. In this paper, we propose a different approach: instead of generating code from formal models, it is also possible to embed a model checker or animator into applications in order to use the formal models themselves at runtime. We present the enabling technology ProB 2.0, a Java API to the ProB animator and model checker. We describe several case studies that use ProB 2.0 to interact with a formal specification at runtime.

Philipp Körner, Jens Bendisposto, Jannik Dunkelau, Sebastian Krings, Michael Leuschel

Reasoning Techniques

Frontmatter
Value-Dependent Information-Flow Security on Weak Memory Models

Weak memory models implemented on modern multicore processors are known to affect the correctness of concurrent code. They can also affect whether or not it is secure. This is particularly the case in programs where the security levels of variables are value-dependent, i.e., depend on the values of other variables. In this paper, we illustrate how instruction reordering allowed by contemporary multicore processors leads to vulnerabilities in such programs, and present a compositional, timing-sensitive information-flow logic which can be used to detect such vulnerabilities. The logic allows step-local reasoning (one instruction at a time) about a thread’s security by tracking information about dependencies between instructions which guarantee their order of occurrence. Program security can then be established from individual thread security using rely/guarantee reasoning.

Graeme Smith, Nicholas Coughlin, Toby Murray
Reasoning Formally About Database Queries and Updates

This paper explores formal verification in the area of database technology, in particular how to reason about queries and updates in a database system. The formalism is sufficiently general to be applicable to relational and graph databases. We first define a domain-specific language consisting of nested query and update primitives, and give its operational semantics. Queries are in full first-order logic. The problem we try to solve is whether a database satisfying a given pre-condition will satisfy a given post-condition after execution of a given sequence of queries and updates. We propose a weakest-precondition calculus and prove its correctness. We finally examine a restriction of our framework that produces formulas in the guarded fragment of predicate logic and thus leads to a decidable proof problem.

Jon Haël Brenas, Rachid Echahed, Martin Strecker
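
As a flavour of such a calculus: for an update inserting a tuple $$\vec{t}$$ into a relation $$R$$, the weakest precondition is obtained by substituting the enlarged relation into the post-condition, rewriting every atom over $$R$$ (our schematic rendering, not the paper's exact rule):

```latex
\mathsf{wp}\bigl(\mathsf{insert}\ R(\vec{t}\,),\; Q\bigr)
\;=\;
Q\bigl[\, R(\vec{x}) \;\mapsto\; R(\vec{x}) \vee \vec{x} = \vec{t} \,\bigr]
```
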
Abstraction and Subsumption in Modular Verification of C Programs

Representation predicates enable data abstraction in separation logic, but when the same concrete implementation may need to be abstracted in different ways, one needs a notion of subsumption. We demonstrate function-specification subtyping, analogous to subtyping, with a subsumption rule: if $$\phi$$ is a subtype of $$\psi$$, that is $$\phi <: \psi$$, then $$x:\phi$$ implies $$x:\psi$$, meaning that any function satisfying specification $$\phi$$ can be used wherever a function satisfying $$\psi$$ is demanded. We extend previous notions of Hoare-logic sub-specification, which already included parameter adaptation, to include framing (necessary for separation logic) and impredicative bifunctors (necessary for higher-order functions, i.e., function pointers). We show intersection specifications, with the expected relation to subtyping. We show how this enables compositional modular verification of the functional correctness of C programs, in Coq, with foundational machine-checked proofs of soundness.

Lennart Beringer, Andrew W. Appel

Modelling Languages

Frontmatter
IELE: A Rigorously Designed Language and Tool Ecosystem for the Blockchain

This paper proposes IELE, an LLVM-style language, together with a tool ecosystem for implementing and formally reasoning about smart contracts on the blockchain. IELE was designed by specifying its semantics formally in the K framework. Its implementation, an IELE virtual machine (VM), as well as a formal verification tool for IELE smart contracts, were automatically generated from the formal specification. The automatically generated formal verification tool allows us to formally verify smart contracts without any gap between the verifier and the actual VM. A compiler from Solidity, the predominant high-level language for smart contracts, to IELE has also been (manually) implemented, so Ethereum contracts can now also be executed on IELE.

Theodoros Kasampalis, Dwight Guth, Brandon Moore, Traian Florin Șerbănuță, Yi Zhang, Daniele Filaretti, Virgil Șerbănuță, Ralph Johnson, Grigore Roşu
APML: An Architecture Proof Modeling Language

To address the increasing size and complexity of modern software systems, compositional verification separates the verification of single components from the verification of their composition. In architecture-based verification, the former is done using Model Checking, while the latter is done using interactive theorem proving (ITP). As of today, however, architects are usually not trained in using a full-fledged interactive theorem prover. Thus, to bridge the gap between ITP and the architecture domain, we developed APML: an architecture proof modeling language. APML allows one to sketch proofs about component composition at the level of architecture using notations similar to Message Sequence Charts. With this paper, we introduce APML: We describe the language, show its soundness and completeness for the verification of architecture contracts, and provide an algorithm to map an APML proof to a corresponding proof for the interactive theorem prover Isabelle. Moreover, we describe its implementation in terms of an Eclipse/EMF modeling application, demonstrate it by means of a running example, and evaluate it in terms of a larger case study. Although our results are promising, the case study also reveals some limitations, which lead to new directions for future work.

Diego Marmsoler, Genc Blakqori

Learning-Based Techniques and Applications

Frontmatter
Learning Deterministic Variable Automata over Infinite Alphabets

Automated reasoning about systems with infinite domains requires an extension of automata, and in particular, finite automata, to infinite alphabets. One such model is Variable Finite Automata (VFA). VFAs are finite automata whose alphabet is interpreted as variables that range over an infinite domain. On top of their simple and intuitive structure, VFAs have many appealing properties. One such property is a deterministic fragment (DVFA), which is closed under the Boolean operations, and whose containment and emptiness problems are decidable. These properties are rare amongst the many different models for automata over infinite alphabets. In this paper, we continue to explore the advantages of DVFAs, and show that they have a canonical form, which proves them to be a particularly robust model that is easy to reason about and use in practice. Building on these results, we construct an efficient learning algorithm for DVFAs, based on the $$L^*$$ algorithm for regular languages.

Sarai Sheinvald
$$L^*$$-Based Learning of Markov Decision Processes

Automata learning techniques automatically generate system models from test observations. These techniques usually fall into two categories: passive and active. Passive learning uses a predetermined data set, e.g., system logs. In contrast, active learning actively queries the system under learning, which is considered more efficient. An influential active learning technique is Angluin's $$L^*$$ algorithm for regular languages, which inspired several generalisations from DFAs to other automata-based modelling formalisms. In this work, we study $$L^*$$-based learning of deterministic Markov decision processes, first assuming an ideal setting with perfect information. Then, we relax this assumption and present a novel learning algorithm that collects information by sampling system traces via testing. Experiments with the implementation of our sampling-based algorithm suggest that it achieves better accuracy than state-of-the-art passive learning techniques with the same amount of test data. Unlike existing learning algorithms with predefined states, our algorithm learns the complete model structure including the states.

Martin Tappler, Bernhard K. Aichernig, Giovanni Bacci, Maria Eichlseder, Kim G. Larsen
Star-Based Reachability Analysis of Deep Neural Networks

This paper proposes novel reachability algorithms for both exact (sound and complete) and over-approximate (sound) analysis of deep neural networks (DNNs). The approach uses star sets (stars, for short) as a symbolic representation of sets of states; they provide an effective representation of high-dimensional polytopes. Our star-based reachability algorithms can be applied to several problems in analyzing the robustness of machine learning methods, such as safety and robustness verification of DNNs. The star-based reachability algorithms are implemented in a software prototype called the neural network verification (NNV) tool that is publicly available for evaluation and comparison. Our experiments show that when verifying ACAS Xu neural networks on a multi-core platform, our exact reachability algorithm is on average about 19 times faster than Reluplex, a satisfiability modulo theory (SMT)-based approach. Furthermore, our approach can visualize the precise behavior of DNNs because the reachable states are computed in the method. Notably, in the case that a DNN violates a safety property, the exact reachability algorithm can construct a complete set of counterexamples. Our star-based over-approximate reachability algorithm is on average 118 times faster than Reluplex on the verification of properties for ACAS Xu networks, even without exploiting the parallelism that comes naturally in our method. Additionally, our over-approximate reachability is much less conservative than DeepZ and DeepPoly, recent approaches utilizing zonotopes and other abstract domains that fail to verify many properties of ACAS Xu networks due to their conservativeness. Moreover, our star-based over-approximate reachability algorithm obtains better robustness bounds in comparison with DeepZ and DeepPoly when verifying the robustness of image classification DNNs.

Hoang-Dung Tran, Diago Manzanas Lopez, Patrick Musau, Xiaodong Yang, Luan Viet Nguyen, Weiming Xiang, Taylor T. Johnson
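
Concretely, a star set represents a set of states as an affine image of a constrained parameter vector (our paraphrase of the representation described in the abstract):

```latex
\Theta \;=\; \Bigl\{\, x \;\Bigm|\; x = c + \sum_{i=1}^{m} \alpha_i v_i,\;\; P(\alpha_1, \ldots, \alpha_m) \,\Bigr\}
```

with center $$c$$, basis vectors $$v_i$$, and a predicate $$P$$ given by linear constraints on the $$\alpha_i$$. The affine maps computed by a DNN's linear layers act directly on $$c$$ and the $$v_i$$, which is what makes the representation effective for layer-by-layer reachability.
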

Refactoring and Reprogramming

Frontmatter
SOA and the Button Problem

Service-oriented architecture (SOA) is a popular architectural style centered around services, loose coupling, and interoperability. A recurring problem in SOA development is the Button Problem: how to ensure that whenever a "button is pressed" on some service—no matter what—the performance of other key services remains unaffected? The Button Problem is especially complex to solve in systems that have devolved into hardly comprehensible spaghettis of service dependencies. In a collaborative effort with industry partner First8, we present the first formal framework to help SOA developers solve the Button Problem, enabling automated reasoning about service sensitivities and candidate refactorings. Our formalization provides a rigorous foundation for a tool that was already successfully evaluated in industrial case studies, and it is built against two unique requirements: "whiteboard level of abstraction" and non-quantitative analysis.

Sung-Shik Jongmans, Arjan Lamers, Marko van Eekelen
Controlling Large Boolean Networks with Temporary and Permanent Perturbations

A salient objective of studying gene regulatory networks (GRNs) is to identify potential target genes whose perturbations would lead to effective treatment of diseases. In this paper, we develop two control methods for GRNs in the context of asynchronous Boolean networks. Our methods compute a minimal subset of nodes of a given Boolean network, such that temporary or permanent perturbations of these nodes drive the network from an initial state to a target steady state. The main advantages of our methods include: (1) temporary and permanent perturbations can be feasibly conducted with techniques for genetic modifications in biological experiments; and (2) the minimality of the identified control sets can reduce the cost of experiments to a great extent. We apply our methods to several real-life biological networks in silico to show their efficiency in terms of computation time and their efficacy with respect to the number of nodes to be perturbed.

Cui Su, Soumya Paul, Jun Pang
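
A minimal in-silico illustration of the setting (ours; note the paper targets asynchronous Boolean networks, while this toy uses synchronous updates for brevity):

```python
from itertools import product

def step(s):
    """Synchronous update of a 3-node toy Boolean network."""
    x0, x1, x2 = s
    return (x1, x0 and x2, x2)            # next-state functions f0, f1, f2

def fixed_points(f, n=3):
    """Steady states: states mapped to themselves."""
    return [s for s in product((0, 1), repeat=n) if f(s) == s]

print(fixed_points(step))                  # [(0, 0, 0), (0, 0, 1), (1, 1, 1)]

# A *permanent perturbation* clamps a node, e.g. knock-in x2 := 1;
# the control question is which clamps steer the network to a target state.
clamped = lambda s: step(s)[:2] + (1,)
print(fixed_points(clamped))               # [(0, 0, 1), (1, 1, 1)]
```
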

I-Day Presentations

Frontmatter
Formal Methods Applicability on Space Applications Specification and Implementation Using MORA-TSP

The usage of formal methods (FM) in Model-Driven Engineering (MDE) has already been demonstrated to significantly boost both productivity and quality in the design and analysis of software and systems. However, the integration of applicable tools and techniques for formal analysis needs improvement in order to create a practical MDE environment for FM, suitable for use in an industrial setting. This paper presents the European Space Agency (ESA) MORA-TSP (Multicore implementation of the On-Board Software Reference Architecture with Time and Space Partitioning capability) study. MORA-TSP comprises the development of an MDE toolset suitable for applying FM to the early analysis, correctness checking, and validation of the modeled software, in the context of space flight software.

Daniel Silveira, Andreas Jung, Marcel Verhoef, Tiago Jorge
Industrial Application of Event-B to a Wayside Train Monitoring System: Formal Conceptual Data Analysis

This paper presents the experience gained in the application of Event-B to a subsystem of a wayside train monitoring system (WTMS). The WTMS configuration management system (CMS) supports the creation and management of configuration data for the WTMS. Consistency of system data is one of the most important quality properties of a CMS, since inconsistency may lead to critical malfunctioning. Therefore, the development of the data handling part of a CMS requires the use of high-integrity methods in order to ensure the highest quality. Event-B, with its set-theoretic basis for modelling, its approach of refinement, and the use of formal proof to ensure consistency of refinement steps, is used in this study for the conceptual modelling of system data and system operations. Due to the Agile-structured development process, the conceptual model has been created in several iterations by a changing team of developers. The challenge was to guarantee completeness and consistency of this model and to keep it aligned with the goals of all relevant stakeholders. This has been achieved by an incremental, refinement-based creation of a formal conceptual model together with an appropriate formalization of the conceptual data constraints. The relationship between the conceptual model and the formal conceptual model has been realized by using an appropriate traceability model. This paper describes how the application of Event-B can successfully address these challenges.

Robert Eschbach
Property-Driven Software Analysis
(Extended Abstract)

Software in industrial products, such as in the railway industry, constantly evolves to meet new or changing requirements. For projects with a lifetime spanning decades (such as the control software for energy plants, for railway lines, etc.), keeping track of the original design rationale through time is a significant challenge.

Mathieu Comptier, David Déharbe, Paulin Fournier, Julien Molinero-Perez
Practical Application of SPARK to OpenUxAS

This paper presents initial, positive results from using SPARK to prove critical properties of OpenUxAS, a service-oriented software framework developed by AFRL for mission-level autonomy for teams of cooperating unmanned vehicles. Given the intended use of OpenUxAS, there are many safety and security implications; however, these considerations are unaddressed in the current implementation. AFRL is seeking to address these considerations through the use of formal methods, including through the application of SPARK, a programming language that includes a specification language and a toolset for proving that programs satisfy their specifications. Using SPARK, we reimplemented one of the core services in OpenUxAS and proved that a critical part of its functionality satisfies its specification. This successful application provides a foundation for further applications of formal methods to OpenUxAS.

M. Anthony Aiello, Claire Dross, Patrick Rogers, Laura Humphrey, James Hamil
Adopting Formal Methods in an Industrial Setting: The Railways Case

The railway sector has seen a large number of successful applications of formal methods and tools. However, up-to-date, structured information about the industrial usage and needs related to formal tools in railways is limited. Two Shift2Rail projects, X2Rail-2 and ASTRail, have addressed this issue by performing a systematic search over the state of the art of formal methods application in railways to identify the best practices in use. As part of the work of these projects, questionnaires on formal methods and tools have been designed to gather input and guidance on the adoption of formal methods in the railway domain. Even though the questionnaires were developed independently and distributed to different audiences, the responses show a certain convergence in the replies to the questions common to both. In this paper, we present a detailed report on such convergence, drawing some indications about methods and tools that are considered to constitute the most fruitful approaches to industrial adoption.

Maurice H. ter Beek, Arne Borälv, Alessandro Fantechi, Alessio Ferrari, Stefania Gnesi, Christer Löfving, Franco Mazzanti
Backmatter
Metadata
Title
Formal Methods – The Next 30 Years
Edited by
Maurice H. ter Beek
Annabelle McIver
José N. Oliveira
Copyright Year
2019
Electronic ISBN
978-3-030-30942-8
Print ISBN
978-3-030-30941-1
DOI
https://doi.org/10.1007/978-3-030-30942-8
