About this book

This book constitutes the refereed proceedings of the 41st IFIP WG 6.1 International Conference on Formal Techniques for Distributed Objects, Components, and Systems, FORTE 2021, held in Valletta, Malta, in June 2021, as part of the 16th International Federated Conference on Distributed Computing Techniques, DisCoTec 2021.

The 9 regular papers and 4 short papers presented were carefully reviewed and selected from 26 submissions. They cover topics such as: software quality, reliability, availability, and safety; security, privacy, and trust in distributed and/or communicating systems; service-oriented, ubiquitous, and cloud computing systems; component- and model-based design; object technology, modularity, and software adaptation; self-stabilisation and self-healing/organising; and verification, validation, formal analysis, and testing of the above.

Due to the COVID-19 pandemic, this event was held virtually.



Full Papers


On Bidirectional Runtime Enforcement

Runtime enforcement is a dynamic analysis technique that instruments a system with a monitor in order to ensure its correctness as specified by some property. This paper explores bidirectional enforcement strategies for properties describing the input and output behaviour of a system. We develop an operational framework for bidirectional enforcement and use it to study the enforceability of the safety fragment of Hennessy-Milner logic with recursion (sHML). We provide an automated synthesis function that generates correct monitors from sHML formulas, and show that this logic is enforceable via a specific type of bidirectional enforcement monitors called action disabling monitors.
Luca Aceto, Ian Cassar, Adrian Francalanza, Anna Ingólfsdóttir
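As a hedged illustration of the action-disabling idea (not the paper's synthesis function, whose output is derived from sHML formulas), the following Python sketch shows a monitor sitting between a system and its environment that disables actions violating a simple safety property; the class name, action names, and property are all hypothetical.

```python
# Sketch of an action-disabling enforcement monitor. The monitor inspects
# each action crossing the system boundary and disables (drops) those that
# would violate the safety property "no 'send' after 'close'".

class DisablingMonitor:
    def __init__(self):
        self.closed = False

    def permit(self, action):
        """Return True if the action may pass through, False to disable it."""
        if action == "close":
            self.closed = True
            return True
        if action == "send" and self.closed:
            return False  # disable the violating action
        return True

monitor = DisablingMonitor()
trace = ["send", "close", "send", "recv"]
passed = [a for a in trace if monitor.permit(a)]
print(passed)  # ['send', 'close', 'recv']
```

The enforced trace satisfies the property by construction: the offending second `send` never reaches the environment, while all compliant actions pass through untouched.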

A Multi-agent Model for Polarization Under Confirmation Bias in Social Networks

We describe a model for polarization in multi-agent systems based on Esteban and Ray’s standard measure of polarization from economics. Agents evolve by updating their beliefs (opinions) based on an underlying influence graph, as in the standard DeGroot model for social learning, but under a confirmation bias; i.e., a discounting of opinions of agents with dissimilar views. We show that even under this bias polarization eventually vanishes (converges to zero) if the influence graph is strongly-connected. If the influence graph is a regular symmetric circulation, we determine the unique belief value to which all agents converge. Our more insightful result establishes that, under some natural assumptions, if polarization does not eventually vanish then either there is a disconnected subgroup of agents, or some agent influences others more than she is influenced. We also show that polarization does not necessarily vanish in weakly-connected graphs under confirmation bias. We illustrate our model with a series of case studies and simulations, and show how it relates to the classic DeGroot model for social learning.
Mário S. Alvim, Bernardo Amorim, Sophia Knight, Santiago Quintero, Frank Valencia
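The update rule can be illustrated with a small simulation. This is a minimal Python sketch of a DeGroot-style belief update under confirmation bias; the discount factor `1 - |b_j - b_i|` and all names here are illustrative choices, not necessarily the paper's exact definitions.

```python
# Belief updates under confirmation bias: each agent moves towards its
# neighbours' beliefs, but discounts opinions far from its own.

def step(beliefs, influence):
    """One update round. influence[j][i] is how much agent j influences i."""
    n = len(beliefs)
    updated = []
    for i in range(n):
        neighbours = [j for j in range(n) if influence[j][i] > 0]
        delta = 0.0
        for j in neighbours:
            bias = 1.0 - abs(beliefs[j] - beliefs[i])  # discount dissimilar views
            delta += influence[j][i] * bias * (beliefs[j] - beliefs[i])
        updated.append(beliefs[i] + delta / max(len(neighbours), 1))
    return updated

# Strongly connected ring of 4 agents: polarization should vanish.
n = 4
influence = [[0.0] * n for _ in range(n)]
for j in range(n):
    influence[j][(j + 1) % n] = 0.5  # each agent listens to its predecessor
    influence[j][j] = 0.5            # and to itself
beliefs = [0.0, 0.3, 0.6, 1.0]
for _ in range(500):
    beliefs = step(beliefs, influence)
print(round(max(beliefs) - min(beliefs), 3))  # spread shrinks towards 0
```

Because the ring is strongly connected, the spread of beliefs (a crude stand-in for polarization) contracts over time, matching the abstract's convergence claim; on a disconnected graph the two components would keep their separate consensus values.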

A Formalisation of SysML State Machines in mCRL2

This paper reports on a formalisation of the semi-formal modelling language SysML in the formal language mCRL2, in order to unlock formal verification and model-based testing using the mCRL2 toolset for SysML models. The formalisation focuses on a fragment of SysML used in the railway standardisation project EULYNX. It comprises the semantics of state machines, communication between objects via ports, and an action language called ASAL. It turns out that the generic execution model of SysML state machines can be elegantly specified using the rich data and process languages of mCRL2. This is a big step towards an automated translation as the generic model can be configured with a formal description of a specific set of state machines in a straightforward manner.
Mark Bouwman, Bas Luttik, Djurre van der Wal

How Adaptive and Reliable is Your Program?

We consider the problem of modelling and verifying the behaviour of systems characterised by a close interaction of a program with its environment. We propose to model the program-environment interplay in terms of the probabilistic modifications they induce on a set of application-relevant data, called the data space. The behaviour of a system is thus identified with the probabilistic evolution of the initial data space. We then introduce a metric, called the evolution metric, which measures the differences in the evolution sequences of systems and can be used for system verification, as it expresses how well the program is fulfilling its tasks. We use the metric to express the adaptability and reliability of a program, which allows us to identify potential critical issues with respect to changes in the initial environmental conditions. We also propose an algorithm, based on statistical inference, for evaluating the evolution metric.
Valentina Castiglioni, Michele Loreti, Simone Tini
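The evolution metric itself is not spelled out in this summary; as a hedged illustration of the statistical-inference idea, the following Python sketch compares two stochastic programs by the worst per-step empirical Wasserstein distance between sampled values of a one-dimensional data space. All names (`wasserstein_1d`, `evolution_distance`, `reliable`, `drifting`) are hypothetical.

```python
import random

def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between equal-size 1-D samples."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def evolution_distance(step_a, step_b, init, horizon, samples=2000):
    """Worst per-step distance between the sampled data distributions of
    two programs started from the same initial data value."""
    xa = [init] * samples
    xb = [init] * samples
    worst = 0.0
    for _ in range(horizon):
        xa = [step_a(v) for v in xa]
        xb = [step_b(v) for v in xb]
        worst = max(worst, wasserstein_1d(xa, xb))
    return worst

random.seed(0)
reliable = lambda v: v + random.gauss(0.0, 0.1)         # stays on target
drifting = lambda v: v + 0.05 + random.gauss(0.0, 0.1)  # biased drift per step
d = evolution_distance(reliable, drifting, init=0.0, horizon=20)
print(round(d, 2))
```

The drift accumulates, so the distance grows with the horizon: a program whose data evolution stays close to the reference under perturbed initial conditions would score low on such a metric, which is the intuition behind the adaptability and reliability properties.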

Branching Place Bisimilarity: A Decidable Behavioral Equivalence for Finite Petri Nets with Silent Moves

Place bisimilarity \(\sim _p\) is a behavioral equivalence for finite Petri nets, proposed in [1] and proved decidable in [13]. In this paper we propose an extension of the place bisimulation idea to finite Petri nets with silent moves, yielding branching place bisimilarity \(\approx _p\), following the intuition of branching bisimilarity [6] on labeled transition systems. We prove that \(\approx _p\) is a decidable equivalence relation. Moreover, we argue that it is strictly finer than branching fully-concurrent bisimilarity [12, 22], essentially because \(\approx _p\) does not consider as unobservable those \(\tau \)-labeled net transitions with pre-set size larger than one, i.e., those resulting from multi-party interaction.
Roberto Gorrieri

Prioritise the Best Variation

Binary session types guarantee communication safety and session fidelity, but alone they cannot rule out deadlocks arising from the interleaving of different sessions. In Classical Processes (CP) [53]—a process calculus based on classical linear logic—deadlock freedom is guaranteed by combining channel creation and parallel composition under the same logical cut rule. Similarly, in Good Variation (GV) [39, 54]—a linear concurrent \(\lambda \)-calculus—deadlock freedom is guaranteed by combining channel creation and thread spawning under the same operation, called fork. In both CP and GV, deadlock freedom is achieved at the expense of expressivity, as the only processes allowed are tree-structured. Dardha and Gay [19] define Priority CP (PCP), which allows cyclic-structured processes and restores deadlock freedom by using priorities, in line with Kobayashi and Padovani [34, 44]. Following PCP, we present Priority GV (PGV), a variant of GV which decouples channel creation from thread spawning. Consequently, we type cyclic-structured processes and restore deadlock freedom by using priorities. We show that our type system is sound by proving subject reduction and progress. We define an encoding from PCP to PGV and prove that the encoding preserves typing and is sound and complete with respect to the operational semantics.
Wen Kokke, Ornela Dardha
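The priority mechanism can be conveyed with a toy check. This Python sketch is only in the spirit of PCP/PGV (the function and the "strictly increasing" discipline are a simplification of the actual priority typing rules): each channel carries a priority, and a process may only act on channels in increasing priority order, which rules out the cyclic waiting that causes deadlock.

```python
# Toy priority discipline: a sequence of channel actions is accepted only
# if the priorities of the channels acted on strictly increase.

def respects_priorities(actions, priority):
    """actions: channel names one process acts on, in program order."""
    last = float("-inf")
    for chan in actions:
        if priority[chan] <= last:
            return False  # would permit a cyclic wait
        last = priority[chan]
    return True

prio = {"a": 0, "b": 1}
print(respects_priorities(["a", "b"], prio))  # True: consistent ordering
print(respects_priorities(["b", "a"], prio))  # False: could deadlock
```

If two processes both respect a single global priority assignment, they cannot each be blocked waiting on a channel the other holds at a lower priority, which is the essence of how priorities restore deadlock freedom for cyclic process structures.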

Towards Multi-layered Temporal Models: A Proposal to Integrate Instant Refinement in CCSL

For the past 50 years, temporal constraints have been a key driver in the development of critical systems, as ensuring their safety requires their behaviour to meet stringent temporal requirements. A well-established and promising approach to expressing and verifying such temporal constraints is to rely on formal modelling languages. One such language is CCSL, first introduced as part of the MARTE UML profile, which allows the developer, through entities called clocks, to abstract any system into events on which constraints can be expressed and then assessed using TimeSquare, a tool that implements its operational semantics. By nature, CCSL handles horizontal separation of concerns (component-based design at one step of the system development) through the notion of clocks, but does not yet take into account the other major separation of concerns used in modern system development: vertical separation, also called refinement in the literature (relations between the various steps of the system development). This paper proposes an approach to extend CCSL with a notion of refinement in order to handle temporal models comprising both vertical and horizontal parts. Our proposal relies on the notion of multi-layered time to provide two new CCSL relations expressing two different yet complementary notions of refinement. We discuss their integration with the other CCSL constructs, illustrate their use, and detail the relevance and future impact of this extended version of CCSL.
Mathieu Montin, Marc Pantel
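To make the clock idea concrete, here is an illustrative Python sketch of clock relations over finite traces. Modelling a clock as the set of instants (step indices) at which its event ticks is an assumption for illustration only, not CCSL's actual operational semantics, and the two relations shown are standard ones rather than the paper's new refinement relations.

```python
# A "clock" is modelled here as a set/list of instants at which it ticks.

def subclock(c1, c2):
    """c1 is a subclock of c2: c1 only ticks when c2 ticks."""
    return set(c1) <= set(c2)

def precedes(c1, c2):
    """Strict precedence: the n-th tick of c1 occurs before the n-th of c2."""
    ticks1, ticks2 = sorted(c1), sorted(c2)
    return len(ticks1) >= len(ticks2) and all(
        t1 < t2 for t1, t2 in zip(ticks1, ticks2))

abstract = [0, 4, 8]        # coarse-grained step of the development
concrete = [0, 2, 4, 6, 8]  # refined step: ticks more often
print(subclock(abstract, concrete))  # True: every abstract tick is a concrete one
print(precedes([0, 3], [1, 5]))      # True
```

A vertical refinement relation would similarly relate the instants of a clock at one development layer to the instants of its refinement at the next, which is the kind of multi-layered structure the paper's two new relations are designed to express.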

A Case Study on Parametric Verification of Failure Detectors

Partial synchrony is a model of computation in many distributed algorithms and modern blockchains. Correctness of these algorithms requires the existence of bounds on message delays and on the relative speed of processes after reaching Global Stabilization Time (GST). This makes partially synchronous algorithms parametric in time bounds, which renders automated verification of partially synchronous algorithms challenging. In this paper, we present a case study on formal verification of both safety and liveness of a Chandra and Toueg failure detector that is based on partial synchrony. To this end, we specify the algorithm and the partial synchrony assumptions in three frameworks: \(\textsc {TLA}^+\), Ivy, and counter automata. Importantly, we tune our modeling to use the strength of each method: (1) We are using counters to encode message buffers with counter automata, (2) we are using first-order relations to encode message buffers in Ivy, and (3) we are using both approaches in \(\textsc {TLA}^+\). By running the tools for \(\textsc {TLA}^+\) (TLC and APALACHE) and counter automata (FAST), we demonstrate safety for fixed time bounds. This helped us to find the inductive invariants for fixed parameters, which we used as a starting point for the proofs with Ivy. By running Ivy, we prove safety for arbitrary time bounds. Moreover, we show how to verify liveness of the failure detector by reducing the verification problem to safety verification. Thus, both properties are verified by developing inductive invariants with Ivy. We conjecture that correctness of other partially synchronous algorithms may be proven by following the presented methodology.
Thanh-Hai Tran, Igor Konnov, Josef Widder
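The interplay between timeouts and partial synchrony can be illustrated in a heavily simplified, single-observer form. This Python sketch is hypothetical (class name, trace, and adaptation rule are illustrative, not the paper's models): missing a heartbeat for `timeout` ticks raises a suspicion, and a heartbeat arriving while suspecting clears it and grows the timeout, so once delays are bounded after GST, false suspicions eventually stop.

```python
class EventuallyPerfectDetector:
    """Sketch of a Chandra-Toueg-style eventually perfect failure detector,
    tracking a single remote process from one observer's point of view."""

    def __init__(self, timeout=2):
        self.timeout = timeout
        self.since_last = 0
        self.suspected = False

    def tick(self, heartbeat_received):
        if heartbeat_received:
            if self.suspected:       # false suspicion detected
                self.suspected = False
                self.timeout += 1    # tolerate larger delays from now on
            self.since_last = 0
        else:
            self.since_last += 1
            if self.since_last > self.timeout:
                self.suspected = True
        return self.suspected

# Erratic delays before "GST", then a steady heartbeat every 3 ticks after.
d = EventuallyPerfectDetector(timeout=2)
trace = [True, False, False, False, True] + [True, False, False] * 10
history = [d.tick(hb) for hb in trace]
print(history[-10:])  # no suspicions once the timeout has adapted
```

This captures why such algorithms are parametric in time bounds: correctness of the "eventually no false suspicions" behaviour depends on the unknown post-GST delay bound, which is exactly what makes their automated verification challenging.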

π with Leftovers: A Mechanisation in Agda

Linear type systems need to keep track of how programs use their resources. The standard approach is to use context splits specifying how resources are (disjointly) split across subterms. In this approach, context splits redundantly echo information which is already present within subterms. An alternative approach is to use leftover typing [2, 23], where in addition to the usual (input) usage context, typing judgments also have an output usage context: the leftovers. In this approach, the leftovers of one typing derivation are fed as input to the next, threading through linear resources while avoiding context splits. We use leftover typing to define a type system for a resource-aware \(\pi \)-calculus [26, 27], a process algebra used to model concurrent systems. Our type system is parametrised over a set of usage algebras [20, 34] that are general enough to encompass shared types (free to reuse and discard), graded types (use exactly n times) and linear types (use exactly once). Linear types are important in the \(\pi \)-calculus: they ensure privacy and safety of communication and avoid race conditions, while graded and shared types allow for more flexible programming. We provide a framing theorem for our type system, generalise the weakening and strengthening theorems to include linear types, and prove subject reduction. Our formalisation is fully mechanised in about 1850 lines of Agda [37].
Uma Zalakain, Ornela Dardha
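The threading of leftovers can be sketched outside Agda. In this hedged Python illustration (the mini-language, where a "term" is just the list of channel names it uses, is a drastic simplification of the π-calculus type system), checking a term consumes the resources it uses and returns the rest, so parallel composition needs no context split.

```python
# Leftover-typing sketch: instead of splitting a linear context between
# subterms up front, checking a term returns the leftover context.

def check(term, ctx):
    """Check `term` against input context `ctx`; return the leftovers.

    ctx maps channel name -> remaining uses (linear: exactly 1).
    """
    leftover = dict(ctx)
    for chan in term:
        if leftover.get(chan, 0) == 0:
            raise TypeError(f"channel {chan!r} used more than once (or unknown)")
        leftover[chan] -= 1
    return leftover

def check_par(left, right, ctx):
    """Parallel composition: thread the leftovers of `left` into `right`."""
    return check(right, check(left, ctx))

ctx = {"x": 1, "y": 1}
rest = check_par(["x"], ["y"], ctx)  # ok: each linear channel used once
print(rest)                          # {'x': 0, 'y': 0}
try:
    check_par(["x"], ["x"], ctx)     # linearity violation
except TypeError as e:
    print("rejected:", e)
```

Note how `check_par` never decides in advance which subterm gets which resources: the left subterm takes what it needs and the right subterm checks against whatever is left, which is precisely the redundancy-avoiding idea behind leftover typing.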

Short and Journal-First Papers


Supervisory Synthesis of Configurable Behavioural Contracts with Modalities

Service contracts characterise the desired behavioural compliance of a composition of services, typically defined by the fulfilment of all service requests through service offers. Contract automata are a formalism for specifying behavioural service contracts. Based on the notion of synthesis of the most permissive controller from Supervisory Control Theory, a safe orchestration of contract automata can be computed that refines a composition into a compliant one. This short paper summarises the contributions published in [8], where we endow contract automata with two orthogonal layers of variability: (i) at the structural level, constraints over service requests and offers define different configurations of a contract automaton, depending on which requests and offers are selected or discarded; and (ii) at the behavioural level, service requests of different levels of criticality can be declared, which induces the novel notion of semi-controllability. The synthesis of orchestrations is thus extended to respect both the structural and the behavioural variability constraints. Finally, we show how to efficiently compute the orchestration of all configurations from only a subset of these configurations. A recently redesigned and refactored tool supports the developed theory.
Davide Basile, Maurice H. ter Beek, Pierpaolo Degano, Axel Legay, Gian-Luigi Ferrari, Stefania Gnesi, Felicita Di Giandomenico

Off-the-Shelf Automated Analysis of Liveness Properties for Just Paths (Extended Abstract)

Recent work by van Glabbeek and coauthors suggests that the liveness property for Peterson's mutual exclusion algorithm, which states that any process wanting to enter the critical section will eventually enter it, cannot be analysed in CCS and related formalisms. In our article, we explore the formal underpinning of this suggestion and its ramifications. In particular, we show that the liveness property for Peterson's algorithm can be established convincingly with the mCRL2 toolset, which has a conventional ACP-style, process-algebra-based specification formalism.
Mark Bouwman, Bas Luttik, Tim Willemse

Towards a Spatial Model Checker on GPU

The tool VoxLogicA merges the state-of-the-art library of computational imaging algorithms ITK with the combination of declarative specification and optimised execution provided by spatial logic model checking. The analysis of an existing benchmark for segmentation of brain tumours via a simple logical specification reached very high accuracy. We introduce a new, GPU-based version of VoxLogicA and present preliminary results on its implementation, scalability, and applications.
Laura Bussi, Vincenzo Ciancia, Fabio Gadducci

Formal Verification of HotStuff

HotStuff is a recent algorithm for repeated distributed consensus used in permissioned blockchains. We present a simplified version of the HotStuff algorithm and verify its safety using both the Ivy and TLA+ Proof System (TLAPS) tools.
We show that HotStuff deviates from the traditional view-instance model used in other consensus algorithms and instead follows a novel tree model to solve this fundamental problem. We argue that the tree model results in more complex verification tasks than the traditional view-instance model. Our verification efforts provide initial evidence towards this claim.
Leander Jehl



Better Late Than Never or: Verifying Asynchronous Components at Runtime

This paper presents detectEr, a runtime verification tool for monitoring asynchronous component systems. The tool synthesises executable monitors from properties expressed in terms of the safety fragment of the modal \(\mu \)-calculus. In this paper, we show how a number of useful properties can be flexibly runtime verified via the three forms of instrumentation—inline, outline, and offline—offered by detectEr to cater for specific system set-up constraints.
Duncan Paul Attard, Luca Aceto, Antonis Achilleos, Adrian Francalanza, Anna Ingólfsdóttir, Karoliina Lehtinen

Tutorial: Designing Distributed Software in mCRL2

Distributed software is very tricky to implement correctly, as some errors only occur in peculiar situations. For such errors, testing is not effective. Mathematically proving correctness is hard and time-consuming, and therefore it is rarely done. Fortunately, there is a technique in between, namely model checking, that, if applied with skill, is both efficient and able to find rare errors.
In this tutorial we show how to create behavioural models of parallel software, how to specify requirements using modal formulas, and how to verify these. For this we use the mCRL2 language and toolset (www.mcrl2.org). We discuss the design of an evolution of well-known mutual exclusion protocols, and how model checking not only provides insight into their behaviour and correctness, but also guides their design.
Jan Friso Groote, Jeroen J. A. Keiren
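As a companion to the tutorial's running example, here is the classic two-thread Peterson mutual exclusion protocol sketched in Python. The mCRL2 models in the tutorial are the authoritative treatment; this sketch relies on CPython's global interpreter lock for memory ordering (real shared-memory implementations need memory barriers), and the iteration count is arbitrary.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # faster GIL handoff so busy-waiting stays brisk

flag = [False, False]  # flag[i]: thread i wants to enter the critical section
turn = 0               # whose turn it is to wait
counter = 0            # shared resource protected by the protocol

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(1000):
        flag[me] = True
        turn = other                         # politely yield priority
        while flag[other] and turn == other:
            pass                             # busy-wait at the entry protocol
        counter += 1                         # critical section
        flag[me] = False                     # exit protocol

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000 exactly iff mutual exclusion held
```

The non-atomic `counter += 1` is safe only because the entry protocol guarantees mutual exclusion; the liveness property discussed elsewhere in this volume (a waiting thread eventually enters) is exactly the kind of requirement one would state as a modal formula over an mCRL2 model of this protocol.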

