
About this Book

This volume was published in honor of Rocco De Nicola’s 65th birthday. The Festschrift volume contains 27 papers written by close collaborators and friends of Rocco De Nicola and was presented to Rocco on the 1st of July 2019 during a two-day symposium held in Lucca, Italy.
The papers present many research ideas that have been influenced by Rocco's work. They testify to his intellectual curiosity, versatility and tireless research activity, and provide an overview of further developments to come. The volume consists of six sections. The first contains a laudation illustrating Rocco's distinguished career and main scientific contributions, together with a personal account of working with him. The remaining five sections comprise scientific papers related to specific research interests of Rocco and are ordered according to his scientific evolution: Observational Semantics; Coordination Models and Languages; Logics and Types; Distributed Systems Modelling; Security.



Homage from Friends


From Tuscany to Scotland and Back

A Homage to Rocco De Nicola for His 65th Birthday
Rocco De Nicola graduated summa cum laude in Scienze dell’Informazione from the University of Pisa in December 1978.
Ugo Montanari

Building International Doctoral Schools in Computer Science in Italy, De Nicola’s Way

Rocco De Nicola has played a key role in establishing two international PhD schools in computer science in Italy, one at IMT Lucca and the other at the GSSI in L’Aquila. This short article describes some of the principles that have guided the establishment of the doctoral programmes at those institutions and the lessons the authors have learnt by working with Rocco De Nicola at the GSSI.
Luca Aceto, Gianlorenzo D’Angelo, Michele Flammini, Omar Inverso, Ludovico Iovino, Catia Trubiani

Observational Semantics


An Equational Characterisation of the Must Testing Pre-order for Regular Processes

In his PhD thesis of 1985, Rocco De Nicola showed how the must testing pre-order over the process calculus CCS can be captured using a set of inequations and an infinitary proof rule. We show that, at least for regular processes, this infinitary rule is unnecessary: we present a standard proof system, based on a simple co-inductive rule, that is both sound and complete for regular processes.
Matthew Hennessy

Testing Equivalence vs. Runtime Monitoring

Rocco De Nicola’s most cited paper, which was coauthored with his PhD supervisor Matthew Hennessy, introduced three seminal testing equivalences over processes represented as states in labelled transition systems. This article relates those classic process semantics with the framework for runtime monitoring developed by the authors in the context of the project ‘TheoFoMon: Theoretical Foundations for Monitorability’. It shows that may-testing semantics is closely related to the basic monitoring set-up within that framework, whereas, over strongly-convergent processes, must-testing semantics is induced by a collection of monitors that can detect when processes are unable to perform certain actions.
Luca Aceto, Antonis Achilleos, Adrian Francalanza, Anna Ingólfsdóttir, Karoliina Lehtinen
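As a minimal illustration of the may-testing idea discussed above, the following Python sketch checks whether some synchronized run of a process and a test can reach the test's success state. The labelled-transition-system encoding and all names here are our own illustrative choices, not the authors' formalization.

```python
from collections import deque

# Toy labelled transition systems: state -> list of (action, next_state).
# A process "may pass" a test if SOME synchronized run reaches the test's
# designated success state (a sketch of may-testing, not the full theory).

def may_pass(proc, test, p0, t0, success):
    seen, todo = set(), deque([(p0, t0)])
    while todo:
        p, t = todo.popleft()
        if t == success:
            return True
        if (p, t) in seen:
            continue
        seen.add((p, t))
        for a, p2 in proc.get(p, []):
            for b, t2 in test.get(t, []):
                if a == b:                     # synchronize on the action
                    todo.append((p2, t2))
    return False

# p = a.b.0 + a.c.0 ; test = a.b.ok
proc = {"P": [("a", "P1"), ("a", "P2")], "P1": [("b", "0")], "P2": [("c", "0")]}
test = {"T": [("a", "T1")], "T1": [("b", "ok")]}
print(may_pass(proc, test, "P", "T", "ok"))  # True: one branch can do a.b
```

Must testing, by contrast, would require every maximal run to reach success, which fails here because of the a.c branch.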

Reward Testing Equivalences for Processes

May and must testing were introduced by De Nicola and Hennessy to define semantic equivalences on processes. May-testing equivalence exactly captures safety properties, and must-testing equivalence liveness properties. This paper proposes reward testing and shows that the resulting semantic equivalence also captures conditional liveness properties. It is strictly finer than both the may- and must-testing equivalences.
Rob van Glabbeek

Playing with Bisimulation in Erlang

Erlang is a functional and concurrent programming language. The aim of this paper is to investigate basic properties of the Erlang concurrency model, which is based on asynchronous communication through mailboxes accessed via pattern matching. To achieve this goal, we consider Core Erlang (an intermediate step in Erlang compilation) and we define, on top of its operational semantics, an observational semantics following the approach used to define asynchronous bisimulation for the \(\pi \)-calculus. Our work allows us to shed some light on the management of process identifiers in Erlang, which differs from the various forms of name mobility already studied in the literature. In fact, we need to modify standard definitions to cope with such specific features of Erlang.
Ivan Lanese, Davide Sangiorgi, Gianluigi Zavattaro
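The mailbox-with-pattern-matching communication model mentioned in the abstract can be sketched as follows. This is a toy Python model of an Erlang-style selective receive, where a process scans its queue for the first message matching a pattern and may consume messages out of arrival order; it is only an illustration of the idea, not Core Erlang's semantics.

```python
from collections import deque

# Toy Erlang-style mailbox: receive scans the queue for the FIRST message
# matching a pattern (like a receive clause), possibly out of arrival order.

class Mailbox:
    def __init__(self):
        self._queue = deque()

    def send(self, msg):
        self._queue.append(msg)

    def receive(self, pattern):
        # pattern: a predicate over messages, standing in for a clause guard
        for i, msg in enumerate(self._queue):
            if pattern(msg):
                del self._queue[i]
                return msg
        return None  # a real Erlang process would block here

mb = Mailbox()
mb.send(("log", "hello"))
mb.send(("reply", 42))
# Selective receive: pick the reply even though the log arrived first.
print(mb.receive(lambda m: m[0] == "reply"))  # ('reply', 42)
print(list(mb._queue))                        # [('log', 'hello')]
```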

Genesis and Evolution of ULTraS: Metamodel, Metaequivalences, Metaresults

We discuss the genesis of the ULTraS metamodel and summarize its evolution arising from the introduction of coherent resolutions of nondeterminism and reachability-consistent semirings.
Marco Bernardo

Coordination Models and Languages


X-Klaim Is Back

Klaim is a coordination language specifically designed to model and program distributed systems consisting of mobile components interacting through multiple distributed tuple spaces. Klaim's theoretical foundations provided solid ground for the implementation of its programming model. To practically program Klaim-based applications, the X-Klaim programming language has been proposed. It extends Klaim with enriched primitives and standard control flow constructs, and is compiled into Java for execution. However, due to the limits of X-Klaim in terms of usability and the aging of the technology underlying its compiler, X-Klaim has been progressively neglected. Motivated by the success that Klaim has gained, the popularity it still enjoys in teaching distributed computing, and its possible future exploitation in the development of modern ICT systems, in this paper we propose a renewed and enhanced version of X-Klaim. The new implementation, which comes together with Eclipse-based IDE tooling, relies on recent powerful frameworks for the development of programming languages.
Lorenzo Bettini, Emanuela Merelli, Francesco Tiezzi
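The tuple-space interaction style underlying Klaim can be sketched with a minimal, thread-safe tuple space in Python. The operation names `out`, `in_`, and `read` echo the classic Linda/Klaim primitives, but the API and matching rules below (with `None` as a wildcard field) are simplifications of ours, not X-Klaim's actual interface.

```python
import threading

# Minimal thread-safe tuple space in the spirit of Linda/Klaim.
# None in a template acts as a formal (wildcard) field.

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):                      # non-blocking insert
        with self._cond:
            self._tuples.append(tuple(tup))
            self._cond.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            f is None or f == v for f, v in zip(template, tup))

    def in_(self, *template):                 # blocking, consumes the tuple
        with self._cond:
            while True:
                for t in self._tuples:
                    if self._match(template, t):
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

    def read(self, *template):                # blocking, non-destructive
        with self._cond:
            while True:
                for t in self._tuples:
                    if self._match(template, t):
                        return t
                self._cond.wait()

ts = TupleSpace()
ts.out("temp", "lucca", 21)
print(ts.read("temp", None, None))   # ('temp', 'lucca', 21)
print(ts.in_("temp", "lucca", None)) # consumes the tuple
```

Klaim additionally makes tuple spaces located (operations name the node they act on), which this single-space sketch omits.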

A Distributed Ledger Technology Based on Shared Write-Once Objects

Research on blockchain technologies and applications has exploded since Satoshi Nakamoto’s seminal paper on Bitcoin. Everybody agrees that blockchain is a foundational technology, but no unified definition of blockchain has been established yet, nor does a commonly agreed-upon standard exist. The basic principle of a blockchain is to maintain transactions on digital assets, without utilizing a central coordinator. Despite the assumed trustless environment, high security is promised.
The core technologies behind blockchain are well known in distributed computing. They comprise peer-to-peer replication and peer-to-peer consensus. In addition, cryptography is used to sign transactions and to achieve a timely order between them.
In this paper we show how a coordination middleware that relies on the virtual shared memory (VSM) paradigm can contribute to realizing a flexible and generally distributed ledger technology (DLT) that can serve as the basis for many different kinds of blockchain applications. As a proof-of-concept, the realization of different blockchain types, such as public and permissioned, and different consensus protocols are sketched on top of this VSM-based DLT.
Eva Maria Kuehn
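The write-once idea at the heart of the abstract can be illustrated with a small sketch: ledger entries are cells that may be written exactly once and are linked by hashes. The class and function names are illustrative only and are not the middleware API described in the paper.

```python
import hashlib

# Sketch: a ledger as a chain of write-once cells, each entry linked to
# its predecessor by a SHA-256 hash (illustrative, not the paper's API).

class WriteOnceCell:
    def __init__(self):
        self._value = None

    def write(self, value):
        if self._value is not None:
            raise ValueError("cell already written")  # write-once discipline
        self._value = value

    def read(self):
        return self._value

def entry_hash(prev_hash, payload):
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

# Build an append-only chain of write-once cells.
chain = []
prev = "0" * 64                         # genesis hash
for payload in ["tx1", "tx2"]:
    cell = WriteOnceCell()
    cell.write((payload, entry_hash(prev, payload)))
    chain.append(cell)
    prev = cell.read()[1]

# Any second write to a cell raises, so committed entries are immutable.
```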

Testing for Coordination Fidelity

Operation control in modern distributed systems must rely on decentralised coordination among system participants. In particular, when operation control involves critical infrastructures such as power grids, it is vital to ensure correctness properties of such coordination mechanisms. In this paper, we present a verification technique that addresses coordination protocols for power grid operation control. Given a global protocol specification, we show how we can rely on testing semantics for the purpose of ensuring protocol fidelity, i.e., to certify that the interaction among the grid nodes follows the protocol specification.
Yehia Abd Alrahman, Claudio Antares Mezzina, Hugo Torres Vieira

Data-Driven Choreographies à la Klaim

We propose Klaim as a suitable base for a novel choreographic framework. More precisely, we advocate Klaim as a suitable language onto which to project data-driven global specifications based on distributed tuple spaces. These specifications, akin to behavioural types, describe the coordination from a global point of view. Differently from behavioural types, though, our specifications express the data flow across distributed tuple spaces rather than detailing the communication pattern of processes. We devise a typing system to validate Klaim programs against projections of our global specifications. An interesting feature of our typing approach is that well-typed systems have an arbitrary number of participants. In standard approaches based on behavioural types, this is often achieved at the cost of considerable technical complications.
Roberto Bruni, Andrea Corradini, Fabio Gadducci, Hernán Melgratti, Ugo Montanari, Emilio Tuosto

Different Glasses to Look into the Three Cs: Component, Connector, Coordination

Component, connector, and coordination have been key concepts exploited in different communities to manage the complexity of concurrent and distributed system development. In this paper, we discuss three approaches within three different classes: composition in software architectures, coordination models, and programming abstractions for concurrency. These classes encompass different perspectives and solutions to face crucial challenges in developing concurrent and distributed systems. The approaches are discussed with respect to some characteristics of interest for the above classes: compositionality, incrementality, scalability, compositional reasoning, reusability, and evolution.
Farhad Arbab, Marco Autili, Paola Inverardi, Massimo Tivoli

Logics and Types


From the Archives of the Formal Methods and Tools Lab

Axiomatising and Contextualising ACTL
We present a sound and complete axiomatisation of ACTL, an action-based version of the well-known branching-time temporal logic CTL, and place it into a historical context. ACTL was originally introduced by Rocco De Nicola together with Frits Vaandrager 30 years ago, and it has played a major role in shaping the activity of our Formal Methods and Tools Lab from the nineties to this very day.
Stefania Gnesi, Maurice H. ter Beek

Featherweight Scribble

This paper gives a formal definition of the protocol specification language Scribble. In collaboration with industry, Scribble has been developed as an engineering incarnation of formal multiparty session types. In its ten years of development, Scribble has been applied and extended in manifold ways to verify and ensure correctness of concurrent and distributed systems, e.g. type checking, runtime monitoring, code generation, and synthesis. This paper introduces a core version of Scribble, Featherweight Scribble. We define the semantics of Scribble by translation to communicating automata and show a behaviour-preserving encoding of Scribble protocols into multiparty session types.
Rumyana Neykova, Nobuko Yoshida

Embedding RCC8D in the Collective Spatial Logic CSLCS

Discrete mereotopology is a logical theory for the specification of qualitative spatial functions and relations defined over a discrete space, intended as a set of basic elements, the pixels, with an adjacency relation defined over it. The notions of interest are those of region, intended as an arbitrary aggregate of pixels, and of specific relations between regions. The mereotopological theory RCC8D extends the mereological theory RCC5D—a theory of region parthood for discrete spaces—with the topological notion of connection and the remaining relations (disconnection, external connection, tangential and nontangential proper parthood and their inverses). In this paper, we propose an encoding of RCC8D into CSLCS, the collective extension of the Spatial Logic of Closure Spaces SLCS. We show how topochecker, a model-checker for CSLCS, can be used for effectively checking the existence of an RCC8D relation between two given regions of a discrete space.
Vincenzo Ciancia, Diego Latella, Mieke Massink
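To make the pixel-region setting concrete, here is a rough sketch of two of the RCC8D relations over regions modelled as sets of grid cells with 4-adjacency. This is only an informal illustration of disconnection (DC) and external connection (EC); it is not the paper's encoding into CSLCS.

```python
# Pixel regions as sets of (x, y) cells with 4-adjacency; an informal
# sketch of two RCC8D relations (not the paper's CSLCS encoding).

def adjacent(p, q):
    (x1, y1), (x2, y2) = p, q
    return abs(x1 - x2) + abs(y1 - y2) == 1   # 4-neighbourhood

def externally_connected(a, b):
    # EC: no shared pixels, but some pixels of a and b touch.
    return not (a & b) and any(adjacent(p, q) for p in a for q in b)

def disconnected(a, b):
    # DC: no shared pixels and no touching pixels.
    return not (a & b) and not any(adjacent(p, q) for p in a for q in b)

A = {(0, 0), (0, 1)}
B = {(1, 0)}          # touches A at (0, 0)
C = {(3, 3)}          # far away from A
print(externally_connected(A, B))  # True
print(disconnected(A, C))          # True
```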

From Behavioural Contracts to Session Types

We present a research trajectory of the authors and colleagues dealing with the correctness and meaningful composition of software components, a trajectory that incrementally traverses successive paradigms and approaches: open distributed processing, contract-based reasoning, behavioural typing and session types. This research is grounded in the foundational work of Robin Milner on processes and observation equivalence, and the follow-up work by De Nicola and Hennessy on testing relations. Indeed, these initial works set benchmarks that define the meaning of behaviour, which has fostered a full body of research in concurrency and verification. Behavioural typing is one of the avenues opened by these early contributions. This paper is a brief and staged report of the research accomplished by the authors and colleagues, presented in chronological order, starting with their work on the computational model of open distributed processing and ending with their latest work on sessions for web services.
Alessandro Fantechi, Elie Najm, Jean-Bernard Stefani

Modal Epistemic Logic on Contracts: A Doctrinal Approach

Problems related to the construction of consensus out of distributed knowledge have become topical again, seen from a new perspective, with the diffusion of distributed ledger techniques. In particular, when dealing with contracts, different observers must agree that a contract is in a certain state, even if not all transactions performed under the contract are observable by all of them. In this paper, we revisit previous work on algebraic modelling of labelled non-deterministic concurrent processes, which identified an intuitionistic modal/temporal logic associated with a categorical model. We expand this logic with typical epistemic operators in a categorical framework in order to encompass distributed knowledge to speak about transactions and contracts.
Paolo Bottoni, Daniele Gorla, Stefano Kasangian, Anna Labella

Types for Progress in Actor Programs

Properties in the actor model can be described in terms of the message-passing behavior of actors. In this paper, we address the problem of using a type system to capture liveness properties of actor programs. Specifically, we define a simple actor language in which demands for certain types of messages may be generated during execution, in a manner specified by the programmer. For example, we may want to require that each request to an actor eventually results in a reply. The difficulty lies in the fact that such requests can be generated dynamically, alongside the associated requirements for replies. Such replies might be sent in response to intermediate messages that never arrive, but the property may also fail to hold for more trivial reasons; for instance, when the code of potential senders of the reply omits the required sending command in some branches of a conditional statement. We show that, for a restricted class of actor programs, a system that tracks typestates can statically guarantee that such dynamically generated requirements will eventually be satisfied.
Minas Charalambides, Karl Palmskog, Gul Agha

Event Structure Semantics for Multiparty Sessions

We propose an interpretation of multiparty sessions as flow event structures, which allows concurrency between communications within a session to be explicitly represented. We show that this interpretation is equivalent, when the multiparty sessions can be described by global types, to an interpretation of global types as prime event structures.
Ilaria Castellani, Mariangiola Dezani-Ciancaglini, Paola Giannini

Distributed Systems Modelling


Process Calculi for Modelling Mobile, Service-Oriented, and Collective Autonomic Systems

Process-algebraic methods have proven to be excellent tools for designing and analysing concurrent systems. In this paper we review several process calculi and languages developed and studied by Rocco De Nicola and his students and colleagues in the three EU projects AGILE, SENSORIA, and ASCENS. These calculi provide a theoretical basis for engineering mobile, service-oriented, and collective autonomic systems. KLAIM is a framework for distributed mobile agents consisting of a kernel language, a stochastic extension, a logic for specifying properties of mobile applications, and an automatic tool for verifying such properties. In the AGILE project of the EU Global Computing Initiative I, KLAIM served as the process-algebraic basis for an architectural approach to mobile systems development. For modelling and analysing service-oriented systems, a family of process-algebraic core calculi was developed in the SENSORIA project of the EU Global Computing Initiative II. These calculi address complementary aspects of service-oriented programming such as sessions and correlations. They come with reasoning and analysis techniques, specification and verification tools as well as prototypical analyses of case studies. In the ASCENS project, the language SCEL was developed for modelling and programming systems consisting of interactive autonomic components. SCEL is based on process-algebraic principles and supports formal description and analysis of the behaviours of ensembles of autonomic components.
Martin Wirsing, Rolf Hennicker

Autonomous Systems – An Architectural Characterization

The concept of autonomy is key to the IoT vision promising increasing integration of smart services and systems minimizing human intervention. This vision challenges our capability to build complex open trustworthy autonomous systems. We lack a rigorous common semantic framework for autonomous systems. It is remarkable that the debate about autonomous vehicles focuses almost exclusively on AI and learning techniques while it ignores many other equally important autonomous system design issues.
Autonomous systems involve agents and objects coordinated in some common environment so that their collective behavior meets a set of global goals. We propose a general computational model combining a system architecture model and an agent model. The architecture model allows expression of dynamic reconfigurable multi-mode coordination between components. The agent model consists of five interacting modules, each implementing a characteristic function: Perception, Reflection, Goal management, Planning and Self-adaptation. It determines a concept of autonomic complexity accounting for the specific difficulty of building autonomous systems.
We emphasize that the main characteristic of autonomous systems is their ability to handle knowledge and adaptively respond to environment changes. We advocate that autonomy should be associated with functionality and not with specific techniques. Machine learning is essential for autonomy although it can meet only a small portion of the needs implied by autonomous system design.
We conclude that autonomy is a kind of broad intelligence. Building trustworthy and optimal autonomous systems goes far beyond the AI challenge.
Joseph Sifakis

Fluidware: An Approach Towards Adaptive and Scalable Programming of the IoT

The objective of this paper is to present the vision and structure of Fluidware, an approach towards an innovative programming model to ease the development of flexible and robust large-scale IoT services and applications. The key distinctive idea of Fluidware is to abstract collectives of devices of the IoT fabric as sources, digesters, and targets of distributed “flows” of contextualized events, carrying information about data produced and actuating commands. Accordingly, programming of services and applications relies on declarative specification of “funnel processes” to channel, elaborate, and re-direct such flows in a fully-distributed way, as a means to coordinate the activities of devices and realize services and applications. The potential applicability of Fluidware and its expected advantages are exemplified via a case study scenario in the area of ambient assisted living.
Giancarlo Fortino, Barbara Re, Mirko Viroli, Franco Zambonelli

HeadREST: A Specification Language for RESTful APIs

Representational State Transfer (REST), an architectural style providing an abstract model of the web, is by far the most popular platform to build web applications. Developing such applications requires well-documented interfaces. However, and despite important initiatives such as the Open API Specification, the support for interface description is currently quite limited, focusing essentially on simple syntactic aspects. In this paper we present HeadREST, a dependently-typed language that allows describing semantic aspects of interfaces in a style reminiscent of Hoare triples.
Vasco T. Vasconcelos, Francisco Martins, Antónia Lopes, Nuno Burnay
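The Hoare-triple flavour of interface description mentioned above can be conveyed with a small Python analogy: a pre/postcondition wrapper around a REST-style operation. HeadREST's actual syntax and type system are far richer; the decorator, the `create_user` operation, and the conditions below are purely illustrative assumptions of ours.

```python
# Illustrative {pre} op {post} wrapper for a REST-style operation, in the
# spirit of Hoare triples. Purely an analogy; not HeadREST's syntax.

def contract(pre, post):
    def deco(fn):
        def wrapped(state, *args):
            assert pre(state, *args), "precondition violated"
            result = fn(state, *args)
            assert post(state, result, *args), "postcondition violated"
            return result
        return wrapped
    return deco

# POST /users: precondition "name not taken", postcondition "user stored
# and 201 Created returned".
@contract(pre=lambda db, name: name not in db,
          post=lambda db, res, name: name in db and res == 201)
def create_user(db, name):
    db[name] = {"name": name}
    return 201

db = {}
print(create_user(db, "rocco"))  # 201
```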

Security

Revealing the Trajectories of KLAIM Tuples, Statically

Klaim (Kernel Language for Agents Interaction and Mobility) has been devised to design distributed applications composed of many components deployed over the nodes of a distributed infrastructure, and to offer programmers primitive constructs for communicating, distributing and retrieving data. Data may be sensitive, and some nodes may not be secure. As a consequence, it is important to track data as they traverse the network. To this aim, we propose a Control Flow Analysis that over-approximates the behaviour of Klaim processes and tracks how tuple data can move in the network.
Chiara Bodei, Pierpaolo Degano, Gian-Luigi Ferrari, Letterio Galletta

Lightweight Information Flow

We develop a type system for identifying the information flow between variables in a program in the Guarded Commands language. First we characterise the types of information flow that may arise between variables in a non-deterministic program: explicit, implicit, bypassing, correlated or sanitised. Next we allow security policies to be specified in a number of traditional ways based on mandatory access control: defining a security lattice, or working with components or decentralised labels, both as pertains to confidentiality and integrity. Offending information flows are those identified by the type system that violate the security policy; a program is sufficiently secure if it contains only acceptable information flows.
Flemming Nielson, Hanne Riis Nielson
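The distinction between explicit and implicit flows, the first two categories named in the abstract, can be illustrated with a tiny analysis over a Guarded-Commands-like fragment: assignments create explicit flows from the right-hand side to the target, while assignments under a guard create implicit flows from the guard's variables. The AST encoding and function below are our own sketch, not the paper's type system.

```python
# Sketch: collect explicit flows (rhs -> target of an assignment) and
# implicit flows (guard variables -> target) in a tiny GC-like fragment.
# Statement encoding (ours): ("assign", target, rhs_vars) | ("if", guard_vars, body)

def flows(stmt, guard_vars=frozenset()):
    kind = stmt[0]
    if kind == "assign":
        _, tgt, rhs = stmt
        return ({(v, tgt) for v in rhs},          # explicit flows
                {(v, tgt) for v in guard_vars})   # implicit flows via guards
    if kind == "if":
        _, gvars, body = stmt
        exp, imp = set(), set()
        for s in body:
            e, i = flows(s, guard_vars | set(gvars))
            exp |= e
            imp |= i
        return exp, imp
    return set(), set()

# if h then l := x : l learns x explicitly, and h implicitly via the branch.
prog = ("if", ["h"], [("assign", "l", ["x"])])
explicit, implicit = flows(prog)
print(explicit)  # {('x', 'l')}
print(implicit)  # {('h', 'l')}
```

A security lattice would then declare which of these flows (e.g. high `h` to low `l`) are offending.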

A Framework for Provenance-Preserving History Distribution and Incremental Reduction

Provenance properties help assess the level of trust in the integrity of resources and events. One of the problems of interest is to find the right balance between the expressive power of the provenance specification language and the amount of historical information that needs to be remembered for each resource or event. This gives rise to possibly conflicting objectives relevant to integrity, privacy, and performance. Related problems are how to reduce historical information in a way that preserves the provenance properties of interest, is suitable for a distributed setting, and relies on an incremental construction. We investigate these problems in a simple model of computation where resources/events and their dependencies form a directed acyclic graph, and computation steps consist of additions of new resources and of provenance-based queries. The model is agnostic with respect to the actual provenance specification language. We then present a framework, parametric on such a language, for distributing and incrementally constructing reduced histories in a sound and complete way. In the resulting model of computation, reduced histories are computed incrementally and queries are tested locally on reduced histories. We study different choices for instantiating the framework with concrete provenance specification languages, and their corresponding provenance-preserving history reduction techniques.
Alberto Lluch Lafuente

Utility-Preserving Privacy Mechanisms for Counting Queries

Differential privacy (DP) and local differential privacy (LDP) are frameworks to protect sensitive information in data collections. They are both based on obfuscation. In DP the noise is added to the result of queries on the dataset, whereas in LDP the noise is added directly to the individual records, before being collected. The main advantage of LDP with respect to DP is that it does not need to assume a trusted third party. The main disadvantage is that the trade-off between privacy and utility is usually worse than in DP, and typically, to retrieve reasonably good statistics from the locally sanitized data, it is necessary to have a huge collection of them. In this paper, we focus on the problem of estimating counting queries from collections of noisy answers, and we propose a variant of LDP based on the addition of geometric noise. Our main result is that the geometric noise has a better statistical utility than other LDP mechanisms from the literature.
Natasha Fernandes, Kacem Lefki, Catuscia Palamidessi
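The local-noise idea behind the abstract can be sketched as follows: each user perturbs their own 0/1 value with two-sided geometric noise before reporting it, and since the noise is mean-zero, summing the reports gives an unbiased estimate of the true count. The parameterisation and sampling method here are a generic illustration of geometric noise, not the exact mechanism analysed in the paper.

```python
import math
import random

# Sketch of a local mechanism: each user adds two-sided geometric noise to
# their 0/1 value; the sum of reports is an unbiased count estimate.
# (Generic illustration, not the paper's exact mechanism.)

def two_sided_geometric(epsilon):
    alpha = math.exp(-epsilon)
    # Inverse-CDF sampling of a geometric variable; the difference of two
    # i.i.d. geometric variables is symmetric with mean zero.
    g = lambda: int(math.log(random.random()) // math.log(alpha))
    return g() - g()

def local_report(value, epsilon):
    return value + two_sided_geometric(epsilon)

def estimate_count(reports):
    return sum(reports)  # noise cancels out in expectation

random.seed(0)
data = [1] * 300 + [0] * 700            # true count = 300
reports = [local_report(v, 1.0) for v in data]
print(estimate_count(reports))          # close to 300 on average
```

No trusted curator ever sees the raw values, which is exactly the LDP advantage the abstract highlights.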

