2019 | Book

From Software Engineering to Formal Methods and Tools, and Back

Essays Dedicated to Stefania Gnesi on the Occasion of Her 65th Birthday


About this book

This volume was published in honor of Stefania Gnesi's 65th birthday. The Festschrift contains 32 papers written by close collaborators and friends of Stefania and was presented to her at a one-day colloquium held in Porto, Portugal, on October 8, 2019.

The Festschrift consists of eight sections, seven of which reflect the main research areas to which Stefania has contributed. Following a survey of Stefania's legacy in research and a homage by her thesis supervisor, these seven sections are ordered according to Stefania's life cycle in research, from software engineering to formal methods and tools, and back: Software Engineering; Formal Methods and Tools; Requirements Engineering; Natural Language Processing; Software Product Lines; Formal Verification; and Applications.

Table of Contents

Frontmatter
The Legacy of Stefania Gnesi
From Software Engineering to Formal Methods and Tools, and Back

Stefania Gnesi was born in Livorno in 1954. She studied Computer Science at the University of Pisa, where she graduated summa cum laude in 1978.

Maurice H. ter Beek, Alessandro Fantechi, Laura Semini
From Dynamic Programming to Programming Science
Some Recollections in Honour of Stefania Gnesi

Stefania Gnesi graduated summa cum laude in Scienze dell’Informazione at the University of Pisa in June 1978.

Ugo Montanari

Software Engineering

Frontmatter
Ten Years of Self-adaptive Systems: From Dynamic Ensembles to Collective Adaptive Systems

Self-adaptive systems have been introduced to manage situations where software systems operate under continuous perturbations, due to the unpredicted behaviors of their clients and to exogenous changes in the environment in which they operate. Adaptation is triggered by the run-time occurrence of an extraordinary circumstance and is handled by an adaptation process that involves the components affected by the issue and can modify the structure and behavior of the running system. In this paper we report the experience we gained over the last 10 years on models, techniques and applications in the field of self-adaptation. We present the various steps taken by means of a formal framework introduced to characterize the different aspects of an ensemble-based software engineering approach: (i) how to model dynamic ensembles using typed graph grammars, (ii) how to specialize and re-configure ensembles, and (iii) how to manage collective adaptations in an ensemble. All these aspects have been part of our research on self-adaptation and have been used to specify and deploy concrete solutions in different application domains.

Antonio Bucchiarone, Marina Mongiello
Multi-modelling and Co-simulation in the Engineering of Cyber-Physical Systems: Towards the Digital Twin

Ensuring the dependability of Cyber-Physical Systems (CPSs) poses challenges for model-based engineering, stemming from the semantic heterogeneity of the models of computational, physical and human processes, and from the range of stakeholders involved. We argue that delivering such dependability requires a marriage of multi-disciplinary models developed during design with models derived from real operational data. Assets developed during design thus become the basis of a learning digital twin, able to support decision making both in redesign and in responsive operation. Starting from an open integrated toolchain leveraging formal models for CPS design, we consider the extension of this concept towards digital twins. A small example inspired by agricultural robotics illustrates some of the opportunities for research and innovation in delivering digital twins that contribute to dependability.

John Fitzgerald, Peter Gorm Larsen, Ken Pierce
Changing Software in a Changing World: How to Test in Presence of Variability, Adaptation and Evolution?

Modern software-intensive and pervasive systems need to manage different requirements of variability, adaptation and evolution. These are certainly related properties, all bringing uncertainty, but they cover different aspects and require different approaches. Testing such systems introduces many challenges: variability would require testing far too many configurations and variants to be feasible; adaptation should be based on context-aware testing over many predictable or even unpredictable scenarios; evolution entails testing a system for which the reference model has become out-of-date. It is evident that current testing approaches are not adequate for such types of systems. We give a brief overview of testing challenges for changing software in a changing world, and hint at some promising approaches, arguing that these would need to be part of a holistic validation approach that can handle uncertainty.

Antonia Bertolino, Paola Inverardi
Improving Software Engineering Research Through Experimentation Workbenches

Experimentation with software prototypes plays a fundamental role in software engineering research. In contrast to many other scientific disciplines, however, explicit support for this key activity in software engineering is relatively limited. While the software engineering community has proposed some approaches to improve this situation, experiments are still very difficult and sometimes impossible to replicate. In this paper, we propose the concept of an experimentation workbench as a means of explicit support for experimentation in software engineering research. In particular, we discuss core requirements that an experimentation workbench should satisfy in order to qualify as such and to offer a real benefit for researchers. Beyond their core benefits for experimentation, we stipulate that experimentation workbenches will also benefit the reproducibility and repeatability of software engineering research. Further, we illustrate this concept with a scenario and a case study, and describe relevant challenges as well as our experience with experimentation workbenches.

Klaus Schmid, Sascha El-Sharkawy, Christian Kröher

Formal Methods and Tools

Frontmatter
Innovating Medical Image Analysis via Spatial Logics

Current computer-assisted medical imaging for the planning of radiotherapy requires high-level mathematical and computational skills, often paired with the case-by-case integration of highly specialised technologies. The lack of modularity at the right level of abstraction in this field hinders research, collaboration and the transfer of expertise among medical physicists, engineers and technicians. The longer-term aim of introducing spatial logics and spatial model checking into medical imaging is to provide an open platform for declarative medical image analysis. This will provide domain experts with a convenient and very concise way to specify contouring and segmentation operations, grounded on the solid mathematical foundations of topological spatial logics. We show preliminary results, obtained using the spatial model checker VoxLogicA, for the automatic identification of specific tissues in a healthy brain, and we discuss a selection of challenges for spatial model checking in medical imaging.

Gina Belmonte, Vincenzo Ciancia, Diego Latella, Mieke Massink
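
As a hint of the flavour of the logics involved, the clause below sketches a typical "near" modality of a closure-space-based spatial logic, in the spirit of the work underlying VoxLogicA. The notation is illustrative and not taken from the chapter itself:

```latex
% Illustrative clause, not the chapter's own definition: in a closure
% space (X, C), a point satisfies "near phi" iff it lies in the closure
% of the set of points satisfying phi.
\[
  x \models \mathcal{N}\varphi
  \quad\Longleftrightarrow\quad
  x \in \mathcal{C}\bigl(\{\, y \in X \mid y \models \varphi \,\}\bigr)
\]
```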
Formal Methods in Designing Critical Cyber-Physical Systems

Cyber-Physical Systems (CPS) are increasingly applied in critical contexts, where they have to support safe and secure operations, often subject to stringent timing requirements. Typical examples are scenarios involving automated living or working spaces in which humans operate, or human-robot collaborations (HRC) in modern manufacturing. Formal methods have been traditionally investigated to support modeling and verification of critical systems. In this paper, we review some of the main new challenges arising in the application of formal methods to modeling and verification of CPS. We do that by presenting two case studies (emergency response in a smart city and a smart manufacturing system), reflecting past work of the authors, from which some general lessons are distilled.

Mehrnoosh Askarpour, Carlo Ghezzi, Dino Mandrioli, Matteo Rossi, Christos Tsigkanos
Automata-Based Behavioural Contracts with Action Correlation

The rigorous design of Service-Oriented Computing (SOC) applications has been identified as one of the primary research challenges for the next 10 years. Many foundational theories for SOC have been defined, but they often rely on mechanisms different from real-world SOC technologies, hindering actual service modelling and verification. In this paper, we propose a novel automata-based formalism of service contracts equipped with a mechanism, inspired by current web service technologies, exploiting correlation data to drive service interactions and with formal foundations enabling reasoning about service correctness.

Davide Basile, Rosario Pugliese, Francesco Tiezzi, Pierpaolo Degano, Gian-Luigi Ferrari
Logical Support for Bike-Sharing System Design

Automated bicycle-sharing systems (bss) are a prominent example of reconfigurable cyber-physical systems for which the locality and connectivity of their elements are central to the way in which they operate. These features motivate us to study bss from the perspective of Actor-Network Theory – a framework for modelling cyber-physical-system protocols in which systems are networks of actors that are no longer limited to programs but can also include humans and physical artefacts. In order to support logical reasoning about information-flow properties that occur in bss, we use a logical framework that we have recently developed for actor networks, which results from a two-stage hybridization process. The first stage corresponds to a logic that captures the locality and connectivity of actors in a given configuration of the network; the second stage corresponds to a logic of possible interactions between actors, which captures the dynamics of the system in terms of network reconfigurations. To illustrate the properties that can be checked using this framework, we provide an actor-network specification of a particular bss, and use a recently developed tool for hybridized logics to highlight and correct an information-flow vulnerability of the system.

Ionuţ Ţuţu, Claudia Elena Chiriţă, Antónia Lopes, José Luiz Fiadeiro
A Generic Dynamic Logic with Applications to Interaction-Based Systems

We propose a generic dynamic logic with the usual diamond and box modalities over structured actions. Instead of using regular expressions of actions, our logic is parameterised by the form of the actions, which can be given by an arbitrary language for complex, structured actions. In particular, our logic can be instantiated with languages that describe complex interactions between system components. We study two instantiations of our logic for specifying global behaviours of interaction-based systems: one based on global session types and the other using UML sequence diagrams. Moreover, we show that our proposed generic logic, and hence all its instantiations, satisfies bisimulation invariance and a Hennessy-Milner theorem.

Rolf Hennicker, Martin Wirsing
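
For readers less familiar with dynamic logic, the standard satisfaction clauses for the diamond and box modalities are sketched below in textbook notation (not the authors' definitions); the chapter's generic logic keeps these clauses but lets the actions range over an arbitrary language of structured actions:

```latex
% Standard clauses for diamond and box over labelled transitions
% (textbook notation; the paper generalises the action language).
\[
  s \models \langle\alpha\rangle\varphi
    \;\Longleftrightarrow\;
    \exists s'.\; s \xrightarrow{\;\alpha\;} s' \text{ and } s' \models \varphi
  \qquad
  s \models [\alpha]\varphi
    \;\Longleftrightarrow\;
    \forall s'.\; s \xrightarrow{\;\alpha\;} s' \text{ implies } s' \models \varphi
\]
```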

Requirements Engineering

Frontmatter
Ambiguity in Requirements Engineering: Towards a Unifying Framework

A long stream of research in requirements engineering (RE) has been devoted to analyzing the occurrences and consequences of ambiguity in requirements documents. Ambiguity often occurs in documents, most often in natural language (NL) ones, but occasionally also in formal specifications, be it because of abstraction or because of imprecise designation of which real-world entities are denoted by certain expressions. In many of those studies, ambiguity has been considered a defect to be avoided. In this paper, we investigate the nature of ambiguity and advocate that the simplistic view of ambiguity as merely a defect in the document does not do justice to the complexity of this phenomenon. We offer a more extensive analysis, based on the multiple linguistic sources of ambiguity, and present a list of real-world cases, both in written matter and in oral interviews, which we analyze based on our framework. We hope that a better understanding of the phenomenon can help in the analysis of practical experiences and in the design of more effective methods to detect, mark and handle ambiguity.

Vincenzo Gervasi, Alessio Ferrari, Didar Zowghi, Paola Spoletini
QuARS: A Pioneer Tool for NL Requirement Analysis

This paper summarizes the achievements of Stefania Gnesi's research activity in the area of natural language requirements analysis and quality evaluation. The development of the QuARS tool has been the pivotal step of this research stream, led by Stefania Gnesi at ISTI-CNR. A functional description of the QuARS tool is provided, as well as a short report of its evolution over a decade. The wide use of QuARS in several research and industrial contexts demonstrates the validity and originality of Stefania's contribution to this area of software engineering.

Giuseppe Lami, Mario Fusani, Gianluca Trentanni
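
To give a concrete idea of this kind of analysis, the sketch below flags requirement sentences containing defect indicators drawn from small dictionaries. It mimics only the lexical layer of a QuARS-style quality evaluation; the categories and word lists here are invented examples, not the tool's actual dictionaries.

```python
# Illustrative sketch of lexical defect detection; indicator lists invented.
import re

DEFECT_INDICATORS = {
    "vagueness":   {"clear", "easy", "adequate", "user-friendly"},
    "optionality": {"possibly", "eventually", "if needed"},
    "weakness":    {"may", "could", "might"},
}

def analyse(requirement: str):
    """Return (category, indicator) pairs found in one requirement."""
    text = requirement.lower()
    words = set(re.findall(r"[a-z-]+", text))
    hits = []
    for category, indicators in DEFECT_INDICATORS.items():
        for term in indicators:
            # multi-word indicators are matched on the raw text
            if (" " in term and term in text) or term in words:
                hits.append((category, term))
    return hits

print(analyse("The system may provide an easy way to export data."))
# -> [('vagueness', 'easy'), ('weakness', 'may')]
```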
Detecting Feature Interactions in FORML Models

Requirements engineers must know how features (units of functionality) interact, in order to resolve undesired interactions. Model checking has been proposed as an effective method for detecting feature interactions. We propose a method for (1) modelling features as distinct modules (explicating intended interactions with other features), (2) composing feature modules into a system model that preserves intended interactions, (3) translating this rich model into the input language of a model checker, and (4) automatically generating correctness properties whose violations reveal unintended feature interactions.

Sandy Beidu, Joanne M. Atlee

Natural Language Processing

Frontmatter
Comparing Results of Natural Language Disambiguation Tools with Reports of Manual Reviews of Safety-Related Standards

Methods and tools for detecting and measuring ambiguity in texts have been proposed for years, yet their efficacy is still being studied and improved, encouraged by results in various application fields (requirements, legal documents, interviews, ...). The paper presents a recently started process aimed at validating such methods and tools by applying some of them to a semi-structured data corpus. This corpus contains the results of manual reviews, done by international experts, along with their source texts. The purpose is to check how consistent the results of automated analysis are with the reviewers' reports. The application domain is that of safety-related system/software standards in the railway sector. Thus, if we increase confidence in the tools, we also increase confidence in the correctness of the standards, which in turn impacts conforming products. Care is taken to use the sensitive, unpublished source data (the comments, protected by NDAs) for scientific purposes only, and to keep them reviewer-anonymous before statistical results are produced, while the standards themselves are publicly available texts. The results will also be used to improve the tools, even though much elaboration remains to be carried out: the research is still at its beginning, so a metric for tool evaluation is a goal whose characteristics are only sketched and discussed in the paper.

Isabella Biscoglio, Attilio Ciancabilla, Mario Fusani, Giuseppe Lami, Gianluca Trentanni
Looking Inside the Black Box: Core Semantics Towards Accountability of Artificial Intelligence

Recent advances in artificial intelligence raise a number of concerns. Among the challenges to be addressed by researchers, accountability of artificial intelligence solutions is one of the most critical. This paper focuses on artificial intelligence applications using natural language, investigating whether the core semantics defined for a large-scale natural language processing system could assist in addressing accountability issues. Core semantics aims to obtain a full interpretation of the content of natural language texts, representing both implicit and explicit knowledge, using only 'subj-action-(obj)' structures and causal, temporal, spatial and personal-world links. The first part of the paper offers a summary of the difficulties to be addressed and of the reasons why representing the meaning of a natural language text is relevant for artificial intelligence accountability. In the second part, a proof of concept for the application of such a knowledge representation to support accountability and a detailed example of the analysis obtained with a prototype system named CoreSystem are illustrated. While only preliminary, these results give some new insights and indicate that the provided knowledge representation can be used to support accountability, looking inside the box.

Roberto Garigliano, Luisa Mich
QuOD: An NLP Tool to Improve the Quality of Business Process Descriptions

[Context and Motivation] In real-world organisations, business processes (BPs) are often described by means of natural language (NL) documents. Indeed, although semi-formal graphical notations exist to model BPs, most of the legacy process knowledge, when not tacit, is still conveyed through textual procedures or operational manuals in which the BPs are specified. This is particularly true for public administrations (PAs), in which a large variety of BPs exist (e.g., definition of tenders, front-desk support) that have to be understood and put into practice by civil servants. [Question/problem] Incorrect understanding of the BP descriptions in PAs may cause delays in the delivery of services to citizens or, in some cases, incorrect execution of the BPs. [Principal idea/results] In this paper, we present the development of an NLP-based tool named QuOD (Quality Analyser for Official Documents), designed to detect linguistic defects in BP descriptions and to provide recommendations for improvement. [Contribution] QuOD is the first tool that addresses the problem of identifying NL defects in the BP descriptions of PAs. The tool is available online at http://narwhal.it/quod/index.html .

Alessio Ferrari, Giorgio O. Spagnolo, Antonella Fiscella, Guido Parente

Software Product Lines

Frontmatter
A Decade of Featured Transition Systems

Variability-intensive systems (VIS) form a large and heterogeneous class of systems whose behaviour can be modified by enabling or disabling predefined features. Variability mechanisms allow the adaptation of software to the needs of its users and the environment. However, VIS verification and validation (V&V) is challenging: the combinatorial explosion of the number of possible behaviours and undesired feature interactions are amongst such challenges. To tackle them, Featured Transition Systems (FTS) were proposed a decade ago to model and verify the behaviours of VIS. In an FTS, each transition is annotated with a combination of features determining which variants can execute it. An FTS can thus model all possible behaviours of a given VIS. This compact model enabled us to create efficient V&V algorithms that take advantage of the behaviours shared amongst features, reducing the V&V effort by several orders of magnitude. In this paper, we cover the formalism and its applications, and sketch promising research directions.

Maxime Cordy, Xavier Devroey, Axel Legay, Gilles Perrouin, Andreas Classen, Patrick Heymans, Pierre-Yves Schobbens, Jean-François Raskin
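
The core idea is easy to render concretely: annotate each transition with a feature expression and project per product. The toy vending machine below, reconstructed from scratch for illustration (not the authors' tooling), shows how one compact model yields the behaviour of each variant:

```python
# A minimal FTS sketch: each transition carries a feature expression
# (here, a predicate over the product's feature set); projecting onto a
# product keeps only the transitions it enables.
transitions = [
    ("idle",    "insert_coin", "paid",    lambda f: True),
    ("paid",    "serve_soda",  "serving", lambda f: "Soda" in f),
    ("paid",    "serve_tea",   "serving", lambda f: "Tea" in f),
    ("paid",    "cancel",      "idle",    lambda f: "Cancel" in f),
    ("serving", "take_drink",  "idle",    lambda f: True),
]

def project(fts, product):
    """Behaviour of a single variant: drop transitions its features disable."""
    return [(s, a, t) for (s, a, t, phi) in fts if phi(product)]

basic  = {"Soda"}
deluxe = {"Soda", "Tea", "Cancel"}
print(len(project(transitions, basic)))   # 3: no tea, no cancelling
print(len(project(transitions, deluxe)))  # 5: the full behaviour
```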
Product Line Verification via Modal Meta Model Checking

Modal Meta Model Checking (M3C) is a method and tool supporting meta-level product lining and evolution that comprises both context-free system structure and modal refinement. The underlying Context-Free Modal Transition Systems (CFMTSs) can be regarded as loose specifications of meta models, and modal refinement as a way to increase the specificity of allowed domain-specific languages (DSLs) by constraining the range of allowed syntax specifications. Model checking with M3C allows one to verify properties specified in a branching-time logic for all DSLs of a given level of specificity in one go. The paper illustrates the impact of M3C in an industrial setting where well-formed documents serve as contracts between a provider and its customers, in two steps: it establishes CFMTSs as a formalism to specify product lines of document type definitions (DTDs, or related formalisms like JSON Schema), and it shows how M3C-based product line verification can be used to guarantee that violations of essential well-formedness constraints of a corresponding user document are detected by standard DTD-checkers. The resulting hierarchical product line verification allows Creios GmbH, a service provider for e-commerce systems, to provide a wide range of tailored shop applications whose essential business rules are checked by a standard DTD-checker.

Tim Tegeler, Alnis Murtovi, Markus Frohme, Bernhard Steffen
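
The last step, catching well-formedness violations with a standard DTD-checker, can be illustrated with any off-the-shelf validator. The sketch below uses lxml with an invented DTD and documents; it shows only the checking step, not the CFMTS-based product line machinery:

```python
# Validating documents against a DTD with lxml; DTD and documents invented.
from io import StringIO
from lxml import etree

dtd = etree.DTD(StringIO("""
<!ELEMENT order (customer, item+)>
<!ELEMENT customer (#PCDATA)>
<!ELEMENT item (#PCDATA)>
"""))

valid   = etree.fromstring("<order><customer>A</customer><item>x</item></order>")
invalid = etree.fromstring("<order><item>x</item></order>")  # customer missing

print(dtd.validate(valid))    # True
print(dtd.validate(invalid))  # False: a well-formedness constraint violated
print(dtd.error_log)          # the checker reports which rule was broken
```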
Towards Model Checking Product Lines in the Digital Humanities: An Application to Historical Data

Rapid developments in computing techniques and database systems have aided the digitization of, and access to, various historical (big) data, with significant challenges of analysis and interoperability. The Death and Burial Data, Ireland project aims to build a Big Data interoperability framework, loosely based on the Knowledge Discovery in Databases (KDD) process, to integrate Civil Registration of Death data with other data types collated in Ireland from 1864 to 1922. For our project, we resort to a Document Type Definition (DTD) product line to represent and manage various representations and enrichments of the data. Well-formed documents serve as contracts between a provider (of the data set) and its customers (the researchers that consult them). We adopt Context-Free Modal Transition Systems as a formalism to specify product lines of DTDs. The goal is then to proceed to product line verification using context-free model checking techniques, specifically the M3C checker of [14], to ensure that they are fit for purpose. The goal is to later implement and manage the corresponding family of data models and processes in the DIME framework, leveraging its flexible data management layer to define and efficiently manage the interoperable historical data framework for future use. The resulting hierarchical product line verification will allow our technical platform to act as a high-quality service provider for digital humanities researchers, providing them with a wide range of tailored applications implementing the KDD process, whose essential business rules are easily checked by a standard DTD checker.

Ciara Breathnach, Najhan M. Ibrahim, Stuart Clancy, Tiziana Margaria
Variability Modelling and Analysis During 30 Years

Variability modelling and analysis are among the most important activities in software engineering in general and in software product line engineering in particular. In 1990, the FODA report marked a revolution in the importance of modelling and analysing variability. In 2020, 30 years of variability modelling and analysis will be celebrated. In this paper, a short overview of the history and importance of variability modelling and analysis is given, in connection with that anniversary and on the occasion of Stefania Gnesi's retirement. She was part of this amazing history.

David Benavides

Formal Verification

Frontmatter
A Systematic Approach to Programming and Verifying Attribute-Based Communication Systems

A methodology is presented for the systematic development of systems of many components that interact by relying on predicates over the attributes that they mutually expose. The starting point is a novel process calculus, AbC (for Attribute-based Communication), introduced for modelling collective adaptive systems. It is shown how to refine the model by introducing a translator from AbC into UML-like state machines that can be analyzed by UMC. In order to execute the specification, another translator is introduced that maps AbC terms into ABEL, a domain-specific framework that offers faithful AbC-style programming constructs built on top of Erlang. It is also shown how the proposed methodology can be used to assess relevant properties of systems and to automatically obtain an executable program for a non-trivial case study.

Rocco De Nicola, Tan Duong, Omar Inverso, Franco Mazzanti
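
The communication style at the heart of AbC can be sketched in a few lines: a send targets all components whose attributes satisfy a predicate, with no component names involved. The snippet below mimics only this addressing mechanism; the calculus itself (and the ABEL framework) is considerably richer:

```python
# Attribute-based send, sketched: a message is delivered to every
# component whose attribute environment satisfies the sender's predicate.
class Component:
    def __init__(self, name, **attributes):
        self.name, self.attributes = name, attributes
        self.inbox = []

def asend(components, predicate, message):
    """Deliver `message` to all components satisfying `predicate`."""
    for c in components:
        if predicate(c.attributes):
            c.inbox.append(message)

robots = [
    Component("r1", role="explorer", battery=80),
    Component("r2", role="rescuer",  battery=15),
    Component("r3", role="explorer", battery=40),
]

# "all explorers with enough battery": addressing by attributes, not names
asend(robots, lambda a: a["role"] == "explorer" and a["battery"] > 30,
      "goto(zone7)")
print([c.name for c in robots if c.inbox])  # ['r1', 'r3']
```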
On the Prediction of Smart Contracts’ Behaviours

Smart contracts are pieces of software stored on the blockchain that control the transfer of assets between parties under certain conditions. In this paper we analyze the behaviour of smart contracts and their interaction with external actors seeking to maximize objective functions. We define a core language of programs with a minimal set of smart contract primitives and describe the whole system as a parallel composition of smart contracts and users. We then express the system behaviour as a first-order logic formula in Presburger arithmetic and study the maximum profit for each actor by solving arithmetic constraints.

Cosimo Laneve, Claudio Sacerdoti Coen, Adele Veschetti
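
The final constraint-solving step can be sketched with an off-the-shelf solver. The toy below maximises a user's profit against a contract reduced to linear integer constraints, using z3's optimiser; the contract and figures are invented, and the paper derives such constraints from its core language rather than writing them by hand:

```python
# Maximising an actor's profit under Presburger-style linear constraints.
from z3 import Ints, Optimize, sat

deposit, withdraw = Ints("deposit withdraw")
opt = Optimize()
opt.add(deposit >= 0, withdraw >= 0)
opt.add(withdraw <= deposit)   # the contract never overdraws
opt.add(deposit <= 100)        # the user's budget
profit = withdraw - deposit
opt.maximize(profit)
if opt.check() == sat:
    m = opt.model()
    # optimum is 0 here: under these rules the user cannot gain
    print(m[deposit], m[withdraw], m.eval(profit))
```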
Hunting Superfluous Locks with Model Checking

Parallelization of existing sequential programs to increase their performance and exploit recent multi- and many-core architectures is a challenging but inevitable effort. One increasingly popular parallelization approach is based on OpenMP, which enables the designer to annotate a sequential program with constructs specifying the parallel execution of code blocks. These constructs are then interpreted by the OpenMP compiler and runtime, which assign blocks to threads running on a parallel architecture. Although this scheme is very flexible and not (very) intrusive, it does not prevent the occurrence of synchronization errors (e.g., deadlocks) or data races on shared variables. In this paper, we propose an iterative method to assist OpenMP parallelization using formal methods and verification. In each iteration, potential data races are identified by applying to the OpenMP program a lockset analysis, which computes the set of shared variables that potentially need to be protected by locks. To avoid the insertion of superfluous locks, an abstract, action-based formal model of the OpenMP program is extracted and analyzed using the ACTL on-the-fly model checker of the CADP formal verification toolbox. We describe the method, compare it with existing work, and illustrate its practical use.

Viet-Anh Nguyen, Wendelin Serwe, Radu Mateescu, Eric Jenn
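
A lockset analysis of the kind mentioned can be sketched in a few lines: intersect, per shared variable, the sets of locks held at its accesses. An empty intersection flags a potential race, while a lock that never survives the intersection is a candidate superfluous lock. The trace format below is invented for illustration and is not the paper's tooling:

```python
# Eraser-style lockset refinement over an access trace.
def lockset_analysis(trace):
    candidates = {}  # variable -> locks held at every access so far
    for variable, locks_held in trace:
        if variable not in candidates:
            candidates[variable] = set(locks_held)
        else:
            candidates[variable] &= set(locks_held)
    return candidates

trace = [
    ("counter", {"L1"}),        # thread 1: access under L1
    ("counter", {"L1", "L2"}),  # thread 2: access under L1 and L2
    ("flag",    {"L2"}),
    ("flag",    set()),         # unprotected access
]
for var, locks in lockset_analysis(trace).items():
    status = "potential race" if not locks else f"protected by {sorted(locks)}"
    print(var, "->", status)
# counter -> protected by ['L1']   (L2 is superfluous for counter)
# flag    -> potential race
```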
Formal Verification of Railway Timetables - Using the UPPAAL Model Checker

This paper considers the challenge of validating railway timetables and investigates how formal methods can be used for that purpose. The paper presents a re-configurable formal model which can be configured with a timetable for a railway network, properties of that network, and various timetabling parameters (such as station and line capacities, headways, and minimum running times) constraining the allowed train behaviour. The formal model describes the system behaviour of trains driving according to the given railway timetable. Model checking can then be used to check that driving according to the timetable does not lead to illegal system states. The method has successfully been applied to a real-world case study: a timetable for 12 trains at Nærumbanen in Denmark.

Anne E. Haxthausen, Kristian Hede
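
Not the chapter's UPPAAL model, but a minimal illustration of the kind of illegal state the model checker hunts for: the sketch below scans a toy timetable directly for station-capacity violations. Data and capacities are invented:

```python
# Detecting station-capacity violations in a toy timetable.
def capacity_violations(timetable, capacities):
    """timetable: (train, station, arrival, departure) tuples, in minutes."""
    events = []
    for train, station, arr, dep in timetable:
        events += [(arr, 1, station), (dep, -1, station)]
    occupancy, violations = {}, []
    # departures (-1) sort before arrivals (+1) at equal times
    for time, delta, station in sorted(events):
        occupancy[station] = occupancy.get(station, 0) + delta
        if occupancy[station] > capacities[station]:
            violations.append((time, station))
    return violations

timetable = [
    ("IC1", "Naerum", 10, 14),
    ("IC2", "Naerum", 12, 16),  # overlaps IC1 at a one-track station
]
print(capacity_violations(timetable, {"Naerum": 1}))  # [(12, 'Naerum')]
```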
An Axiomatization of Strong Distribution Bisimulation for a Language with a Parallel Operator and Probabilistic Choice

In the setting of a simple process language featuring non-deterministic choice and a parallel operator on the one hand and probabilistic choice on the other hand, we propose an axiomatization capturing strong distribution bisimulation. Contrary to other process equivalences for probabilistic process languages, in this paper distributions rather than states are the leading ingredients for building the semantics and the accompanying equational theory, for which we establish soundness and completeness.

Jan Friso Groote, Erik P. de Vink

Applications

Frontmatter
Enabling Auditing of Smart Contracts Through Process Mining

The auditing sector is acquiring a strong interest in the diffusion of blockchain technologies. Such technologies guarantee the persistence and authenticity of transactions related to the execution of a contract, and thereby enable auditing activities. In particular, they make it possible to check whether observed sequences of transactions are in line with the expected ones. In other words, auditing blockchain transactions allows users to check whether the smart contract fits the expectations of the designers, who, for instance, could check whether a given activity is performed or whether it satisfies a given set of properties. In such a setting we propose a methodology that exploits process mining techniques to evaluate smart contracts and to support the work of the auditor. Models resulting from the mining can be used to diagnose whether the deployed application works as expected, and possibly to continuously improve it. We illustrate the use of our approach on a small, but real, case study.

Flavio Corradini, Fausto Marcantoni, Andrea Morichetta, Andrea Polini, Barbara Re, Massimiliano Sampaolo
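
A minimal conformance check in the spirit described: replay observed transaction sequences against an expected process model, here reduced to a plain transition relation, and report the first deviation. The model and traces are invented; in the actual approach the model would be mined from blockchain transaction logs:

```python
# Replaying observed activity sequences against an expected process model.
EXPECTED = {  # activity -> activities allowed next
    "start": {"bid"},
    "bid":   {"bid", "close"},
    "close": {"payout"},
}

def replay(trace):
    for current, nxt in zip(trace, trace[1:]):
        if nxt not in EXPECTED.get(current, set()):
            return f"deviation: {current!r} -> {nxt!r}"
    return "conforms"

print(replay(["start", "bid", "bid", "close", "payout"]))  # conforms
print(replay(["start", "close", "payout"]))  # deviation: 'start' -> 'close'
```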
A Refined Framework for Model-Based Assessment of Energy Consumption in the Railway Sector

Awareness of and efforts to moderate energy consumption, desirable from both economic and environmental perspectives, are nowadays increasingly pursued. However, when critical sectors are addressed, energy saving should be tackled cautiously, so as not to impair the stringent dependability properties such contexts typically require. This is the case for the railway transportation system, which is the critical infrastructure this paper focuses on. For this system category, the typical attitude has been to neglect the efficient usage of energy sources in order not to put dependability in danger. New directives, at both national and international level, are going to change this way of thinking. Our study intends to be a useful support to careful energy consumption. In particular, a refined stochastic modeling framework is offered, tailored to the railroad switch heating system, through which analyses can be performed to understand the sophisticated dynamics between the system (both its cyber and physical components) and the surrounding weather conditions.

Silvano Chiaradonna, Felicita Di Giandomenico, Giulio Masetti, Davide Basile
Modelling of Railway Signalling System Requirements by Controlled Natural Languages: A Case Study

The railway sector has been a source of inspiration for generations of researchers challenged to develop models and tools to analyze safety and reliability. Threats used to come mainly from within, due to occasional faults in hardware components. With the advent of smart trains, the railway industry is venturing into cybersecurity, and the sector will become more and more compelled to protect assets from threats against information and communication technology. We discuss this revolution at large, while speculating that instruments developed for security requirements engineering can come in support of the railway sector. We explore the use of one of them: the Controlled Natural Language for Data Sharing Agreement (CNL4DSA). We use it to formalize a few exemplifying signal management system requirements. Since CNL4DSA enables the automatic generation of enforceable access control policies, our exercise is preparatory to implementing the security-by-design principle in railway signalling management engineering.

Gabriele Lenzini, Marinella Petrocchi
Single-Step and Asymptotic Mutual Information in Bipartite Boolean Nets

In this paper we contrast two fundamentally different ways to approach the analysis of transition system behaviours. Both methods refer to the (finite) global state transition graph; but while method A, familiar to software system designers and process algebraists, deals with execution paths of virtually unbounded length, typically starting from a precise initial state, method B, associated with counterfactual reasoning, looks at single-step evolutions starting from all conceivable system states. Among various possible state transition models we pick boolean nets, a generalisation of cellular automata in which all nodes fire synchronously. Our nets are composed of parts P and Q that interact by shared variables. At first we adopt approach B and a simple information-theoretic measure, the mutual information $M(y_P, y_Q)$, for detecting the degree of coupling between the two components after one transition step from the uniform distribution of all global states. Then we consider an asymptotic version $M(y_P^*, y_Q^*)$ of mutual information, somewhat mixing methods A and B, and illustrate a technique for obtaining accurate approximations of $M(y_P^*, y_Q^*)$ based on the attractors of the global graph.

Tommaso Bolognesi
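
The single-step measure is straightforward to compute for a toy net: start from the uniform distribution over global states, apply one synchronous step, and measure the mutual information between the two components' next states. The update rules below are invented; the chapter's nets and the asymptotic variant $M(y_P^*, y_Q^*)$ are richer:

```python
# One-step mutual information M(y_P, y_Q) for a toy bipartite boolean net.
from collections import Counter
from itertools import product
from math import log2

def step(p, q):              # p, q: one boolean each, shared by both parts
    return (p ^ q, p and q)  # next values of P's bit and Q's bit

# uniform distribution over the 4 global states, one synchronous step
joint = Counter(step(p, q) for p, q in product([0, 1], repeat=2))
n = sum(joint.values())
pj = {s: c / n for s, c in joint.items()}

pP, pQ = Counter(), Counter()  # marginals of the successor state
for (yp, yq), pr in pj.items():
    pP[yp] += pr
    pQ[yq] += pr

M = sum(pr * log2(pr / (pP[yp] * pQ[yq])) for (yp, yq), pr in pj.items())
print(f"M(y_P, y_Q) = {M:.3f} bits")  # ~0.311: the parts are coupled
```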
Application of Model Checking to Fault Tolerance Analysis

A basic concept in modeling fault-tolerant systems is that anticipated faults, being obviously outside of our control, may or may not occur. A fault-tolerant system design can be proved to behave correctly under a given fault hypothesis by proving the observational equivalence between the system design specification and the fault-free system specification. Additionally, model checking of a temporal logic formula which gives an abstract notion of correct behavior can be applied to verify the correctness of the design. Another issue that must be considered in fault tolerance is fault detection, since the existence of undetectable faults makes the system more vulnerable. The use of model checking and temporal logic gives opportunities to better analyze the system behavior in the presence of faults and to identify undetectable faults.

Cinzia Bernardeschi, Andrea Domenici
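
Properties of the kind the chapter discusses can be written in a CTL-like notation, as in the illustrative formulas below (not taken from the chapter): the first states that every fault is eventually recovered from, while a violation of the second exhibits an undetectable fault, i.e., a fault after which no continuation reaches detection.

```latex
% Illustrative fault-tolerance and fault-detectability properties
% in CTL-like notation (invented examples, not the chapter's formulas).
\[
  \mathbf{AG}\,(\mathit{fault} \Rightarrow \mathbf{AF}\,\mathit{recovered})
  \qquad\qquad
  \mathbf{AG}\,(\mathit{fault} \Rightarrow \mathbf{EF}\,\mathit{detected})
\]
```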
How Formal Methods Can Contribute to 5G Networks

Communication networks have been one of the main drivers of formal methods since the 1970s. The dominant role of software in the new 5G mobile communication networks will once again foster a relevant application area for formal models and techniques like model checking, model-based testing and runtime verification. This chapter introduces some of these novel application areas, specifically for Software-Defined Networks (SDN) and Network Function Virtualization (NFV). Our proposals focus on automated methods to create formal models that satisfy a given set of requirements for SDN and NFV.

María-del-Mar Gallardo, Francisco Luque-Schempp, Pedro Merino-Gómez, Laura Panizo
Backmatter
Metadata
Title
From Software Engineering to Formal Methods and Tools, and Back
edited by
Maurice H. ter Beek
Alessandro Fantechi
Laura Semini
Copyright Year
2019
Electronic ISBN
978-3-030-30985-5
Print ISBN
978-3-030-30984-8
DOI
https://doi.org/10.1007/978-3-030-30985-5
